Linear system of equations with non-trivial solutions

converge

Homework Statement



Let

(**)


\begin{matrix}
a_{11}x_1 & + & \ldots & + & a_{1n}x_n & = 0 \\
\vdots & & & & \vdots\\
a_{m1}x_1 & + & \ldots & + & a_{mn}x_n & = 0 \\
\end{matrix}


be a system of m linear equations in n unknowns, and assume that
n > m. Then the system has a non-trivial solution.

Proof:

Consider first the case of one equation in n unknowns, n > 1:

a_{1}x_1 + \ldots + a_{n}x_n = 0
If all coefficients a_1, \ldots, a_n are equal to 0, then any value of the variables
will be a solution, and a non-trivial solution certainly exists. Suppose
that some coefficient a_i \neq 0. After renumbering the variables and the
coefficients, we may assume that it is a_1. Then we give x_2, \ldots, x_n arbitrary
values, for instance we let x_2 = \ldots = x_n = 1, and solve for x_1, letting

x_1 = \frac {-1}{a_1} (a_2 + \ldots + a_n)

In that manner, we obtain a non-trivial solution for our system of equations.
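
As a quick concrete check, with numbers of my own choosing rather than from the text: for the single equation

2x_1 + 3x_2 - x_3 = 0

we have a_1 = 2 \neq 0, so setting x_2 = x_3 = 1 gives

x_1 = \frac {-1}{2} (3 + (-1)) = -1,

and (-1, 1, 1) is a non-trivial solution.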
Let us now assume that our theorem is true for a system of m - 1
equations in more than m - 1 unknowns. We shall prove that it is true
for m equations in n unknowns when n > m. We consider the system
(**).

If all coefficients a_{ij} are equal to 0, we can give any non-zero value
to our variables to obtain a non-trivial solution. If some coefficient is not equal to 0,
then after renumbering the equations and the variables, we may assume
that it is a_{11}. We shall subtract a multiple of the first equation from the
others to eliminate x_1. Namely, we consider the system of equations

(A_2 - \frac {a_{21}}{a_{11}} A_1) \cdot X = 0

\vdots

(A_m - \frac {a_{m1}}{a_{11}} A_1) \cdot X = 0

which can also be written in the form

(***)

A_2 \cdot X - \frac {a_{21}}{a_{11}} A_1 \cdot X = 0

\vdots

A_m \cdot X - \frac {a_{m1}}{a_{11}} A_1 \cdot X = 0

In this system, the coefficient of x_1 is equal to 0. Hence we may view
(***) as a system of m - 1 equations in n - 1 unknowns, and we have
n-1 > m-1.
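
As a concrete illustration of this reduction (again with numbers of my own choosing): for the 2 \times 3 system

x_1 + 2x_2 + 3x_3 = 0

2x_1 + 3x_2 + 4x_3 = 0

we subtract \frac {a_{21}}{a_{11}} = 2 times the first equation from the second and are left with the single equation -x_2 - 2x_3 = 0, in which x_1 no longer appears: one equation in the two unknowns x_2, x_3, and 2 > 1.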

According to our assumption, we can find a non-trivial solution
(x_2, \ldots, x_n) for this system. We can then solve for x_1 in the first equation,
namely

x_1 = \frac {-1}{a_{11}} (a_{12}x_2 + \ldots + a_{1n}x_n).

In that way, we find a solution of A_1 \cdot X = 0. But according to (***), we
have


A_i \cdot X = \frac {a_{i1}}{a_{11}} A_1 \cdot X

for i = 2, \ldots ,m. Hence A_i \cdot X = 0 for i = 2, \ldots ,m, and therefore we have found a non-trivial solution to our original system (**).

The argument we have just given allows us to proceed stepwise from one equation to two equations, then from two to three, and so forth.
This concludes the proof.
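
To finish the small illustration from above (my own numbers, used only to trace the steps): the reduced equation -x_2 - 2x_3 = 0 has the non-trivial solution x_2 = -2, x_3 = 1, and back-substituting into the first equation gives

x_1 = \frac {-1}{1} (2 \cdot (-2) + 3 \cdot 1) = 1,

and one checks directly that (1, -2, 1) satisfies both original equations.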


Homework Equations


The Attempt at a Solution




"We shall subtract a multiple of the first equation from the others to eliminate x_1."

Why do we eliminate x_1 here? I mean we do that when solving non-general systems of equations. But here it looks like we could obtain the non-trivial solutions even with (A_2 - A_1) \cdot X = 0. Is it because subtracting the first equation from the second without eliminating any variables would just be a random, arbitrary operation?

Also, when solving the general system of equations, why do we eliminate only 1 variable, namely, x_1, as opposed to non-general cases where we can usually eliminate more than 1 variable? I can see how eliminating only x_1 brings us to the solution in the proof, but is that the only reason?

Thanks.
 
It's a proof by mathematical induction. That strategy involves (a) showing that a proposition is true when m = 1, and (b) showing that IF the proposition is true for m - 1, THEN it is also true for its successor, m. Together, (a) and (b) imply that the proposition is true for all positive integers m.

In your problem, the reason why only one unknown is being eliminated is that exactly one equation is also being eliminated: the intent is to subtract a multiple of the 1st equation from EACH of the others, so that one has "m - 1" equations in "n - 1" unknowns. This is the manner of asserting (b) above.
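
If it helps to see the bookkeeping laid out explicitly, here is a minimal numerical sketch of that reduction in Python with numpy (my own illustration; the function name and the sample numbers are just for this example, not anything from the text). It eliminates one equation together with one unknown, applies the same idea to the remaining (m - 1) x (n - 1) system, and then back-substitutes for x_1:

import numpy as np

def nontrivial_solution(A):
    # Return some nonzero x with A @ x = 0, for an m x n array A with n > m,
    # following the elimination argument step by step.
    A = np.array(A, dtype=float)       # work on a copy
    m, n = A.shape
    assert n > m
    if not A.any():                    # all coefficients zero: any nonzero vector works
        return np.ones(n)

    # "Renumber" equations and variables so that the (1,1) coefficient is nonzero.
    i, j = np.argwhere(A != 0)[0]
    A[[0, i]] = A[[i, 0]]              # swap equations (rows)
    A[:, [0, j]] = A[:, [j, 0]]        # swap variables (columns)

    if m == 1:
        x = np.ones(n)                 # one equation: set x_2 = ... = x_n = 1
        x[0] = -A[0, 1:].sum() / A[0, 0]
    else:
        # Subtract multiples of the first equation to eliminate x_1:
        # m - 1 equations in the n - 1 unknowns x_2, ..., x_n remain.
        B = A[1:, 1:] - np.outer(A[1:, 0] / A[0, 0], A[0, 1:])
        y = nontrivial_solution(B)     # induction hypothesis
        x1 = -(A[0, 1:] @ y) / A[0, 0] # solve the first equation for x_1
        x = np.concatenate(([x1], y))

    x[[0, j]] = x[[j, 0]]              # undo the variable renumbering
    return x

A = [[1, 2, 3],
     [2, 3, 4]]                        # 2 equations, 3 unknowns
x = nontrivial_solution(A)
print(x, np.array(A) @ x)              # x is nonzero, A @ x is (numerically) zero

For this 2 x 3 system it returns (1, -2, 1), matching what the elimination gives by hand.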

If you're not familiar with mathematical induction, or with Fermat's method of "infinite descent," read about them elsewhere. The style of the proof you presented is more like the Fermat style of proof than the typical inductive proof.
 
I am familiar with induction from elementary set theory. This is my first brush with Linear Algebra, and this proof is the first of its kind I've seen. Like you said, it doesn't look very much like a typical induction proof. I am glad you mentioned Fermat's method of "infinite descent". Never heard of it. Will look it up.

I appreciate your explanation of why we eliminate only one variable. Spent the whole day asking everywhere. Didn't get any answers.

I think I see where my confusion stems from. If we subtract one equation from another, we are going to have only one equation as a result, not two. I somehow missed this by focusing too much on the elimination of x_1. So, say, we have 5 equations in 6 unknowns. Removing one unknown by subtraction also removes one equation.

Cool. Thank you.
 