Proving homogeneous systems of linear equations have infinite solutions


Homework Help Overview

The discussion revolves around proving that a homogeneous system of linear equations can have infinite solutions, particularly focusing on the implications of the determinant of the coefficient matrix being zero. The subject area is linear algebra, specifically dealing with systems of equations and their solutions.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants explore the implications of having two distinct solutions to a linear system, questioning how this leads to a homogeneous system.
  • There is discussion of the relationship between the determinant of the coefficient matrix and the existence of infinitely many solutions.
  • Some participants suggest examining the kernel of the coefficient matrix and its implications for the solution set.

Discussion Status

Participants are actively engaging with the problem, offering insights and partial guidance on how to approach the proof. There is a recognition of the need to generalize the findings to larger systems, and some participants are considering abstract proofs involving vector spaces and dimensions.

Contextual Notes

There are indications of uncertainty regarding how to symbolically express generalizations and the conditions under which the determinant leads to infinite solutions. Participants are also navigating the constraints of the homework requirements and the need for clarity in their proofs.

Elzair
Proving a linear system of equations cannot have a finite number of solutions greater than one

Homework Statement

Prove that the number of solutions to a linear system cannot be a finite number larger than one. Provide either a general proof or a proof for a system with two equations and two unknowns.

Homework Equations

It is mostly simple algebra.

The Attempt at a Solution

Assume the following:
a_{11}x_{1}+a_{12}x_{2}=b_{1}
a_{21}x_{1}+a_{22}x_{2}=b_{2}

There exist two distinct solutions to the system: {c_{1}, c_{2}} and {d_{1}, d_{2}}, with c_{1} \neq d_{1} or c_{2} \neq d_{2} (the solutions differ in at least one component).

.:.

a_{11}c_{1}+a_{12}c_{2}=b_{1}=a_{11}d_{1}+a_{12}d_{2}
a_{21}c_{1}+a_{22}c_{2}=b_{2}=a_{21}d_{1}+a_{22}d_{2}

=>

a_{11} \left( c_{1}-d_{1} \right) + a_{12} \left( c_{2}-d_{2} \right) =0
a_{21} \left( c_{1}-d_{1} \right) + a_{22} \left( c_{2}-d_{2} \right) =0

Therefore, the system is now homogeneous. It should be easy enough to generalize this to an n×n system of equations. I could also generalize it to any number k of solutions by showing that subtracting any two of the potential solutions from each other yields a solution of the homogeneous system.
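As a quick numerical sanity check of this step (not part of the proof itself), one can verify with a concrete, hypothetical singular matrix that the difference of two solutions of Ax = b solves the homogeneous system. The matrix and vectors below are illustrative choices, not from the problem:

```python
import numpy as np

# Hypothetical singular 2x2 system A x = b with two distinct solutions c and d.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # det(A) = 0, so multiple solutions are possible
c = np.array([1.0, 1.0])     # one solution
d = np.array([3.0, 0.0])     # another solution of the same system
b = A @ c

# Both vectors solve A x = b ...
assert np.allclose(A @ c, b)
assert np.allclose(A @ d, b)

# ... so their difference f = c - d solves the homogeneous system A f = 0.
f = c - d
print(A @ f)   # prints [0. 0.]
```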

Suppose f_{1}=c_{1}-d_{1}, f_{2}=c_{2}-d_{2}

.:.

a_{11}f_{1}+a_{12}f_{2}=0
a_{21}f_{1}+a_{22}f_{2}=0

f_{1}=-\frac{a_{12}}{a_{11}}f_{2} (assuming a_{11} \neq 0)
a_{21} \left( \frac{-a_{12}}{a_{11}} f_{2} \right) + a_{22}f_{2} = \left( a_{22} - \frac{a_{12}a_{21}}{a_{11}} \right) f_{2} = 0

Now, there are two possibilities:

1.) f_{2}=f_{1}=0, ergo c_{1}=d_{1} and c_{2}=d_{2}, which CONTRADICTS the assumption that the solutions are distinct

2.) a_{11}a_{22} - a_{21}a_{12} = 0, i.e. det(A) = 0

Now, how should I prove that a homogeneous linear system of equations whose coefficient matrix has a determinant of zero has infinite solutions?

Furthermore, how should I go about generalizing the problem to an arbitrary number of equations, unknowns, and solutions? It should be easy to show that the difference of any two of k given solutions to an n×n linear system results in a solution of the homogeneous system. However, I am unsure how to state that symbolically, and I am a little unclear on how to generalize the proof that the determinant of the coefficient matrix must be zero.
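The claim in question, that det(A) = 0 goes hand in hand with a one-parameter family of homogeneous solutions, can be illustrated numerically. This is a sketch, not a proof; the singular matrix below is a hypothetical example, and it assumes a_{11} ≠ 0 so that equation (1) can be used to pick f_{1} from a freely chosen f_{2}:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(A), 0.0)    # singular coefficient matrix

a11, a12 = A[0]
# With a11 != 0, choose f2 freely; equation (1) then fixes f1 = -(a12/a11)*f2.
solutions = []
for f2 in [-2.0, 0.5, 3.0, 10.0]:           # any real value works
    f = np.array([-(a12 / a11) * f2, f2])
    assert np.allclose(A @ f, 0.0)          # every choice solves A f = 0
    solutions.append(f)
```

Since f2 ranges over all of R, this parametrization produces infinitely many distinct solutions.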
 
You're on the right track but it helps to look at the situation strictly from a linear algebra point of view:

The idea is that if your coefficient matrix is A, and Ax=b has two distinct solutions for x, then A must have a non-trivial kernel (Ay=0 is solved by vectors other than 0). This is pretty much what you did here, but it can be seen more easily by noting that if x and w are both solutions:

Ax=b
Aw=b
0=b-b=Ax-Aw=A(x-w)

and x - w is not zero.

And now the question is: why are there infinitely many vectors y with Ay=0 if there is one non-zero one?
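One way to see the answer is linearity: if Ay = 0, then A(ty) = t(Ay) = 0 for every scalar t. A minimal numerical illustration of this, using the same hypothetical singular matrix as an example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
y = np.array([-2.0, 1.0])        # one non-zero vector with A y = 0
assert np.allclose(A @ y, 0.0)

# Linearity gives A(t*y) = t * (A y) = 0 for every scalar t,
# so a single non-zero kernel vector already yields infinitely many solutions.
for t in np.linspace(-5.0, 5.0, 11):
    assert np.allclose(A @ (t * y), 0.0)
```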
 
Elzair said:
f_{1}=-\frac{a_{12}}{a_{11}}f_{2} (1)
a_{21} \left( \frac{-a_{12}}{a_{11}} f_{2} \right) + a_{22}f_{2} = \left( a_{22} - \frac{a_{12}a_{21}}{a_{11}} \right) f_{2} = 0 (2)

[...]

2.) a_{11}a_{22} - a_{21}a_{12} = 0, i.e. det(A) = 0

Now, how should I prove that a homogeneous linear system of equations whose coefficient matrix has a determinant of zero has infinite solutions?

I don't have time to respond to all of this now, so I will leave that to someone else or see if I can reply tomorrow.
But let me just help you out with this little bit, maybe it will inspire you to solve the rest.

Suppose that the determinant is indeed zero. Then equation (2) above is automatically satisfied (I'll leave the case where a11 = 0 to you).
But equation (2) is what was supposed to fix f2, so f2 is now free to be any number you like. Then equation (1) gives a corresponding value for f1. So the number of solutions is equal to the number of real numbers (more precisely, the cardinality of the solution set equals that of R), which is (uncountably) infinite.

By the way, what is often done in linear algebra is to write the system of equations in matrix form. Let f = (f1, f2) be the solution vector, b = (b1, b2) the right-hand side, and A the matrix of coefficients. Then the equations can be written as Af = b. The solution is obviously given by getting rid of the matrix, f = A^{-1}b, of course under the condition that A^{-1} exists (which relates to the determinant how? *wink*)
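The matrix-form viewpoint can be checked numerically. The sketch below uses a hypothetical invertible matrix: when det(A) ≠ 0, the inverse exists and the solution of Af = b is unique (in practice one uses a solver rather than forming the inverse explicitly):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # det(A) = 5, so A is invertible
b = np.array([5.0, 10.0])

# A^{-1} exists exactly when det(A) != 0; then A f = b has a unique solution.
assert not np.isclose(np.linalg.det(A), 0.0)
f = np.linalg.solve(A, b)        # preferred over explicitly computing inv(A)
assert np.allclose(A @ f, b)
assert np.allclose(f, np.linalg.inv(A) @ b)   # same answer as f = A^{-1} b
```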
 
A more general, more abstract proof is to show that the set of all solutions to Ax = 0 (for A a linear transformation or matrix) forms a subspace of the domain of A.

Specifically, if u and v both satisfy Au= 0 and Av= 0, then A(\alpha u+ \beta v)= \alpha Au+ \beta Av= 0 for all scalars \alpha and \beta.

It then follows that this subspace either
1) has dimension 0, i.e. contains only the 0 vector, or
2) has dimension greater than 0, in which case it contains infinitely many vectors.
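The closure property above can be spot-checked numerically. This is only an illustration with hypothetical kernel vectors of an example singular matrix, not the abstract proof itself:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # singular example matrix
u = np.array([-2.0, 1.0])        # a kernel vector: A u = 0
v = np.array([-4.0, 2.0])        # another kernel vector (here a multiple of u)
assert np.allclose(A @ u, 0.0) and np.allclose(A @ v, 0.0)

# Closure under linear combinations:
# A(alpha*u + beta*v) = alpha*(A u) + beta*(A v) = 0 for all scalars.
for alpha, beta in [(1.0, 1.0), (-3.0, 0.5), (2.5, -7.0)]:
    assert np.allclose(A @ (alpha * u + beta * v), 0.0)
```

So as soon as the kernel contains one non-zero vector, it is a subspace of dimension at least 1 and therefore contains infinitely many vectors.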
 
