Infinite solutions to homogeneous system?

SUMMARY

The discussion clarifies that a homogeneous system of n linear equations in m unknowns, where m > n, always has infinitely many solutions. Row reduction can produce at most n pivots, so at least m − n variables remain free. The theorem is supported by geometric interpretations, emphasizing that a unique solution arises only when the number of independent equations equals the number of unknowns. The matrix form Ax = b further shows that a homogeneous system with a non-invertible coefficient matrix must have infinitely many solutions.

PREREQUISITES
  • Understanding of linear algebra concepts, particularly homogeneous systems of equations.
  • Familiarity with Gaussian elimination and matrix row reduction techniques.
  • Knowledge of vector spaces and linear dependence.
  • Basic understanding of matrix operations, including determinants and invertibility.
NEXT STEPS
  • Study the implications of linear dependence in vector spaces.
  • Learn about the rank-nullity theorem in linear algebra.
  • Explore the geometric interpretations of linear equations in higher dimensions.
  • Investigate the properties of square and non-square matrices in relation to solutions of linear systems.
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators teaching concepts related to systems of equations and matrix theory.

hivesaeed4
Could someone explain the following theorem to me:

Given a homogeneous system of n linear equations in m unknowns, if m > n (i.e. there are more unknowns than equations), then there are infinitely many solutions to the system.
 
Hey hivesaeed4.

Think about when you have a unique solution (and the system is not inconsistent): for a unique solution you need as many independent equations as you have unknowns.

If n > m (more equations than unknowns), then as long as the system is not inconsistent, some of the rows will reduce to zero rows when you do row reduction (e.g. Gaussian elimination), since those rows are linearly dependent on the others.

But if n < m (fewer equations than unknowns), you can never get a unique solution: row reduction leaves at most n pivots for m columns, so at least m − n of the unknowns are free (you need at least as many independent equations as unknowns, n = m, for uniqueness).

So you introduce a parameter for each free variable and express the full solution set in terms of those parameters.
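As a concrete check (a minimal sketch using sympy; the example system is my own, not from the thread), row-reducing a homogeneous system with more unknowns than equations leaves a non-pivot column, i.e. a free variable:

```python
import sympy as sp

# 2 equations in 3 unknowns:  x + y + z = 0  and  x - y + 2z = 0
A = sp.Matrix([[1, 1, 1],
               [1, -1, 2]])

rref, pivots = A.rref()
print(rref)     # reduced row echelon form: at most 2 pivots for 3 columns
print(pivots)   # pivot columns (0, 1): column 2 has no pivot, so z is free

# Every multiple of this null-space basis vector solves Ax = 0,
# so the homogeneous system has infinitely many solutions.
print(A.nullspace())
```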

Geometrically, if you think of each equation as a hyperplane in m-dimensional space, then a unique solution means all of the hyperplanes intersect in a single point.

Think about the 3D case for the moment: if two planes intersect, then either the two planes are equal (so the intersection is the whole plane) or the intersection is a line.

If you add one more plane that is not linearly dependent on the other two, the line of intersection reduces to a point, and a point is a unique solution.

So until the number of independent equations matches the number of unknowns, we are in a situation with many solutions instead of one, and this is the visual explanation of why the statement you are wondering about holds.
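A quick way to see this 3D picture (a sketch using sympy; the plane equations are my own illustration, not from the thread): two independent planes through the origin meet in a line, and a third independent plane cuts that line down to the single point at the origin.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Two independent planes through the origin: intersection is a line.
two_planes = [sp.Eq(x + y + z, 0), sp.Eq(x - y, 0)]
print(sp.solve(two_planes, [x, y, z], dict=True))
# -> [{x: -z/2, y: -z/2}], i.e. a line parameterized by z

# Add a third independent plane: the line collapses to a single point.
three_planes = two_planes + [sp.Eq(x + 2*y - z, 0)]
print(sp.solve(three_planes, [x, y, z], dict=True))
# -> [{x: 0, y: 0, z: 0}], the unique solution
```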
 
We can, of course, always write a system of equations as a matrix problem of the form Ax = b. In particular, if A is a square matrix, mapping, say, R^n to R^n, there are three possibilities for the equation Ax = b:
1) the equation has a unique solution.
2) the equation has NO solution.
3) the equation has an infinite number of solutions.

If A is invertible, we can multiply both sides of Ax = b by its inverse to get A^{-1}Ax = x = A^{-1}b. That is clearly a solution to the equation and, because x must equal A^{-1}b, the solution is unique.
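A small numerical check of the invertible case (numpy; the 2×2 matrix is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # invertible: det = 5
b = np.array([3.0, 5.0])

x = np.linalg.inv(A) @ b      # x = A^{-1} b
print(x)                      # [0.8, 1.4]
print(np.allclose(A @ x, b))  # True: it really solves Ax = b
```

(In practice np.linalg.solve(A, b) is preferred over forming the inverse explicitly, but the inverse makes the argument above literal.)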

If A is not invertible, then it has rank k < n. That means A maps all of R^n into a subspace of R^n of dimension k < n (its column space). It also means that A has a null space (the space of vectors v such that Av = 0) of dimension n − k.

In that case, if b does not lie in that subspace, the equation Ax = b has no solution: for any x, Ax is in the subspace and b is not. If b does lie in the subspace, there is a solution, a vector x such that Ax = b; but then for any v in the null space of A, A(x + v) = Ax + Av = b + 0 = b, so there are infinitely many solutions.
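The non-invertible case can be checked the same way (sympy; the rank-1 matrix below is my own example): if b lies in the column space, adding any null-space vector to one solution gives another.

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])       # rank 1 < 2, so det(A) = 0
b = sp.Matrix([3, 6])         # chosen to lie in the column space of A

x_p = sp.Matrix([3, 0])       # one particular solution: A*x_p == b
v = A.nullspace()[0]          # null-space basis vector: A*v == 0

t = sp.symbols('t')
# A*(x_p + t*v) - b is the zero vector for every t, so x_p + t*v is
# a solution for all t: infinitely many solutions.
print(sp.simplify(A * (x_p + t * v) - b))
```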

Since that is true for square matrices, what does it have to do with this problem? Well, if we have n equations in m unknowns, with n < m, we can represent the system as the matrix equation Ax = b where A is not square: it has n rows and m columns. But we can make it square by adding m − n rows of zeros to A and m − n zeros to the vector b. Since that square matrix obviously has determinant 0, it is not invertible, which means the equation has either no solution or infinitely many solutions. Because the system is homogeneous, Ax = 0, it has the obvious solution x = 0, so "no solution" cannot apply. We are left with infinitely many solutions.
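A hedged sketch of this final step (sympy; the matrices are my own example, not from the thread): padding the coefficient matrix with zero rows makes it square with determinant 0, while the null space already exhibits the promised infinite solution set.

```python
import sympy as sp

# 2 homogeneous equations in 3 unknowns.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

# Pad with one zero row to make A square; the zero row forces det = 0.
A_square = A.col_join(sp.zeros(1, 3))
print(A_square.det())    # 0, so the padded matrix is not invertible

# x = 0 always solves Ax = 0, so "no solution" is ruled out, and the
# nontrivial null-space vector shows the infinite family of solutions.
print(A.nullspace())     # [Matrix([[1], [-2], [1]])]
```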
 
