Why Ax=0 has only the trivial solution

Summary:
Ax=0 has only the trivial solution if matrix A is row equivalent to the identity matrix I, indicating that A is invertible. A matrix is invertible if its column vectors are linearly independent, which means the equation a_1v_1 + ... + a_nv_n = 0 has only the trivial solution where all coefficients are zero. Row reduction of A to I demonstrates that the rows are independent, which also implies the columns must be independent due to the equivalence of row and column rank. The process of reducing (A|I_n) to I confirms that if A can be transformed into I, its columns are linearly independent. This relationship between linear independence and invertibility is fundamental in linear algebra.
Ax=0 has only the trivial solution if A is row equivalent to I. Here in Theorem 6 they explain it by referring to another theorem, Theorem 4, in my book:

Theorem 6
http://bildr.no/view/1032481

Theorem 4:

http://bildr.no/view/1032482

Why is it that, if A is invertible, Ax=0 has only the trivial solution for x?
 
This is true because a matrix is invertible if and only if its column vectors are linearly independent. (If, on the other hand, the columns are linearly dependent, the matrix has determinant zero and is therefore not invertible.)
That is, if A has column vectors v_1, ..., v_n, then those vectors are linearly independent if and only if the equation a_1v_1 + ... + a_nv_n = 0 has only the trivial solution a_1 = ... = a_n = 0. This is equivalent to saying Ax=0 has only the trivial solution. To see this, write the linear independence equation in matrix notation.
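To spell out that matrix-notation step (an added note, using the symbols from the reply above): writing x = (x_1, ..., x_n)^T, the product Ax is exactly a linear combination of the columns of A,

$$Ax = x_1v_1 + x_2v_2 + \cdots + x_nv_n,$$

so Ax=0 is precisely the equation a_1v_1 + ... + a_nv_n = 0 with a_i = x_i. Conversely, if A is invertible and Ax=0, then

$$x = A^{-1}(Ax) = A^{-1}0 = 0,$$

so the trivial solution is the only one.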
 
I get it when I say that the column vectors are linearly independent (as you point out), but when one reduces a matrix to the identity matrix, one gets I if the rows are independent. How does this show that the columns are also independent?
 
Well, do you remember how you find the inverse of a matrix? You reduce it alongside the identity, i.e. you reduce (A|I_n). If you can arrive at the identity via row reduction, then A is invertible, and a matrix is invertible if and only if its columns are independent; the two statements go hand in hand, each implies the other. Alternatively, you know that the identity matrix has linearly independent column vectors, and elementary row operations are invertible, so they never change the solution set of Ax=0. It follows that if you can reduce a matrix to the identity by performing elementary row operations, then Ax=0 has the same solutions as Ix=0, namely only x=0, which is exactly the statement that the original column vectors are linearly independent.
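As a concrete illustration of that (A|I_n) reduction, here is a minimal sketch in Python using sympy (not part of the original thread; the matrix A is just an example chosen to be invertible):

```python
# Row reduce the augmented matrix (A | I_n): if the left block becomes I,
# then A is invertible, the right block is A^(-1), and Ax = 0 has only
# the trivial solution.
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])                # example invertible matrix

augmented = A.row_join(eye(2))      # form (A | I_2)
reduced, pivots = augmented.rref()  # reduced row echelon form

left, right = reduced[:, :2], reduced[:, 2:]
print(left)              # I_2, so A is row equivalent to the identity
print(right == A.inv())  # True: the right block is A^(-1)
print(A.nullspace())     # []: the null space is trivial, Ax = 0 forces x = 0
```

If A were singular, the left block would never reach the identity, and A.nullspace() would return a nonzero basis vector instead.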
 
Yes, correct.
 