Null Space of a Matrix and Its Iterates

Summary: The null space of a matrix is unchanged by Gaussian elimination: ##N(A) = N(A_m)##, where ##A_m## is the transformed matrix. If a vector ##x## lies in the null space of ##A##, it stays there after the elimination matrices are applied, since these operations do not alter the solution set of the corresponding linear system. Each elementary matrix in the transformation is invertible, which is what guarantees the kernel is preserved. The same property holds for other decompositions built from invertible factors, such as QR via Householder transformations, and for any number of iterated transformations.
muzak
This might seem like a stupid question, but would a matrix and, say, its Gaussian elimination transforms have the same null space? I guess I am asking if this is valid:

Let ##x## be in ##N(A)##. Let ##A_m## be some iterate of ##A## through elimination matrices, i.e. ##A_m = E_1 E_2 \cdots E_m A##. Is ##N(A) = N(A_m)##?

It seems like an obvious answer, with a sort of obvious proof: expand ##A_m## into the elimination matrices multiplied by the original ##A##, and show that since ##x## is in ##A##'s null space, you just have elimination matrices multiplied by the zero vector. Is this correct? And if it is, can it apply to other decompositions, such as QR via Householder? Can it apply to any arbitrary iterate: 1st, 2nd, etc.?
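For what it's worth, this is easy to sanity-check numerically. Here is a minimal sketch, assuming SymPy is available; the matrix ##A## below is just an arbitrary rank-deficient example:

```python
from sympy import Matrix

# A rank-2 matrix, so N(A) is one-dimensional.
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

# rref() returns the reduced row echelon form of A, i.e. the end
# result of a full sequence of elementary row operations.
R, pivots = A.rref()

print(A.nullspace())  # [Matrix([[1], [-2], [1]])]
print(R.nullspace())  # same basis: the row operations did not change N(A)
```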

Thanks for any input.
 
The null space is the kernel of the matrix, if I'm not mistaken, and yes, the original matrix and the transformed matrix have the same kernel. As for the proof, it's fairly simple: let the original matrix represent a system of linear equations, then show that none of the three elementary row operations changes the solution set of the system, which also means the kernel remains the same.
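To illustrate that claim, here is a small sketch (again assuming SymPy) that applies one elementary row operation to an augmented system and checks that the solution set is unchanged:

```python
from sympy import Matrix, symbols, linsolve

x, y = symbols('x y')

# Augmented matrix [A | b] for the system 2x + y = 5, x + 3y = 10.
aug = Matrix([[2, 1, 5],
              [1, 3, 10]])

# One elementary row operation: add -1/2 * row 0 to row 1.
aug2 = aug.copy()
aug2[1, :] = aug2[1, :] - aug2[0, :] / 2

print(linsolve(aug, (x, y)))   # {(1, 3)}
print(linsolve(aug2, (x, y)))  # {(1, 3)} -- same solution set
```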
 
Each of the ##E_i## is an "elementary matrix" corresponding to some row operation. That is, it is the matrix we get by applying that row operation to the identity matrix. Every row operation has an inverse operation, and so every elementary matrix is invertible.
(There are three kinds of row operations:
1) Multiply a row by some non-zero number ##a##. The inverse is to multiply that same row by ##1/a##.
2) Swap two rows. The inverse is the same: swap those same two rows.
3) Add a multiple ##a## of row ##i## to row ##j##. The inverse is to add ##-a## times row ##i## to row ##j##.)
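A small sketch of these three (assuming SymPy), building each elementary matrix by applying the row operation to the identity and then checking that each inverse is the elementary matrix of the inverse row operation:

```python
from sympy import eye

I3 = eye(3)

# 1) Scale row 0 by a = 5.
E1 = I3.copy(); E1[0, :] = 5 * I3[0, :]
# 2) Swap rows 0 and 1.
E2 = I3.copy(); E2[0, :], E2[1, :] = I3[1, :], I3[0, :]
# 3) Add a = 2 times row 0 to row 2.
E3 = I3.copy(); E3[2, :] = I3[2, :] + 2 * I3[0, :]

print(E1.inv())  # scales row 0 by 1/5
print(E2.inv())  # swaps rows 0 and 1 again
print(E3.inv())  # adds -2 times row 0 to row 2
```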

If ##Ax = 0## then, of course, ##E_1 E_2 \cdots E_n A x = 0##. And, because the ##E_i## matrices are invertible, if ##Ax## is NOT ##0##, then neither is ##E_1 E_2 \cdots E_n A x## (an invertible matrix sends nonzero vectors to nonzero vectors, since its kernel is trivial).
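The same invertibility argument answers the QR part of the question: in a Householder QR factorization ##A = QR##, the matrix ##Q## is orthogonal and hence invertible, so ##Rx = Q^T A x = 0## exactly when ##Ax = 0##, i.e. ##N(A) = N(R)##. A quick numerical sketch, assuming NumPy (numpy.linalg.qr is Householder-based):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Householder QR: A = Q R with Q orthogonal (so Q is invertible).
Q, R = np.linalg.qr(A)

x = np.array([1., -2., 1.])   # a known null vector of A
print(np.allclose(A @ x, 0))  # True
print(np.allclose(R @ x, 0))  # True: R x = Q^T (A x) = 0
```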
 
