SmilingDave
Cullen, in one of his questions, seems to say this:
You have a matrix A of any size. Make a matrix B consisting of only the linearly independent columns of A, choosing them by going from left to right through the columns of A. Then make a matrix C such that BC = A. [The question was to prove this is always possible, but I'm not asking about that.]
He then says that the rows of C are exactly the rows of A in row reduced echelon form. I tried an example, and it works.
My question is: why is this so? I assume it's not just a coincidence, so what is going on?
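For concreteness, here is the kind of check I ran, written up as a minimal SymPy sketch; the matrix A is my own example, not one of Cullen's. (Strictly speaking, C matches the nonzero rows of the RREF, since C only has as many rows as A has independent columns.)

```python
import sympy as sp

# An example matrix A (my own choice, just to test the claim).
A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 7],
               [1, 2, 1, 4]])

# rref() returns the row reduced echelon form R and the pivot column indices.
# The pivot columns are exactly the columns you keep when you scan A
# left to right and discard anything dependent on earlier columns.
R, pivots = A.rref()
r = len(pivots)                 # rank of A

B = A[:, list(pivots)]          # B = the independent columns of A

# B has full column rank, so BC = A has a unique solution for C,
# given by the normal equations C = (B^T B)^{-1} B^T A.
C = (B.T * B).inv() * B.T * A

print(C)                        # Matrix([[1, 2, 0, 3], [0, 0, 1, 1]])
print(R[:r, :])                 # the nonzero rows of rref(A): identical
print(C == R[:r, :])            # True
```

Each column of C just records how the corresponding column of A is built from the columns of B, which is why solving the normal equations (or any exact solve) reproduces it uniquely.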