# Gauss-Jordan elimination

1. Feb 12, 2006

### asdf1

just wondering...
why does that method work for finding A^(-1)?
in other words, what's the principle behind that method?

2. Feb 12, 2006

### Hurkyl

Staff Emeritus
See if you can work out an invariant that is maintained by the algorithm. (e.g. an equation that is true at the start, and each step may change the values, but the equation remains true) That invariant should let you easily prove the result is the inverse.

(I suppose there are probably other ways of doing it too. This is just the one that strikes me as the simplest!)
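For concreteness, here is one invariant that works (a guess at what's meant here, not necessarily the one the poster had in mind). Track the augmented matrix [B | C], starting from [A | I]:

- At the start, C*A = B holds, since I*A = A.
- Each row operation left-multiplies both halves by the same elementary matrix E, turning [B | C] into [E*B | E*C]; then (E*C)*A = E*(C*A) = E*B, so the equation C*A = B still holds.
- When the left half reaches I, the invariant reads C*A = I, so C = A^(-1).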

3. Feb 12, 2006

### 0rthodontist

It's because every row operation can be viewed as left-multiplication by an elementary matrix. An elementary matrix is the identity matrix with one row operation performed on it; for example, the elementary matrix
1 0 0
0 1 0
0 5 1
represents the operation R3 <-- R3 + 5*R2
and you can verify that when you multiply any 3x3 matrix on the left by the above elementary matrix, the result is that matrix with the row operation R3 <-- R3 + 5*R2 applied.
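You can check this claim numerically. A minimal sketch in pure Python (the matrix A and the helper `matmul` are illustrative, not from the thread):

```python
def matmul(X, Y):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The elementary matrix from the post: identity with R3 <-- R3 + 5*R2 applied
E = [[1, 0, 0],
     [0, 1, 0],
     [0, 5, 1]]

# An arbitrary 3x3 matrix to act on
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

EA = matmul(E, A)
# Rows 1 and 2 of E*A match A; row 3 is A's row 3 plus 5 times A's row 2:
# [7 + 5*4, 8 + 5*5, 9 + 5*6] = [27, 33, 39]
print(EA)  # [[1, 2, 3], [4, 5, 6], [27, 33, 39]]
```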

So when you row reduce A to the identity matrix I, it is equivalent to left-multiplying it by a sequence of elementary matrices E1...Ek as follows:
Ek*E(k-1)*...*E3*E2*E1*A = I
So the matrix Ek*E(k-1)*...*E3*E2*E1, which is the same as Ek*E(k-1)*...*E3*E2*E1*I, is the inverse of A. But Ek*E(k-1)*...*E3*E2*E1*I is just the row operations E1...Ek applied to the identity I in the same order, which is exactly the procedure for finding the inverse.
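The procedure described above can be sketched in a few lines of pure Python: row-reduce the augmented matrix [A | I], and the right half ends up as A^(-1). (This sketch assumes nonzero pivots; a real implementation would use partial pivoting. The function name `invert` and the test matrix are illustrative.)

```python
def invert(A):
    n = len(A)
    # Augment A with the identity: [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Scale the pivot row so the pivot becomes 1
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column's entries from every other row
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left half is now I, so the right half is A^(-1)
    return [row[n:] for row in M]

A = [[2.0, 1.0],
     [1.0, 1.0]]
print(invert(A))  # [[1.0, -1.0], [-1.0, 2.0]]
```

Every step inside the loop is a row operation, i.e. a left-multiplication by some elementary matrix Ei applied simultaneously to both halves of the augmented matrix, which is the argument in the post made concrete.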

Last edited: Feb 12, 2006
4. Feb 16, 2006

### asdf1

thank you very much!!!