Understanding the Principle of Gauss-Jordan Elimination for Finding Inverses


Homework Help Overview

The discussion revolves around the principle of Gauss-Jordan elimination as a method for finding the inverse of a matrix. Participants are exploring the underlying concepts and reasoning behind this algorithm.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • The original poster asks what principle underlies the Gauss-Jordan elimination method for finding inverses.
  • Some participants suggest working out an invariant maintained by the algorithm as a way to understand its validity.
  • Another participant explains the relationship between row operations and elementary matrices, detailing how these operations lead to the identity matrix.

Discussion Status

The discussion is active, with participants providing insights into the method and its theoretical foundations. There is an exploration of different perspectives on the algorithm, and while some guidance has been offered regarding the use of elementary matrices, there is no explicit consensus on the best approach yet.

Contextual Notes

Participants are discussing the method within the constraints of understanding linear algebra concepts, particularly in the context of homework help. The original poster's inquiry reflects a desire to grasp the theoretical basis rather than just the procedural steps.

asdf1
just wondering...
why does that method work for finding A^(-1)?
in other words, what's the principle behind that method?
 
See if you can work out an invariant that is maintained by the algorithm (e.g., an equation that is true at the start; each step may change the values, but the equation remains true). That invariant should let you easily prove the result is the inverse.


(I suppose there are probably other ways of doing it too. This is just the one that strikes me as the simplest!)
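One such invariant can be checked concretely. If you carry the augmented pair (M, N), starting from (A, I), and apply every row operation to both halves, then N*A = M holds throughout; when M becomes I, N must be A^(-1). A small NumPy sketch (the matrix and row operations here are just an illustration, not from the thread):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
M = A.copy()      # left half of the augmented matrix, starts as A
N = np.eye(2)     # right half, starts as I

def check_invariant():
    # The invariant: N @ A == M at every step
    assert np.allclose(N @ A, M)

check_invariant()              # true at the start: I @ A == A

# R1 <- R1 - R2, applied to both halves
M[0] -= M[1]; N[0] -= N[1]
check_invariant()

# R2 <- R2 - R1, applied to both halves
M[1] -= M[0]; N[1] -= N[0]
check_invariant()

# M is now the identity, so the invariant N @ A == M reads N @ A == I,
# i.e. N is the inverse of A
assert np.allclose(M, np.eye(2))
assert np.allclose(N @ A, np.eye(2))
```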
 
It's because every row operation can be viewed as left-multiplication by an elementary matrix: the result of the operation is the product of an elementary matrix on the left and the original matrix on the right. An elementary matrix is the identity matrix with one row operation performed on it. For example, the elementary matrix
1 0 0
0 1 0
0 5 1
represents the operation R3 <- R3 + 5*R2, and you can verify that multiplying any 3x3 matrix on the left by this matrix performs the row operation R3 <- R3 + 5*R2 on it.
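That verification is easy to do numerically. A NumPy sketch using the elementary matrix above (the test matrix A is arbitrary):

```python
import numpy as np

# The elementary matrix from the example: identity with R3 <- R3 + 5*R2
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 5.0, 1.0]])

# Any 3x3 matrix
A = np.arange(1.0, 10.0).reshape(3, 3)

# Apply the row operation R3 <- R3 + 5*R2 by hand
B = A.copy()
B[2] += 5.0 * B[1]

# Left-multiplying by E gives the same result
assert np.allclose(E @ A, B)
```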

So when you row reduce A to the identity matrix I, it is equivalent to left-multiplying A by a sequence of elementary matrices E1, ..., Ek:
Ek*E(k-1)*...*E2*E1*A = I
So the matrix Ek*E(k-1)*...*E2*E1 is the inverse of A. But since Ek*E(k-1)*...*E2*E1 = Ek*E(k-1)*...*E2*E1*I, that matrix is just the same row operations E1, ..., Ek applied to the identity I in the same order, which is exactly the procedure for finding the inverse.
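The whole procedure can be sketched as a short routine: augment A with I, row reduce the left half to I, and read off the inverse on the right. This is an illustrative implementation with partial pivoting (not code from the thread):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^(-1)]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])   # build [A | I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]          # scale the pivot row so the pivot is 1
        for r in range(n):                 # eliminate the column in all other rows
            if r != col:
                aug[r] -= aug[r, col] * aug[col]
    return aug[:, n:]                      # right half is now A^(-1)

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
```

Every step inside the loop (swap, scale, subtract) is one of the three row operations, i.e., one elementary matrix applied on the left to both halves of the augmented matrix at once.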
 
thank you very much!
 
