Finding the Inverse of a Matrix Using Gauss-Jordan Elimination

In summary, the algorithm works because each row operation corresponds to an "elementary matrix", the matrix we get by applying that row operation to the identity matrix.
  • #1
shuiki0306
Hello. Nice to meet you. I have just enrolled. :)
I know how to find the inverse of a matrix using Gaussian elimination.
However, I was wondering why reducing [A | I] to [I | A^-1] works.
At my university, I was only taught how to use the method, not why it works.
Thank you for your answer.
(I am a non-native speaker, so my English is a little rough. Sorry for the challenge.)
 
  • #2
That works because each row operation corresponds to an "elementary matrix", the matrix we get by applying that row operation to the identity matrix. That is, the row operation "add 3 times the second row to the first row" corresponds to the elementary matrix
[tex]\begin{pmatrix}1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}[/tex]

Multiplying that matrix by any matrix, A, will add three times the second row of A to its first row:
[tex]\begin{pmatrix}1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{pmatrix}= \begin{pmatrix}a_{11}+ 3a_{21} & a_{12}+ 3a_{22} & a_{13}+ 3a_{23} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{pmatrix}[/tex]
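To make this concrete outside the thread, here is a small Python sketch (the helper `matmul` and the sample matrix `A` are my own, not from the post) showing that left-multiplying by this elementary matrix performs exactly the row operation "add 3 times row 2 to row 1":

```python
def matmul(E, A):
    """Plain n x n matrix product, no external libraries."""
    n = len(E)
    return [[sum(E[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Elementary matrix for R1 <- R1 + 3*R2 (identity with the operation applied).
E = [[1, 3, 0],
     [0, 1, 0],
     [0, 0, 1]]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# First row becomes [1+3*4, 2+3*5, 3+3*6] = [13, 17, 21]; rows 2, 3 unchanged.
print(matmul(E, A))
```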


So deciding what row operations will reduce A to the identity matrix means finding elementary matrices, e1, e2, e3, ..., en, each corresponding to a row operation, such that (en...e3e2e1)A = I. Of course, the product of those elementary matrices, en...e3e2e1, is the inverse matrix to A. Applying those same row operations to the identity matrix is just the same as multiplying by them: en...e3e2e1I = en...e3e2e1 = A^-1.
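As an illustrative sketch of this argument (the helper functions and the 2x2 example matrix are my own, not from the post): applying the same sequence of row operations to A and to a copy of the identity turns A into I and turns the copy of I into A^-1.

```python
from fractions import Fraction

def row_op_add(M, target, source, factor):
    """Row operation: target row <- target row + factor * source row."""
    M[target] = [t + factor * s for t, s in zip(M[target], M[source])]

def row_op_scale(M, row, factor):
    """Row operation: row <- factor * row."""
    M[row] = [factor * x for x in M[row]]

A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(1)]]
I = [[Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(1)]]

# Row operations that reduce A to the identity, applied to both matrices:
for M in (A, I):
    row_op_scale(M, 0, Fraction(1, 2))    # R1 <- R1/2
for M in (A, I):
    row_op_add(M, 1, 0, Fraction(-1))     # R2 <- R2 - R1
for M in (A, I):
    row_op_scale(M, 1, Fraction(2))       # R2 <- 2*R2
for M in (A, I):
    row_op_add(M, 0, 1, Fraction(-1, 2))  # R1 <- R1 - R2/2

print(A)  # now the identity matrix
print(I)  # now the inverse of the original A: [[1, -1], [-1, 2]]
```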
 
  • #3
[Attached image: the poster's worked example of the method.]

I was wondering why this is correct.
I actually solved a problem in this style in my exam.
But I still cannot understand why [A | I] changes to [I | A^-1] under row operations.
 
  • #4
What part of my previous response did you not understand?
 
  • #5
A slightly longer but (I hope) more intuitive explanation for why the method works.

Do you understand why solving a system of equations with Gauss-Jordan elimination works? That is, do you understand why solving the system
[tex]
\begin{align*}
5x + 7y = 1 & \\
8x - 3y = 0 &
\end{align*}
[/tex]

by reducing this augmented matrix

[tex]
\begin{bmatrix} 5 & 7 & 1 \\ 8 & -3 & 0 \end{bmatrix}
[/tex]

gives the solution?

If so: If you want to find the inverse of the matrix from that system you want another matrix that satisfies
[tex]
\begin{bmatrix} 5 & 7 \\ 8 & -3 \end{bmatrix} \begin{bmatrix} x & s \\ y & t \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
[/tex]

The solution to the system above gives the required elements x and y of the second matrix: the elements s and t would be the solutions to
this system

[tex]
\begin{align*}
5s + 7t = 0 & \\
8s - 3t = 1 &
\end{align*}
[/tex]

and could be found by applying Gauss-Jordan elimination to this augmented matrix

[tex]
\begin{bmatrix} 5 & 7 & 0 \\ 8 & -3 & 1 \end{bmatrix}
[/tex]

Solve those two systems and you have the entries for the inverse of A. The method you ask about for finding inverses simply says this: instead of going through two rounds of row operations and augmented matrices, put both columns of the identity matrix into a single augmented matrix and do the row operations once - that is, set up
[tex]
\begin{bmatrix} 5 & 7 & 1 & 0 \\ 8 & -3 & 0 & 1 \end{bmatrix}
[/tex]

and reduce it. The third and fourth columns of the result are the required entries for the inverse.
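A Python sketch of exactly this reduction (the individual row-operation steps are written out by hand for this particular matrix; exact `Fraction` arithmetic avoids any rounding):

```python
from fractions import Fraction

# Augmented matrix [A | I] for A = [[5, 7], [8, -3]].
aug = [[Fraction(5), Fraction(7), Fraction(1), Fraction(0)],
       [Fraction(8), Fraction(-3), Fraction(0), Fraction(1)]]

# R1 <- R1 / 5 (make the first pivot 1)
aug[0] = [x / 5 for x in aug[0]]
# R2 <- R2 - 8*R1 (clear the entry below the pivot)
aug[1] = [b - 8 * a for a, b in zip(aug[0], aug[1])]
# R2 <- R2 / pivot (make the second pivot 1; here pivot = -71/5)
pivot = aug[1][1]
aug[1] = [x / pivot for x in aug[1]]
# R1 <- R1 - (7/5)*R2 (clear the entry above the pivot)
aug[0] = [a - Fraction(7, 5) * b for a, b in zip(aug[0], aug[1])]

# Right two columns now hold the inverse: [[3/71, 7/71], [8/71, -5/71]].
print(aug)
```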
 
  • #6
Here is why that algorithm works. I will express it more generally in terms of solving a system of linear equations: A.x = B, where B is a column vector or a matrix. Solution by Gaussian elimination consists of multiplying both sides on the left by a sequence of matrices, one for each step of elimination, pivoting, and back-substitution. Call them T1, T2, ..., Tn:
Tn...T2.T1.A.x = Tn...T2.T1.B
Or for short,
T.A.x = T.B

When we have reached a solution, T.A = I, the identity matrix, making T = A^-1. If B = I, then T.B = T = A^-1, which is why this algorithm works.
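A small sketch of this general statement for a column vector B (the system 2x + y = 5, x + y = 3 is my own example, not from the post): the same row operations that turn A into I turn B into A^-1.B, i.e. into the solution x.

```python
from fractions import Fraction

# System: 2x + y = 5, x + y = 3  ->  solution x = 2, y = 1.
A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]
B = [Fraction(5), Fraction(3)]

# Row-reduce the augmented matrix [A | B] in place.
aug = [row + [b] for row, b in zip(A, B)]
aug[0] = [x / 2 for x in aug[0]]                                   # R1 <- R1/2
aug[1] = [b - a for a, b in zip(aug[0], aug[1])]                   # R2 <- R2 - R1
aug[1] = [2 * x for x in aug[1]]                                   # R2 <- 2*R2
aug[0] = [a - Fraction(1, 2) * b for a, b in zip(aug[0], aug[1])]  # R1 <- R1 - R2/2

# The last column is now A^-1.B, the solution of the system.
x, y = aug[0][2], aug[1][2]
print(x, y)  # 2 1
```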
 
  • #7
The matrices lpetrich is talking about are called, in some texts, "elementary" matrices. If you apply a "row operation" to the identity matrix, you get a matrix, R, that has the nice property that the multiplication, RA, gives the same result as applying that row operation to matrix A. For example, in three dimensions, applying the row operation "add twice the first row to the third row" to the identity matrix gives [tex]\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1\end{bmatrix}[/tex].

And multiplying that by matrix A gives
[tex]\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}= \begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 2a_{11}+ a_{31} & 2a_{12}+ a_{32} & 2a_{13}+ a_{33}\end{bmatrix}[/tex], exactly the same as adding twice the first row of A to its third row.

So if I can find a sequence of row operations that reduce matrix A to the identity matrix then I have found a sequence of elementary matrices (even if I never write them explicitly) whose product is the inverse matrix to A. Applying those operations to the identity matrix is the same as multiplying all of the corresponding elementary matrices so gives the inverse matrix to A. Applying those operations to a vector is the same as multiplying that vector by all those elementary matrices which is the same as [itex]A^{-1}v[/itex].
 

What is the process of finding the inverse of a matrix using Gauss-Jordan elimination?

The process of finding the inverse of a matrix using Gauss-Jordan elimination involves performing a series of elementary row operations on the original matrix to transform it into the identity matrix, while simultaneously performing the same operations on the identity matrix to obtain the inverse matrix.
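A minimal, general implementation of the process described above (the function name `gauss_jordan_inverse` and the use of exact `Fraction` arithmetic are my own choices; this is an illustrative sketch, not optimized code):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Return the inverse of square matrix A, or raise ValueError if singular."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    aug = [[Fraction(A[i][j]) for j in range(n)] +
           [Fraction(1 if i == j else 0) for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot in this column and swap it up.
        pivot_row = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot_row is None:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        pivot = aug[col][col]
        aug[col] = [x / pivot for x in aug[col]]
        # Eliminate this column's entry in every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half is now the inverse.
    return [row[n:] for row in aug]

# The thread's example matrix: inverse is [[3/71, 7/71], [8/71, -5/71]].
print(gauss_jordan_inverse([[5, 7], [8, -3]]))
```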

Why is Gauss-Jordan elimination preferred for finding the inverse of a matrix?

Gauss-Jordan elimination is preferred for finding the inverse of a matrix because it is a systematic and efficient method that produces the inverse of any invertible matrix. It also eliminates the need for computing determinants or cofactors, making it less prone to error.

Can any matrix be inverted using Gauss-Jordan elimination?

No, Gauss-Jordan elimination can only be used to find the inverse of square matrices that are invertible, meaning their determinant is non-zero. If the determinant is zero, then the matrix does not have an inverse.
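A small sketch of what goes wrong for a singular matrix (the 2x2 example with determinant zero is my own): elimination produces a zero row on the left side, so the left half can never become the identity.

```python
from fractions import Fraction

# Singular matrix: the second row is twice the first, so det = 0.
A = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(4)]]
aug = [A[0] + [Fraction(1), Fraction(0)],
       A[1] + [Fraction(0), Fraction(1)]]

# R2 <- R2 - 2*R1 wipes out the entire left half of row 2.
aug[1] = [b - 2 * a for a, b in zip(aug[0], aug[1])]
print(aug[1])  # left half is [0, 0]: no second pivot exists, so no inverse
```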

What are the limitations of finding the inverse of a matrix using Gauss-Jordan elimination?

One limitation is that the process can be time-consuming and error-prone for larger matrices when done by hand. Additionally, in floating-point arithmetic, rounding errors can accumulate during the row operations, especially for ill-conditioned matrices.

Are there any alternative methods for finding the inverse of a matrix?

Yes, there are other methods for finding the inverse of a matrix, such as using the adjugate matrix or the LU decomposition method. These methods may be more efficient for certain types of matrices, but they also have their own limitations and drawbacks.
