Finding the Inverse of a Matrix Using Gauss-Jordan Elimination

  1. Sep 3, 2014 #1
    Hello. Nice to meet you. I have just joined. :)
    I know how to find the inverse of a matrix using Gauss-Jordan elimination.
    However, I was wondering why row-reducing the augmented matrix [A | I] to [I | A^{-1}] actually gives the inverse.
    At my university I was only taught how to use the method, not why it works.
    Thank you for your answers.
    (I am not a native speaker, so my English is a little rough. Sorry for the extra effort.)
     
  3. Sep 3, 2014 #2

    HallsofIvy

    Staff Emeritus
    Science Advisor

    That works because each row operation corresponds to an "elementary matrix": the matrix we get by applying that row operation to the identity matrix. That is, the row operation "add 3 times the second row to the first row" corresponds to the elementary matrix
    [tex]\begin{pmatrix}1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}[/tex]

    Multiplying that matrix by any matrix, A, will add three times the second row of A to its first row:
    [tex]\begin{pmatrix}1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{pmatrix}= \begin{pmatrix}a_{11}+ 3a_{21} & a_{12}+ 3a_{22} & a_{13}+ 3a_{23} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{pmatrix}[/tex]


    So deciding what row operations will reduce A to the identity matrix means finding elementary matrices, e1, e2, e3, ..., en, each corresponding to a row operation, such that (en...e3e2e1)A = I. Of course, the product of those elementary matrices, en...e3e2e1, is the inverse matrix of A. Applying those same row operations to the identity matrix is just the same as multiplying it by them: en...e3e2e1I = en...e3e2e1 = [itex]A^{-1}[/itex].
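    To make this concrete, here is a small numpy sketch (the 2x2 matrix and the particular row operations are my own choice, just for illustration): each elementary matrix below is a row operation applied to the identity, and their product reproduces the inverse.

[code]
import numpy as np

# A 2x2 example chosen for illustration.
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

# Each elementary matrix is "that row operation applied to the identity".
E1 = np.array([[0.5, 0.0],    # R1 <- R1 / 2
               [0.0, 1.0]])
E2 = np.array([[1.0, 0.0],    # R2 <- R2 - 4*R1
               [-4.0, 1.0]])
E3 = np.array([[1.0, -0.5],   # R1 <- R1 - 0.5*R2
               [0.0, 1.0]])

# These three operations reduce A to the identity ...
print(E3 @ E2 @ E1 @ A)       # identity (up to rounding)

# ... so their product is the inverse of A, i.e. the same
# operations applied to the identity matrix.
print(E3 @ E2 @ E1)           # equals np.linalg.inv(A)
print(np.linalg.inv(A))
[/code]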
     
  4. Sep 3, 2014 #3
    [Attached image: the worked exam problem referred to below]
    I was wondering why this is correct.
    Actually, I solved a problem of this style on my exam.
    But I still cannot understand why row operations turn [A | I] into [I | A^{-1}].
     
  5. Sep 23, 2014 #4

    HallsofIvy

    Staff Emeritus
    Science Advisor

    What part of my previous response did you not understand?
     
  6. Sep 24, 2014 #5

    statdad

    Homework Helper

    A slightly longer but (I hope) more intuitive explanation for why the method works.

    Do you understand why solving a system of equations with Gauss-Jordan elimination works? That is, do you understand why solving the system
    [tex]
    \begin{align*}
    5x + 7y &= 1 \\
    8x - 3y &= 0
    \end{align*}
    [/tex]

    by reducing this augmented matrix

    [tex]
    \begin{bmatrix} 5 & 7 & 1 \\ 8 & -3 & 0 \end{bmatrix}
    [/tex]

    gives the solution?
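    For concreteness, that reduction works out to

    [tex]
    \begin{bmatrix} 5 & 7 & 1 \\ 8 & -3 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 3/71 \\ 0 & 1 & 8/71 \end{bmatrix}
    [/tex]

    so x = 3/71 and y = 8/71.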

    If so: If you want to find the inverse of the matrix from that system you want another matrix that satisfies
    [tex]
    \begin{bmatrix} 5 & 7 \\ 8 & -3 \end{bmatrix} \begin{bmatrix} x & s \\ y & t \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
    [/tex]

    The solution to the system above gives the required elements x and y of the second matrix: the elements s and t would be the solutions to
    this system

    [tex]
    \begin{align*}
    5s + 7t &= 0 \\
    8s - 3t &= 1
    \end{align*}
    [/tex]

    and could be found by applying Gauss Jordan to this augmented matrix

    [tex]
    \begin{bmatrix} 5 & 7 & 0 \\ 8 & -3 & 1 \end{bmatrix}
    [/tex]

    Solve those two systems and you have the entries for the inverse of A. The method you ask about for finding inverses simply says this: instead of going through two rounds of row operations and augmented matrices, simply put both columns of the identity matrix into a single augmented matrix and do the row operations once - that is, set up
    [tex]
    \begin{bmatrix} 5 & 7 & 1 & 0 \\ 8 & -3 & 0 & 1 \end{bmatrix}
    [/tex]

    and reduce it. The third and fourth columns of the result are the required entries for the inverse.
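    A short numpy sketch of that combined procedure (a rough illustration with names of my own; it does no pivoting, so it assumes every pivot it meets is nonzero):

[code]
import numpy as np

# Row-reduce [ A | I ] in one pass; the right half becomes the inverse.
A = np.array([[5.0, 7.0],
              [8.0, -3.0]])
aug = np.hstack([A, np.eye(2)])            # the 2x4 augmented matrix above

n = A.shape[0]
for i in range(n):
    aug[i] = aug[i] / aug[i, i]            # scale the pivot row to get a leading 1
    for j in range(n):
        if j != i:
            aug[j] = aug[j] - aug[j, i] * aug[i]   # clear column i in the other rows

A_inv = aug[:, n:]                         # the third and fourth columns
print(A_inv)
print(np.linalg.inv(A))                    # same matrix, for comparison
[/code]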
     
  7. Sep 27, 2014 #6
    Here is why that algorithm works. I will express it more generally in terms of solving a system of linear equations: A.x = B, where B is a column vector or a matrix. Solution by Gaussian elimination consists of multiplying both sides on the left by a sequence of matrices, one for each step of elimination, pivoting, and back-substitution - matrices T1, T2, ..., Tn:
    Tn...T2.T1.A.x = Tn...T2.T1.B
    Or for short,
    T.A.x = T.B

    When we have reached a solution, T.A = I, the identity matrix, making T = [itex]A^{-1}[/itex]. If B = I, then T.B = T = [itex]A^{-1}[/itex], which is why this algorithm works.
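    A brief numpy sketch of that bookkeeping, applied to statdad's 2x2 system above with B taken as a column vector (the particular Tk below are just the row operations I happened to pick):

[code]
import numpy as np

# Apply each row-operation matrix Tk to both sides of A.x = B.
# With B a column vector we get the solution x; with B = I the
# same loop would leave the inverse of A on the right-hand side.
A = np.array([[5.0, 7.0],
              [8.0, -3.0]])
B = np.array([[1.0],
              [0.0]])

T_steps = [
    np.array([[1/5, 0.0], [0.0, 1.0]]),     # R1 <- R1 / 5
    np.array([[1.0, 0.0], [-8.0, 1.0]]),    # R2 <- R2 - 8*R1
    np.array([[1.0, 0.0], [0.0, -5/71]]),   # R2 <- R2 * (-5/71)
    np.array([[1.0, -7/5], [0.0, 1.0]]),    # R1 <- R1 - (7/5)*R2
]

for Tk in T_steps:
    A = Tk @ A      # the left side marches toward the identity
    B = Tk @ B      # the right side becomes the solution

print(A)            # identity (up to rounding)
print(B)            # [3/71, 8/71]^T, i.e. x = 3/71, y = 8/71
[/code]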
     
  8. Oct 7, 2014 #7

    HallsofIvy

    Staff Emeritus
    Science Advisor

    The matrices lpetrich is talking about are called, in some texts, "elementary" matrices. If you apply a "row operation" to the identity matrix, you get a matrix, R, that has the nice property that the multiplication, RA, gives the same result as applying that row operation to matrix A. For example, in three dimensions, applying the row operation "add twice the first row to the third row" to the identity matrix gives [tex]\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1\end{bmatrix}[/tex].

    And multiplying that by matrix A gives
    [tex]\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}= \begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 2a_{11}+ a_{31} & 2a_{12}+ a_{32} & 2a_{13}+ a_{33}\end{bmatrix}[/tex], exactly the same as adding twice the first row of A to its third row.

    So if I can find a sequence of row operations that reduces matrix A to the identity matrix, then I have found a sequence of elementary matrices (even if I never write them down explicitly) whose product is the inverse matrix of A. Applying those operations to the identity matrix is the same as multiplying together all of the corresponding elementary matrices, and so gives the inverse matrix of A. Applying those operations to a vector v is the same as multiplying that vector by all of those elementary matrices, which is the same as [itex]A^{-1}v[/itex].
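    A quick numerical check of that claim (the 3x3 test matrix below is arbitrary, just something to multiply against):

[code]
import numpy as np

# Build the elementary matrix by applying the row operation to the identity.
R = np.eye(3)
R[2] += 2 * R[0]                  # "add twice the first row to the third row"

A = np.arange(1.0, 10.0).reshape(3, 3)   # an arbitrary 3x3 test matrix

by_multiplication = R @ A         # left-multiply by the elementary matrix ...
by_row_operation = A.copy()
by_row_operation[2] += 2 * by_row_operation[0]   # ... or do the row operation directly

print(np.array_equal(by_multiplication, by_row_operation))   # True
[/code]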
     