What is the connection between row reduction and matrix inversion?

Summary:
Gaussian elimination, or row reduction, is a method used for matrix inversion by transforming a matrix A into the identity matrix I while simultaneously applying the same operations to the identity matrix. Each row operation corresponds to an elementary matrix, which, when multiplied by A, performs the same operation. The sequence of these elementary matrices, when multiplied together, results in the inverse of A. This process illustrates that the operations needed to achieve the identity matrix also yield the inverse matrix. Thus, Gaussian elimination effectively connects the concepts of row reduction and matrix inversion.
I'll start off with my question:

Why do we use Gaussian elimination when inverting a matrix? (This is only one of the methods, and it's the only one that doesn't make sense to me.)

I know how to do it, but I'm not sure why it works. When solving a system of linear equations, I understand why Gaussian Elimination works: to me, it's just the adding and subtracting of equations until a desirable form is reached. But Gaussian Elimination doesn't seem to be as obvious a tool in matrix inversion.

Thanks!
 
The Gauss method for finding the inverse B of a matrix A (so that AB = I) amounts to solving n linear systems A x_j = e_j, one for each column x_j of B, where e_j is the j-th column of the identity.
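For instance, here is a minimal NumPy sketch of that idea (the matrix A below is just an arbitrary invertible example), building B one column at a time:

```python
import numpy as np

# Arbitrary invertible example matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

n = A.shape[0]
I = np.eye(n)

# Column j of B solves A x = e_j, where e_j is the j-th column of the identity.
B = np.column_stack([np.linalg.solve(A, I[:, j]) for j in range(n)])

print(np.allclose(A @ B, I))  # True: B is the inverse of A
```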
 
If by "Gaussian elimination" you mean "row reduction", one way to think about it is this:
to every row operation, there exist an "elementary" matrix that you get by applying that row operation to the identity matrix.

That is, if we are working with 3 by 3 matrices and we apply the row operation "add 3 times the second row to the third row" to the identity matrix we get
\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 3 & 1\end{bmatrix}

and, further, multiplying any matrix on the left by that "elementary matrix" performs the same row operation:
\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 3 & 1\end{bmatrix}\begin{bmatrix}a & b & c \\ d & e & f\\ g & h & j\end{bmatrix}= \begin{bmatrix}a & b & c \\ d & e & f \\ g+ 3d & h+ 3e & j+ 3f\end{bmatrix}.

So, suppose some sequence of row operations R_1, then R_2, ..., R_n reduces the matrix A to the identity matrix I. Call the corresponding elementary matrices M_1, M_2, ..., M_n, so that M_n...M_2 M_1 A = I (notice the order: applying R_1 first means M_1 is the first matrix multiplied onto A). That is the same as (M_n...M_2 M_1)A = I, which says exactly that M_n...M_2 M_1 = A^{-1}. But then A^{-1} = M_n...M_2 M_1 = M_n...M_2 M_1 I, and multiplying the identity matrix by those matrices, in that order, is the same as applying the row operations R_1, R_2, ..., R_n, in that order, to the identity matrix.

That is why "Gaussian Elimination" (row reduction) works. You determine row operations that will reduce the matrix A to the identity matrix while applying the same row operations to the identity matrix. That is the same as determining the matrices M1, M2, etc. while performing the matrix multiplications Mn...M2M1I= A^{-1}.
 