Gauss elimination algorithm for matrix inversion

The discussion focuses on the Gauss elimination algorithm for matrix inversion, particularly addressing concerns about handling zero diagonal elements. It emphasizes that swapping rows is a valid operation during the elimination process, as the goal is to achieve the identity matrix while applying the same operations to the identity matrix to obtain the inverse. The concept of "pivoting" is introduced, highlighting its importance in computational methods for ensuring matrix invertibility. Participants clarify that a matrix is non-invertible if, during Gaussian elimination, a row of zeros is encountered, indicating linear dependence among the rows. Overall, the conversation provides insights into practical approaches for matrix inversion and the conditions for invertibility.
TheDestroyer
Hello guys,

I'm writing a C++ function for matrix inversion. I've seen many algorithms online, but I'm concerned about the case where a diagonal element equals zero, which, for some reason, those websites don't take into account.

When we do the elimination for solving a system of linear equations, it's fine to exchange two rows, since the columns represent the variables for which the equations are solved.

But in my case I'm inverting the matrix in order to multiply it with a column matrix, i.e. a vector whose rows correspond to the columns of the matrix I want to invert (I need this for the Levenberg-Marquardt algorithm). How can I solve this problem? Can I simply undo the exchange after the inversion is done (e.g., swap rows 1 and 2 at the start, and at the end swap them back to restore the original order)? Could someone please explain the theory behind the problem? (I mean, when the matrix is invertible, what kind of conditions will I get on the matrix elements?)

Thank you for any efforts :)
 
I'm not sure I see a problem. It sounds like you are using the method of "row-reduce A to the identity matrix while simultaneously applying those row operations to the identity matrix to get A^{-1}". If that is what you are doing, swapping two rows is a perfectly legitimate operation.

To take a very simple example, consider inverting
\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}
which can be done by a single "swap":
\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}\quad\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}

Swap the two rows to get
\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\quad\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}

And, indeed, this matrix is its own inverse.
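
For concreteness, here is a minimal C++ sketch of that method (my own illustration, not code from this thread): carry the identity matrix alongside A, and apply every row operation, including the swaps, to both. The function name invert() and the 1e-12 singularity tolerance are my own choices.

Code:
#include <cmath>
#include <stdexcept>
#include <utility>
#include <vector>

// Invert an n x n matrix via Gauss-Jordan elimination with partial
// pivoting. 'a' is row-major: a[i][j]. Throws if the matrix is singular.
std::vector<std::vector<double>> invert(std::vector<std::vector<double>> a)
{
    const std::size_t n = a.size();
    // Start 'inv' as the identity; every row operation applied to 'a'
    // is applied to 'inv' as well.
    std::vector<std::vector<double>> inv(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i) inv[i][i] = 1.0;

    for (std::size_t col = 0; col < n; ++col) {
        // Partial pivoting: pick the row with the largest |entry| in this
        // column at or below the diagonal, and swap it up if needed.
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
        if (std::fabs(a[pivot][col]) < 1e-12)
            throw std::runtime_error("matrix is singular");
        std::swap(a[col], a[pivot]);
        std::swap(inv[col], inv[pivot]);

        // Scale the pivot row so the diagonal entry becomes 1.
        const double d = a[col][col];
        for (std::size_t j = 0; j < n; ++j) { a[col][j] /= d; inv[col][j] /= d; }

        // Eliminate this column from all other rows.
        for (std::size_t r = 0; r < n; ++r) {
            if (r == col) continue;
            const double f = a[r][col];
            for (std::size_t j = 0; j < n; ++j) {
                a[r][j]   -= f * a[col][j];
                inv[r][j] -= f * inv[col][j];
            }
        }
    }
    return inv; // 'a' has been reduced to the identity; 'inv' holds A^{-1}
}

On the 2x2 example above, the very first pivot search triggers the row swap, and the rest of the loop leaves both matrices unchanged, returning the matrix itself as its own inverse.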
 
Ah, I see! So you're saying whether I swap or not is totally irrelevant! The most important thing is achieving the identity matrix on the other side, even if I do a million swaps and/or any other elementary row operations.

I think I get it! Thank you, it was a stupid question anyway, due to my confusion :-)
 

The buzzword for "swapping rows and/or columns" in computational methods is "pivoting". You should be able to find equation solving algorithms that use pivoting on the web. Look at the LINPACK or LAPACK libraries for example.

For some types of problems, the underlying maths guarantees that pivoting is never required, so equation solvers that don't do pivoting are useful in practice.

In a computer algorithm, you don't necessarily have to explicitly swap the rows and columns around. All you need to do is remember the order in which you did the row operations on the matrix. That can often be done with just a list of integers representing the "logical" ordering of the matrix compared with its "physical" ordering in the computer's memory.
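
As a sketch of that bookkeeping (my own illustration, not LINPACK/LAPACK code): keep an integer array perm that maps each "logical" row to a physical row in memory, and swap entries of perm instead of moving the rows themselves. Downstream code then reads logical row k of the eliminated matrix as a[perm[k]].

Code:
#include <cmath>
#include <numeric>
#include <utility>
#include <vector>

// Forward elimination with "virtual" row swaps: rows are never moved in
// memory; perm[k] records which physical row currently acts as logical
// row k. Returns false if the matrix turns out to be singular.
bool eliminate(std::vector<std::vector<double>>& a,
               std::vector<std::size_t>& perm)
{
    const std::size_t n = a.size();
    perm.resize(n);
    std::iota(perm.begin(), perm.end(), 0); // initially logical == physical

    for (std::size_t k = 0; k < n; ++k) {
        // Find the best pivot among the logical rows k..n-1.
        std::size_t best = k;
        for (std::size_t r = k + 1; r < n; ++r)
            if (std::fabs(a[perm[r]][k]) > std::fabs(a[perm[best]][k])) best = r;
        if (std::fabs(a[perm[best]][k]) < 1e-12) return false; // singular
        std::swap(perm[k], perm[best]); // swap indices, not rows

        // Eliminate column k below the (logical) diagonal.
        for (std::size_t r = k + 1; r < n; ++r) {
            const double f = a[perm[r]][k] / a[perm[k]][k];
            for (std::size_t j = k; j < n; ++j)
                a[perm[r]][j] -= f * a[perm[k]][j];
        }
    }
    return true;
}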
 
Thank you for your idea AlephZero.

How could it be possible that swapping rows isn't necessary? What if one of the diagonal elements equals zero? That simply means we have to swap that row with some other row that makes the diagonal element non-zero, right?

I have a question actually:

How can I detect that the matrix is non-invertible during the Gaussian elimination? What's going to happen during the elementary row operations? Am I going to get a column of zeros? Or what exactly?
 
TheDestroyer said:
How can I detect that the matrix is non-invertible during the Gaussian elimination? What's going to happen during the elementary row operations? Am I going to get a column of zeros?

If you get a row with only zeros, the matrix is non-invertible: a zero row shows that the original rows were linearly dependent, and a matrix with linearly dependent rows can never be reduced to the identity matrix.
 
Gauss-Jordan elimination will always leave you with a diagonal matrix (plain Gaussian elimination stops at an upper triangular one). If any of the diagonal entries is 0, the matrix is not invertible.
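
In code, that is exactly the zero-pivot test in the sketches above: if no row at or below the current diagonal position can supply a non-zero pivot for the column, the rows are linearly dependent and elimination stops. A quick demonstration using the invert() sketch from earlier in the thread (again, my own illustration):

Code:
#include <iostream>
// (also needs <vector> and <stdexcept>, plus the invert() sketch above)

int main() {
    // The second row is twice the first, so the rows are linearly
    // dependent: elimination finds no non-zero pivot for the last column.
    std::vector<std::vector<double>> s = {{1.0, 2.0},
                                          {2.0, 4.0}};
    try {
        invert(s);
    } catch (const std::runtime_error& e) {
        std::cout << e.what() << '\n'; // prints "matrix is singular"
    }
}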
 
Thank you so much :-)
 
