Gauss elimination algorithm for matrix inversion

Discussion Overview

The discussion revolves around the Gauss elimination algorithm for matrix inversion, specifically addressing concerns about handling diagonal elements that may equal zero, the implications of row swapping, and the conditions under which a matrix is non-invertible. The context includes theoretical considerations and practical implementation in programming.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant expresses concern about the implications of a zero diagonal element during matrix inversion and questions the validity of row swapping in this context.
  • Another participant suggests that swapping rows is a legitimate operation when applying row operations to achieve the identity matrix, using a simple example to illustrate this point.
  • A later reply indicates that the primary goal is to achieve the identity matrix on one side, regardless of the number of swaps performed.
  • One participant introduces the concept of "pivoting" in computational methods and mentions existing libraries that handle such operations, suggesting that explicit swaps may not be necessary in all cases.
  • Another participant questions how to detect non-invertibility during Gaussian elimination, proposing that a row of zeros indicates linear dependence and thus non-invertibility.
  • It is noted that if any diagonal entries are zero after Gaussian elimination, the matrix is not invertible.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of row swapping during matrix inversion, with some asserting it is essential when encountering zero diagonal elements, while others argue that it may not always be required. The discussion remains unresolved regarding the best approach to handle these scenarios.

Contextual Notes

Participants discuss the implications of row operations and the conditions for matrix invertibility without reaching a consensus on the necessity of pivoting or the handling of zero diagonal elements. There is also a lack of agreement on the detection of non-invertibility during the elimination process.

Who May Find This Useful

This discussion may be useful for individuals interested in numerical methods for matrix operations, particularly those implementing algorithms in programming or studying linear algebra concepts related to matrix inversion.

TheDestroyer
Hello guys,

I'm writing a C++ function for matrix inversion. I saw many algorithms online, but I'm concerned about the case when a diagonal element equals zero, which, for some reason, those sites don't take into account.

When we do elimination to solve a system of linear equations, it's fine to exchange two rows, since the columns represent the variables the equations are solved for.

But here I want to invert the matrix and then multiply it with a column vector whose rows correspond to the columns of the matrix I'm inverting (I need this for the Levenberg-Marquardt algorithm). How can I solve this problem? Can I simply undo the exchange after the inversion is done (e.g., swap rows 1 and 2 at the start, and at the end swap them back to restore the original order)? Could someone please explain the theoretical side of the problem? I mean, when the matrix is invertible, what kind of conditions will I get on the matrix elements?

Thank you for any efforts :)
 
I'm not sure I see a problem. It sounds like you are using the method of "row-reduce A to the identity matrix while simultaneously applying those row operations to the identity matrix to get A^{-1}". If that is what you are doing, swapping two rows is a perfectly legitimate operation.

To take a very simple example, consider inverting
\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}
which can be done by a single "swap":
\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}

Swap the two rows to get
\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}

And, indeed, this matrix is its own inverse.
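To make this concrete, here is a minimal C++ sketch of that Gauss-Jordan scheme with partial pivoting, applying every row operation (including swaps) to both matrices at once. The function name and the 1e-12 singularity tolerance are choices made for this example, not taken from any particular library:

```cpp
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// Invert an n x n matrix via Gauss-Jordan elimination with partial
// pivoting. `a` is stored row-major as a flat vector and is taken by
// value, so the caller's copy is untouched. Throws on a (numerically)
// singular matrix.
std::vector<double> invert(std::vector<double> a, std::size_t n) {
    std::vector<double> inv(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i) inv[i * n + i] = 1.0;  // identity

    for (std::size_t col = 0; col < n; ++col) {
        // Partial pivoting: pick the row with the largest |entry| in this column.
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[r * n + col]) > std::fabs(a[pivot * n + col]))
                pivot = r;
        if (std::fabs(a[pivot * n + col]) < 1e-12)
            throw std::runtime_error("matrix is singular");
        // Swap the pivot row into place in BOTH matrices.
        for (std::size_t c = 0; c < n; ++c) {
            std::swap(a[col * n + c], a[pivot * n + c]);
            std::swap(inv[col * n + c], inv[pivot * n + c]);
        }
        // Scale the pivot row so the diagonal entry becomes 1.
        double d = a[col * n + col];
        for (std::size_t c = 0; c < n; ++c) {
            a[col * n + c] /= d;
            inv[col * n + c] /= d;
        }
        // Eliminate this column from every other row.
        for (std::size_t r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = a[r * n + col];
            for (std::size_t c = 0; c < n; ++c) {
                a[r * n + c] -= f * a[col * n + c];
                inv[r * n + c] -= f * inv[col * n + c];
            }
        }
    }
    return inv;
}
```

Running it on the example above, the swap happens in both matrices, and the function returns the matrix itself, since it is its own inverse.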
 
Ah, I see! So you're saying swapping or not is totally irrelevant! The most important thing is achieving the identity matrix on the other side, even if I do a million swaps and/or any other elementary row operations.

I think I get it! Thank you, it was a stupid question anyway, born of my own confusion :-)
 

The buzzword for "swapping rows and/or columns" in computational methods is "pivoting". You should be able to find equation-solving algorithms that use pivoting on the web. Look at the LINPACK or LAPACK libraries, for example.

For some types of problem, the underlying maths guarantees that pivoting is never required (symmetric positive definite matrices are a classic example), so equation solvers that don't do pivoting are useful in practice.

In a computer algorithm, you don't necessarily have to explicitly swap the rows and columns around. All you need to do is remember the order in which you did the row operations on the matrix. That can often be done with just a list of integers representing the "logical" ordering of the matrix compared with its "physical" ordering in the computer's memory.
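A minimal sketch of that idea (the names here are made up for illustration): keep a permutation vector and index through it, so that "swapping two rows" is just swapping two integers:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Forward elimination with "logical" row swapping. Instead of moving
// rows in memory, keep a permutation vector: perm[i] is the physical
// row currently playing the role of logical row i.
// `a` is n x n, row-major. Illustrative sketch only; no singularity check.
void eliminate_with_perm(std::vector<double>& a, std::size_t n,
                         std::vector<std::size_t>& perm) {
    perm.resize(n);
    for (std::size_t i = 0; i < n; ++i) perm[i] = i;

    for (std::size_t col = 0; col < n; ++col) {
        // Find the best pivot among the remaining logical rows.
        std::size_t best = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[perm[r] * n + col]) > std::fabs(a[perm[best] * n + col]))
                best = r;
        std::swap(perm[col], perm[best]);  // swap two integers, not two rows

        // Eliminate the column below the (logical) pivot row.
        double p = a[perm[col] * n + col];
        for (std::size_t r = col + 1; r < n; ++r) {
            double f = a[perm[r] * n + col] / p;
            for (std::size_t c = col; c < n; ++c)
                a[perm[r] * n + c] -= f * a[perm[col] * n + c];
        }
    }
}
```

After the call, reading the rows in the order `perm[0], perm[1], ...` gives the upper triangular result, while the data never moved in memory; this is essentially what LAPACK's pivot index arrays record.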
 
Thank you for your idea AlephZero.

How could it be possible that swapping rows isn't necessary? What if one of the diagonal elements equals zero? That simply means we have to swap that row with some other row that makes the diagonal element non-zero, right?

I have a question actually:

How can I detect that the matrix is non-invertible during the Gaussian elimination? What's going to happen during the elementary row operations? Am I going to get a column of zeros, or what exactly?
 
If you get a row with only zeros, the matrix is non-invertible: a zero row shows that the original rows were linearly dependent, and a matrix with linearly dependent rows can never be reduced to the identity matrix.
 
You will always end up with an upper triangular matrix after Gaussian elimination (or a diagonal one if you continue with Gauss-Jordan). If any of the diagonal entries is 0, the matrix is not invertible.
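A sketch of that check in C++ (function name and tolerance are arbitrary choices for this example): run the forward elimination with partial pivoting, and declare the matrix singular as soon as no usable pivot remains in some column, i.e. when the diagonal entry stays (near) zero even after trying every swap:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Reduce `a` (n x n, row-major, passed by value) to upper triangular
// form with partial pivoting. A pivot that remains (near) zero even
// after row swapping means the remaining rows are linearly dependent,
// i.e. the matrix is singular. The 1e-12 tolerance is arbitrary.
bool is_singular(std::vector<double> a, std::size_t n) {
    for (std::size_t col = 0; col < n; ++col) {
        // Look for the largest available pivot in this column.
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[r * n + col]) > std::fabs(a[pivot * n + col]))
                pivot = r;
        if (std::fabs(a[pivot * n + col]) < 1e-12)
            return true;  // no usable pivot: rows below are dependent
        for (std::size_t c = 0; c < n; ++c)
            std::swap(a[col * n + c], a[pivot * n + c]);
        // Eliminate below the diagonal.
        for (std::size_t r = col + 1; r < n; ++r) {
            double f = a[r * n + col] / a[col * n + col];
            for (std::size_t c = col; c < n; ++c)
                a[r * n + c] -= f * a[col * n + c];
        }
    }
    return false;
}
```

Note that a zero on the diagonal of the *original* matrix proves nothing by itself (the swap example earlier in the thread has zeros on its diagonal and is invertible); it is a zero pivot that survives the swapping that signals singularity.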
 
Thank you so much :-)
 
