MHB Calculation of the inverse matrix - Number of operations

AI Thread Summary
The discussion centers on calculating the inverse of a regular n x n matrix using the Gauss algorithm and LU-decomposition. It is established that the inverse can be computed with n^3 + O(n^2) operations, where one operation is defined as a multiplication or division. A naive count via LU-decomposition gives 4/3 n^3 operations; the improvement to n^3 comes from exploiting the leading zeros of the unit vectors during forward substitution. Inversion using Gaussian elimination with back-substitution costs n^3 multiplications and n^3 + O(n^2) additions. Participants also note the importance of counting both multiplications and additions when comparing references, as older references may overlook additions due to their relative speed. The overall conclusion emphasizes the efficiency of the Gauss algorithm for matrix inversion.
mathmari
Hey! :o

Let $A$ be a regular $n\times n$ matrix for which the Gauss algorithm can be carried out.

If we choose as the right side $b$ the unit vectors $$e^{(1)}=(1, 0, \ldots , 0)^T, \ldots , e^{(n)}=(0, \ldots , 0, 1 )^T$$ and calculate the corresponding solutions $x^{(1)}, \ldots , x^{(n)}$ then the inverse matrix is $A^{-1}=[x^{(1)}, \ldots , x^{(n)}]$.
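A minimal NumPy sketch of this statement (the matrix below is just a hypothetical, well-conditioned example): solving $Ax=e^{(k)}$ for each unit vector and stacking the solutions as columns reproduces $A^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n)) + n * np.eye(n)  # a regular (invertible) example matrix

# Solve A x = e^(k) for each unit vector; the rows of np.eye(n) are e^(1), ..., e^(n).
cols = [np.linalg.solve(A, e) for e in np.eye(n)]
A_inv = np.column_stack(cols)           # A^{-1} = [x^(1), ..., x^(n)]

assert np.allclose(A @ A_inv, np.eye(n))
assert np.allclose(A_inv, np.linalg.inv(A))
```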

We can calculate the inverse with $n^3+O(n^2)$ operations. (1 operation = 1 multiplication or division)
If we calculate the solutions $x^{(1)}, \ldots , x^{(n)}$ using the LU-decomposition, we get $\frac{4}{3}n^3+O(n^2)$ operations, or not?

It is because we apply the Gauss algorithm, which requires $\frac{1}{3}n^3+O(n^2)$ operations, right?

How do we get $n^3+O(n^2)$?

Do we have to use another algorithm here?
 
mathmari said:
Hey! :o

Let $A$ be a regular $n\times n$ matrix for which the Gauss algorithm can be carried out.

If we choose as the right side $b$ the unit vectors $$e^{(1)}=(1, 0, \ldots , 0)^T, \ldots , e^{(n)}=(0, \ldots , 0, 1 )^T$$ and calculate the corresponding solutions $x^{(1)}, \ldots , x^{(n)}$ then the inverse matrix is $A^{-1}=[x^{(1)}, \ldots , x^{(n)}]$.

We can calculate the inverse with $n^3+O(n^2)$ operations. (1 operation = 1 multiplication or division)
If we calculate the solutions $x^{(1)}, \ldots , x^{(n)}$ using the LU-decomposition, we get $\frac{4}{3}n^3+O(n^2)$ operations, or not?

Hey mathmari! (Smile)

LU-decomposition is listed here as $\frac 23 n^3 +O(n^2)$, while QR-decomposition with Householder reflections (for numerical stability) is $\frac 43n^3+O(n^2)$. (Nerd)

mathmari said:
It is because we apply the Gauss algorithm, which requires $\frac{1}{3}n^3+O(n^2)$ operations, right?

How do we get $n^3+O(n^2)$?

Do we have to use another algorithm here?

That's indeed the cost of bringing the matrix into row echelon form, i.e. of computing the LU factors.
Afterwards we still need a forward and a back substitution for each of the $n$ unit vectors. The back substitutions cost $\frac 12 n^3 + O(n^2)$ extra, and the forward substitutions only $\frac 16 n^3 + O(n^2)$, because the leading zeros of each $e^{(k)}$ can be exploited. Altogether: $\frac 13 n^3 + \frac 16 n^3 + \frac 12 n^3 = n^3 + O(n^2)$. (Thinking)
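A sketch that reproduces this count empirically (hypothetical helper `invert_and_count`; Doolittle LU without pivoting, which is safe for the diagonally dominant test matrix used). Counting every multiplication and division gives exactly $n^3 + \frac{n^2}{2} - \frac{n}{2}$, i.e. $n^3 + O(n^2)$.

```python
import numpy as np

def invert_and_count(A):
    """Invert A via LU decomposition plus forward/back substitution for each
    unit vector, counting every multiplication and division."""
    n = A.shape[0]
    ops = 0
    LU = A.astype(float).copy()
    # Doolittle LU without pivoting: L (unit lower) and U share storage.
    for k in range(n - 1):
        for i in range(k + 1, n):
            LU[i, k] /= LU[k, k]                 # division
            ops += 1
            for j in range(k + 1, n):
                LU[i, j] -= LU[i, k] * LU[k, j]  # multiplication
                ops += 1
    inv = np.zeros((n, n))
    for col in range(n):
        # Forward solve L y = e^(col): y[0:col] = 0 and y[col] = 1, so the
        # leading zeros of the unit vector cost no operations at all.
        y = np.zeros(n)
        y[col] = 1.0
        for i in range(col + 1, n):
            s = 0.0
            for j in range(col, i):
                s += LU[i, j] * y[j]             # multiplication
                ops += 1
            y[i] = -s
        # Back solve U x = y.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            s = y[i]
            for j in range(i + 1, n):
                s -= LU[i, j] * x[j]             # multiplication
                ops += 1
            x[i] = s / LU[i, i]                  # division
            ops += 1
        inv[:, col] = x
    return inv, ops

rng = np.random.default_rng(0)
n = 10
A = rng.random((n, n)) + n * np.eye(n)  # diagonally dominant: no pivoting needed
inv, ops = invert_and_count(A)
assert np.allclose(A @ inv, np.eye(n))
assert ops == n**3 + n**2 // 2 - n // 2  # 1045 for n = 10
```

The three contributions are $\frac{n^3-n}{3}$ for the LU step, $\frac{n^3-n}{6}$ for the forward solves, and $\frac{n^2(n+1)}{2}$ for the back solves, which sum to exactly the asserted total.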
 
mathmari said:
We can calculate the inverse with $n^3+O(n^2)$ operations. (1 operation = 1 multiplication or division)

When comparing operation counts for different methods and from different references, it is useful to keep in mind (apologies if this is already obvious to everyone participating) that older references sometimes neglect additions (which include subtractions), because multiplications (which include divisions) used to be much slower and were therefore the determining factor.

I learned that inversion using Gaussian elimination with back-substitution costs $n^3$ multiplications (exactly) and $n^3 + O(n^2)$ additions. Interestingly, for Gauss-Jordan the count is precisely the same.

(Elimination with back-substitution for a single system costs $\frac{n^3}{3} + O(n^2)$ multiplications and likewise $\frac{n^3}{3} + O(n^2)$ additions.)
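To make the separate bookkeeping concrete, here is a small sketch (hypothetical helper `solve_and_count`, textbook elimination without pivoting on a diagonally dominant example matrix) that tallies the two kinds of operations independently for one system; both tallies grow like $\frac{n^3}{3}$, with only the lower-order terms differing.

```python
import numpy as np

def solve_and_count(A, b):
    """Solve A x = b by Gaussian elimination with back-substitution,
    counting multiplications/divisions and additions/subtractions separately."""
    n = A.shape[0]
    M = A.astype(float).copy()
    y = b.astype(float).copy()
    mults = adds = 0
    # Forward elimination, updating the right-hand side as we go.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i, k] / M[k, k]; mults += 1        # division
            for j in range(k + 1, n):
                M[i, j] -= m * M[k, j]; mults += 1; adds += 1
            y[i] -= m * y[k]; mults += 1; adds += 1  # right-hand side update
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        s = y[i]
        for j in range(i + 1, n):
            s -= M[i, j] * x[j]; mults += 1; adds += 1
        x[i] = s / M[i, i]; mults += 1               # division
    return x, mults, adds

rng = np.random.default_rng(0)
n = 10
A = rng.random((n, n)) + n * np.eye(n)  # diagonally dominant: no pivoting needed
b = rng.random(n)
x, mults, adds = solve_and_count(A, b)
assert np.allclose(A @ x, b)
assert mults == 430   # = (n^3 - n)/3 + n^2 for n = 10; leading term n^3/3
assert adds == 375    # = (n^3 - n)/3 + n(n-1)/2;        leading term n^3/3
```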
 