
Inverse of a matrix

  1. May 8, 2009 #1
    1. The problem statement, all variables and given/known data

    Is there an iterative method to calculate an inverse of a large matrix faster than a direct method (like Gaussian elimination, or inverse of LU, etc...) ?


    2. Relevant equations



    3. The attempt at a solution

    I tried the Gauss-Seidel iterative method to invert a matrix, but unfortunately it takes longer than a direct method.
     
  3. May 8, 2009 #2
    You can also try Newton-Raphson. Consider first computing the reciprocal of a real number. Given y we want to compute x = 1/y. This amounts to solving the equation:

    1/x - y = 0

    Newton-Raphson then gives:

    x_{n+1} = x_{n} - (1/x_{n} - y)/(-1/x_{n}^2) = x_{n} + x_{n}[1 - y x_{n}]

    This algorithm doesn't contain any divisions itself, so it is a true division algorithm. In the case of matrices you can simply take y to be a matrix and start iterating with some trial inverse x_{0}. Certain conditions have to be met for convergence. But it is clear that if the above algorithm converges, then the second term tends to zero, so the term in the square brackets tends to zero, and that means that x_{n} approaches the inverse of y.
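    In matrix form this is often called the Newton–Schulz iteration, X_{n+1} = X_n (2I − A X_n). A minimal sketch in Python/NumPy is below. The function name is illustrative, and the initial guess X_0 = A^T / (‖A‖_1 ‖A‖_∞) is an assumption not stated in the post above, but it is a standard choice that guarantees ‖I − A X_0‖ < 1 and hence convergence:

    ```python
    import numpy as np

    def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
        """Approximate A^{-1} via the iteration X_{n+1} = X_n (2I - A X_n)."""
        n = A.shape[0]
        # Safe initial guess: X0 = A^T / (||A||_1 * ||A||_inf),
        # which makes the spectral radius of (I - A X0) less than 1.
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(max_iter):
            R = I - A @ X              # residual; must tend to zero for convergence
            if np.linalg.norm(R) < tol:
                break
            X = X + X @ R              # algebraically identical to X @ (2I - A @ X)
        return X
    ```

    Convergence is quadratic once the residual is small, so relatively few iterations are needed; each iteration costs two matrix multiplications, which is why this is rarely faster than a direct factorization for dense matrices.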
     
  4. May 8, 2009 #3
    Hello Count Iblis,

    I tried your solution, but it doesn't converge if I randomly choose the value of x_{0}.

    What are the conditions for convergence? And how does one find a proper x_{0}?

    Thanks.
     
  6. May 8, 2009 #5

    minger


    There are many methods, as inverting matrices is at the heart of any FEA solver. Looking through the ANSYS theory reference, aside from a direct solver there is a sparse direct solver which, as you mentioned, decomposes the matrix into an upper and a lower triangular matrix. The rest is mostly copied and pasted from that reference.

    The frontal (or wavefront) solution procedure is discussed by Irons(17) and Melosh and Bamford(25). The number of equations which are active after any element has been processed during the solution procedure is called the wavefront at that point. The method used places a wavefront restriction on the problem definition, which depends upon the amount of memory available for a given problem. Many thousand DOFs (degrees of freedom) on the wavefront can be handled in memory on some currently available computers. Wavefront limits tend to be restrictive only for the analysis of arbitrary 3-D solids. In the wavefront procedure, the sequence in which the elements are processed in the solver (the element “order”) is crucial to minimize the size of the wavefront.
    [tex] \sum^L_{j=1} K_{kj}u_j = F_k[/tex]
    To eliminate a typical equation, i=k, the equation is first normalized to:
    [tex] \sum^L_{j=1} \frac{K_{ij}}{K_{ii}}u_j = \frac{F_i}{K_{ii}}[/tex]
    Then rewrite
    [tex] \sum^L_{j=1} K^*_{ij}u_j = F^*_i[/tex]
    Where
    [tex]K^*_{ij} = \frac{K_{ij}}{K_{ii}}[/tex]
    [tex]F^*_{i} = \frac{F_{i}}{K_{ii}}[/tex]
    Where Kii is known as the pivot term.
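    To make the normalization and elimination step above concrete, here is a minimal dense Gauss–Jordan sketch in Python/NumPy. This is only an illustration of the pivot arithmetic in the equations above, not ANSYS's actual frontal solver (which processes equations element by element and keeps only the active wavefront in memory); the function name is made up for this example:

    ```python
    import numpy as np

    def eliminate_pivot(K, F, k):
        """One elimination step: normalize row k by the pivot K[k,k]
        (giving K*_{kj} = K_{kj}/K_{kk} and F*_k = F_k/K_{kk}),
        then eliminate column k from every other row."""
        K = K.astype(float).copy()
        F = F.astype(float).copy()
        pivot = K[k, k]
        K[k, :] /= pivot           # normalized row: K*_{kj}
        F[k] /= pivot              # normalized right-hand side: F*_k
        for i in range(K.shape[0]):
            if i != k:
                factor = K[i, k]
                K[i, :] -= factor * K[k, :]
                F[i] -= factor * F[k]
        return K, F
    ```

    Applying this step for every k in turn reduces K to the identity, at which point F holds the solution u. In a real solver one would also pivot for stability; that is omitted here to keep the sketch close to the equations.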

    There are also several iterative solvers, including several conjugate gradient variants along with a few others. I would try to find some literature on FEA solvers; you will find many ways to solve large systems of linear equations.
     