
Negative diagonal elements in Incomplete Cholesky Decomposition

  1. Mar 15, 2012 #1
    Dear all,

    I'm facing a problem performing incomplete Cholesky decomposition of sparse matrices. I use the algorithm that avoids taking square roots, given here: http://en.wikipedia.org/wiki/Cholesky_decomposition, and apply it to sparse matrices by simply skipping the zeros (no fill-in). My matrices are certainly symmetric and positive definite, but the algorithm sometimes gives negative diagonal elements.
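
    In code, what I do is roughly the following (a dense Python sketch just to show the logic; my real code works on a sparse format):

    [code]
    import numpy as np

    def incomplete_ldlt(A):
        """Square-root-free incomplete Cholesky (LDL^T), no fill-in.

        Follows the LDL variant from the Wikipedia page, but L[i, j]
        is only computed where A[i, j] != 0.  Dense storage for clarity.
        """
        n = A.shape[0]
        L = np.eye(n)
        D = np.zeros(n)
        for j in range(n):
            D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
            # Nothing here prevents D[j] <= 0 once fill-in is dropped --
            # this is exactly where the negative diagonals show up.
            for i in range(j + 1, n):
                if A[i, j] != 0.0:  # "no fill-in": skip structural zeros
                    L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
        return L, D
    [/code]

    I have two questions regarding this: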

    1. Is this a consequence of ignoring the zero elements of the sparse matrix, or is it due to error accumulation?

    2. To get around the problem, I multiply the diagonal values by a number larger than 1.0 before the decomposition (sketched below). For small matrices 1.05 may be enough, but as the matrix size grows that factor stops being large enough. With a factor of 1.2, however, the algorithm has never given negative diagonal values, even for matrices of 1,000,000 by 1,000,000. Does anyone have an explanation of why this number helps, and what is the smallest factor that assures all-positive diagonal elements?
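
    The workaround, continuing the sketch above (the factor 1.2 is purely empirical):

    [code]
    def scaled_incomplete_ldlt(A, factor=1.2):
        """Scale the diagonal up before factorizing, so the computed D
        stays positive.  Equivalent to factorizing A + (factor-1)*diag(A)."""
        A_shifted = A.copy()
        A_shifted[np.diag_indices_from(A_shifted)] *= factor
        return incomplete_ldlt(A_shifted)
    [/code]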

    Your help would be appreciated.
     
  3. Mar 15, 2012 #2

    AlephZero

    Science Advisor
    Homework Helper

    By "incomplete decomposition" do you mean ##LDL^T## decomposition?

    Changing the diagonal terms of the matrix to "make it work" doesn't make any sense to me. You can easily check if your code is correct by multiplying the ##LDL^T## matrices numerically and comparing with the original matrix.

    If your matrices really are positive definite, all the terms in D will be positive.

    You can't "ignore fill-in". For example
    $$\begin{bmatrix}1 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 3 \end{bmatrix} =
    \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$$
    The 0 in the original matrix is "filled in" in the decomposition.
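
    You can check this numerically in a few lines (NumPy here, just as an illustration):

    [code]
    import numpy as np

    A = np.array([[1., 1., 1.],
                  [1., 2., 0.],
                  [1., 0., 3.]])
    L = np.array([[1.,  0., 0.],
                  [1.,  1., 0.],
                  [1., -1., 1.]])
    D = np.eye(3)

    print(L @ D @ L.T)                # reproduces A exactly

    L_nofill = L.copy()
    L_nofill[2, 1] = 0.0              # drop the filled-in entry
    print(L_nofill @ D @ L_nofill.T)  # (3,3) entry is now 2, not 3
    [/code]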
     
  4. Mar 15, 2012 #3
    Thanks AlephZero,

    Sorry, I should have explained a little more.

    The incomplete decomposition is something special: it is used as a preconditioner for solving systems of equations with (large) sparse symmetric positive definite matrices.

    The matrices are large, usually more than 50,000 by 50,000, and very sparse, with fewer than 100 non-zero elements per row. Such systems of equations appear in numerical methods such as the finite element or finite difference method. A complete decomposition is almost impossible due to the size of the matrix, and since the system is solved by iterative methods, even an approximation of [itex]LDL^T[/itex] helps a lot. The approximation is called the incomplete Cholesky decomposition; it is the [itex]LDL^T[/itex] of a symmetric positive definite matrix in which the algorithm is applied to the nonzero elements of the matrix only.

    Since the decomposition itself is an approximation, some change to it may still be all right. Let me put it mathematically.

    Imagine the following system of equations:

    [itex]Ax=b[/itex]. If I can find a matrix M such that [itex]M^{-1}A \approx I[/itex], then the equation [itex]M^{-1}Ax=M^{-1}b[/itex] can be solved in far fewer iterations than the original one. In most cases the iterative method doesn't even converge on the original equation. The incomplete [itex]LDL^T[/itex] is good enough for this purpose and, thanks to the sparsity of the matrix, it is computationally affordable.
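
    For what it's worth, this is how the preconditioner plugs into a solver, e.g. SciPy's conjugate gradient. Here SciPy's built-in incomplete LU (spilu) stands in for my incomplete Cholesky, since both produce a sparse [itex]M \approx A[/itex], and the 1-D Poisson matrix is just a convenient SPD test case:

    [code]
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # SPD test matrix: 1-D Poisson (finite differences), n = 1000
    n = 1000
    A = sp.diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format='csc')
    b = np.ones(n)

    # Incomplete factorization M ~ A, wrapped as an operator applying M^{-1}
    ilu = spla.spilu(A, drop_tol=1e-4)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    x, info = spla.cg(A, b, M=M)  # preconditioned CG; info == 0 on success
    [/code]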

    When we perform an "incomplete" decomposition of A, we get [itex]LDL^T = M \approx A[/itex]; and since scaling the diagonal by 1.2 is only a further perturbation, the factorization of the scaled matrix still gives [itex]M \approx A[/itex].

    Perhaps the negative diagonal elements are in fact a consequence of the "no fill-in" rule. The diagonals are guaranteed to be positive (for positive definite matrices) only in the complete decomposition; the incomplete one can certainly behave differently. I'm not clear about the details. The method is widely used in finite element software, but I don't know how they overcome the problem. My trick works, in the sense that the equations are solved in a low enough number of iterations, but I fear that in some cases one of the diagonal elements could still come out zero. That mistake would be unforgivable.
     
  5. Mar 15, 2012 #4

    AlephZero

    Science Advisor
    Homework Helper

    OK, now I understand what you meant.

    That seems a bit confused. You can't find "a" matrix M such that ##M^{-1}A \approx I##; the inverse of the matrix A is unique. To speed up an iterative solver, you want to find a matrix M which is suitably sparse, such that ##M^{-1}A## has a smaller condition number than A, but M is not the inverse of A.
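
    A small numerical illustration of that point (again using SciPy's incomplete LU as a stand-in preconditioner, purely for demonstration):

    [code]
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 100
    A = sp.diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format='csc')

    ilu = spla.spilu(A, drop_tol=1e-2)  # sparse M ~ A
    Minv_A = ilu.solve(A.toarray())     # columns of M^{-1} A

    print(np.linalg.cond(A.toarray()))  # large: grows like n^2 for this matrix
    print(np.linalg.cond(Minv_A))       # much smaller after preconditioning
    [/code]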

    Correct, the algorithm as you describe it is not guaranteed to work for any matrix.

    This paper http://www.cs.indiana.edu/pub/techreports/TR394.pdf and the references in it might be useful.
     
  6. Mar 15, 2012 #5
    Many thanks.

    Yes, it's about reducing the condition number. I don't fully understand the concept of a "condition number", but I think that when the matrix becomes more diagonally dominant, the condition number becomes smaller.

    Thanks for the paper.
     