Negative diagonal elements in Incomplete Cholesky Decomposition

  • Context: Graduate
  • Thread starter: Hassan2
  • Tags: Elements, Negative

Discussion Overview

The discussion revolves around the challenges faced in performing incomplete Cholesky decomposition on sparse symmetric positive definite matrices. Participants explore the implications of negative diagonal elements that arise during the decomposition process and the potential causes, including error accumulation and the handling of zero elements in sparse matrices.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes their use of an algorithm for incomplete Cholesky decomposition that avoids square roots and skips zeros, leading to negative diagonal elements.
  • Another participant questions the validity of modifying diagonal elements to avoid negatives, suggesting that if the matrices are truly positive definite, the diagonal terms should remain positive.
  • Some participants propose that the negative diagonal elements may result from the "no-fill-in" approach of the incomplete decomposition, which differs from complete decompositions that guarantee positive diagonals.
  • A participant explains the context of using incomplete Cholesky decomposition as a preconditioner in solving large sparse systems of equations, emphasizing its computational efficiency despite the approximation.
  • There is a discussion about the condition number of matrices and its relation to diagonal dominance, with one participant expressing confusion about the concept.
  • A later reply references a paper that may provide additional insights into the issues discussed.

Areas of Agreement / Disagreement

Participants express differing views on the appropriateness of modifying diagonal elements and the implications of negative values. There is no consensus on the best approach to handle the issues arising from incomplete Cholesky decomposition.

Contextual Notes

The discussion highlights limitations related to the assumptions of positive definiteness and the impact of sparsity on the decomposition process. The exact conditions under which the algorithm may fail to produce positive diagonal elements remain unresolved.

Hassan2
Dear all,

I'm facing a problem in performing incomplete Cholesky decomposition of sparse matrices. I use the algorithm that avoids taking square roots, given here: http://en.wikipedia.org/wiki/Cholesky_decomposition, and apply it to sparse matrices by simply skipping the zeros (no fill-in). My matrices are certainly symmetric and positive definite, but the algorithm sometimes gives negative diagonal elements. I have two questions regarding this:

1. Is this a consequence of ignoring the zero elements in the sparse matrices, or is it due to error accumulation?

2. To work around the problem, I multiply the diagonal values by a number larger than 1.0 before decomposition. For small matrices, 1.05 may be enough, but as the matrix size grows the factor is no longer large enough. With a factor of 1.2, however, the algorithm has never given negative diagonal values, even for matrices of 1,000,000 by 1,000,000. Does anyone have an explanation for why this helps, and what is the smallest factor that assures all-positive diagonal elements?
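In code, what I'm doing is roughly the following, a minimal dense NumPy sketch for illustration only; `incomplete_ldlt` and `diag_scale` are names I'm making up here, not a library routine:

```python
import numpy as np

def incomplete_ldlt(A, diag_scale=1.0):
    """Square-root-free LDL^T restricted to the nonzero pattern of A.

    diag_scale > 1.0 multiplies the diagonal before factoring (the trick
    described above). Dense sketch for illustration; a real implementation
    would work on a sparse storage format."""
    A = A.astype(float).copy()
    n = A.shape[0]
    A[np.diag_indices(n)] *= diag_scale
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        # D_j = A_jj - sum_{k<j} L_jk^2 D_k  (square-root-free formula)
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            if A[i, j] == 0.0:   # "no fill-in": skip structural zeros of A
                continue
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D
```

With `diag_scale=1.0` this is the plain square-root-free algorithm applied only where ##A## is nonzero; the question is why `diag_scale=1.2` keeps every `D[j]` positive.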

Your help would be appreciated.
 
By "incomplete decomposition" do you mean ##LDL^T## decomposition?

Changing the diagonal terms of the matrix to "make it work" doesn't make any sense to me. You can easily check if your code is correct by multiplying the ##LDL^T## matrices numerically and comparing with the original matrix.

If your matrices really are positive definite, all the terms in D will be positive.

You can't "ignore fillin". For example
$$\begin{bmatrix}1 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 3 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$$
The 0 in the original matrix is "filled in" in the decomposition.
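You can confirm this numerically; since ##D## is the identity for this matrix, the plain Cholesky factor coincides with the unit lower-triangular ##LDL^T## factor:

```python
import numpy as np

# The 3x3 example above: the zero at position (3, 2) of A fills in.
A = np.array([[1., 1., 1.],
              [1., 2., 0.],
              [1., 0., 3.]])

L = np.linalg.cholesky(A)  # D = I here, so this equals the LDL^T factor
assert L[2, 1] == -1.0     # fill-in: A has a 0 in this position, L does not
assert np.allclose(L @ L.T, A)
```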
 
Thanks AlephZero,

Sorry, I should have explained a bit more.

The incomplete decomposition is something special: it is used as a preconditioner in solving systems of equations with (large) sparse symmetric positive definite matrices.

The matrices are large, usually larger than 50,000 by 50,000, and very sparse, with fewer than 100 non-zero elements per row. Such systems of equations appear in numerical methods such as the finite element method or the finite difference method. A complete decomposition is almost impossible due to the size of the matrix, and since the system is solved by iterative methods, even an approximation of ##LDL^T## helps a lot. The approximation is called incomplete Cholesky decomposition, which is in fact the ##LDL^T## decomposition of a symmetric positive definite matrix carried out on the nonzero elements of the matrix only.

Since the decomposition itself is an approximation, some change to it may still be all right. Let me explain with some mathematics.

Imagine the following system of equations: ##Ax=b##. If I can find a matrix ##M## so that ##M^{-1}A\approx I##, then the equation ##M^{-1}Ax=M^{-1}b## can be solved in a much smaller number of iterations than the original equation. In most cases the iterative method on the original equation doesn't even converge. The incomplete ##LDL^T## is good enough for this purpose, and due to the sparsity of the matrix, it's computationally affordable.
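As a sketch of how such a preconditioner is used, here is a small hand-rolled preconditioned conjugate gradient loop; `pcg` and `M_solve` are illustrative names, and a simple Jacobi (diagonal) ##M## stands in for the incomplete ##LDL^T##:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradients; M_solve(r) applies M^{-1} to r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Laplacian test problem; Jacobi M^{-1} = D^{-1} stands in for the
# incomplete-Cholesky preconditioner.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
```

The only place the preconditioner enters is the `M_solve(r)` call; with an incomplete ##LDL^T## that call would be two sparse triangular solves plus a diagonal scaling.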

When we perform an "incomplete" decomposition of ##A##, then ##LDL^T = M \approx A##; likewise, with the diagonal scaled by 1.2 before decomposition, we still have ##M \approx A##.

Perhaps the negative diagonal elements are in fact a consequence of the "no-fill-in" approach. The diagonals are guaranteed to be positive (for positive definite matrices) in a complete decomposition; the incomplete one certainly gives a different result, and I'm not clear about the details. The method is widely used in finite element software, but I don't know how they overcome the problem. My trick works, since I can solve the equations in a low enough number of iterations, but I fear that in some cases one of the diagonal elements could become zero. That mistake would be unforgivable.
 
Hassan2 said:
The incomplete decomposition is something special and it is used as a preconditioner in solving systems of equations with (large) sparse symmetric positive definite matrices. [...] The approximation is called incomplete Cholesky decomposition, which is in fact the ##LDL^T## of symmetric positive definite matrices. In this approximation, the algorithm is done on nonzero elements of the matrix only.
OK, now I understand what you meant.

Hassan2 said:
Imagine the following system of equations: ##Ax=b##. If I can find a matrix ##M## so that ##M^{-1}A\approx I##, then the equation ##M^{-1}Ax=M^{-1}b## can be solved in a much smaller number of iterations than the original equation.

That seems a bit confused. You can't find "a" matrix ##M## such that ##M^{-1}A\approx I##; the inverse of the matrix ##A## is unique. To speed up an iterative solver, you want a matrix ##M## that is suitably sparse, such that ##M^{-1}A## has a smaller condition number than ##A##, even though ##M## is not the inverse of ##A##.

Perhaps the negative diagonal elements are in fact the consequence of "no-fill-in".
Correct; the algorithm as you describe it is not guaranteed to work for every matrix.

This paper http://www.cs.indiana.edu/pub/techreports/TR394.pdf and the references in it might be useful.
 
Many thanks.

Yes, it's about reducing the condition number. I don't fully understand the concept of "condition number", but I think that when the matrix becomes more diagonally dominant, the condition number becomes smaller.
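That intuition can at least be checked numerically for symmetric positive definite matrices: adding a positive multiple of the identity (making the matrix more diagonally dominant) always shrinks the 2-norm condition number, since ##\kappa(A+cI) = (\lambda_{max}+c)/(\lambda_{min}+c)##. A quick check with a random SPD matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 1e-3 * np.eye(100)   # SPD, typically ill-conditioned

# For SPD A, cond(A + c I) = (lambda_max + c) / (lambda_min + c), which
# strictly decreases as c grows: more diagonal dominance, smaller cond.
assert np.linalg.cond(A + 1.0 * np.eye(100)) < np.linalg.cond(A)
```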

Thanks for the paper.
 
