Negative diagonal elements in Incomplete Cholesky Decomposition

  • Context: Graduate
  • Thread starter: Hassan2
  • Tags: Elements, Negative
SUMMARY

The discussion centers on the challenges of performing incomplete Cholesky decomposition on sparse matrices, specifically the occurrence of negative diagonal elements. The algorithm used avoids square roots and operates on non-zero elements only, which can introduce errors through the "no-fill-in" rule. Participants confirm that while the incomplete decomposition is a useful preconditioner for solving large, sparse symmetric positive definite systems, it may yield negative diagonal values unless adjustments, such as scaling the diagonal elements by a factor of about 1.2, are applied to keep them positive.

PREREQUISITES
  • Understanding of incomplete Cholesky decomposition
  • Familiarity with sparse matrix representations
  • Knowledge of symmetric positive definite matrices
  • Basic concepts of iterative methods in numerical analysis
NEXT STEPS
  • Research the properties of symmetric positive definite matrices
  • Learn about the condition number and its impact on matrix stability
  • Explore advanced techniques in incomplete Cholesky decomposition
  • Study numerical methods for solving large sparse systems, such as finite element and finite difference methods
USEFUL FOR

Mathematicians, data scientists, and engineers working with numerical methods, particularly those involved in solving large sparse systems of equations using incomplete Cholesky decomposition.

Hassan2
Dear all,

I'm facing a problem in performing incomplete Cholesky decomposition of sparse matrices. I use the algorithm that avoids taking square roots, given here: http://en.wikipedia.org/wiki/Cholesky_decomposition , and apply it to sparse matrices by simply skipping the zeros (no fill-in). My matrices are certainly symmetric and positive definite, but the algorithm sometimes gives negative diagonal elements. I have two questions regarding this:

1. Is this a consequence of ignoring zero elements in the sparse matrices, or is it due to error accumulation?

2. To get around the problem, I multiply the diagonal values by a number larger than 1.0 before decomposition. For small matrices, 1.05 may be enough, but as the matrix size grows, that factor is no longer sufficient. With a factor of 1.2, however, the algorithm has never given negative diagonal values, even for matrices of 1,000,000 by 1,000,000. Does anyone have an explanation of why this factor helps, and what the smallest factor is that assures all-positive diagonal elements?
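For concreteness, here is a minimal dense-storage sketch of the scheme just described: the square-root-free ##LDL^T## recurrence restricted to the non-zero pattern of the matrix, with the diagonal boost applied up front. The function name and the `boost` parameter are illustrative only, not the actual code:

```python
import numpy as np

def incomplete_ldl(A, boost=1.0):
    """No-fill incomplete LDL^T factorization (dense storage for clarity).

    boost > 1 multiplies the diagonal of A before factorization, as in the
    trick described above. Returns (L, d) with L unit lower triangular and
    d the diagonal of D.
    """
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = boost * a_jj - sum_{k<j} L_jk^2 d_k
        d[j] = boost * A[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        if d[j] <= 0.0:
            raise ValueError(f"non-positive pivot d[{j}] = {d[j]:g}")
        for i in range(j + 1, n):
            # No-fill rule: compute L_ij only where A_ij is non-zero;
            # skipped positions stay zero, which is the source of the error
            if A[i, j] != 0.0:
                L[i, j] = (A[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    return L, d
```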

Your help would be appreciated.
 
By "incomplete decomposition" do you mean ##LDL^T## decomposition?

Changing the diagonal terms of the matrix to "make it work" doesn't make any sense to me. You can easily check if your code is correct by multiplying the ##LDL^T## matrices numerically and comparing with the original matrix.

If your matrices really are positive definite, all the terms in D will be positive.

You can't "ignore fill-in". For example:
$$\begin{bmatrix}1 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 3 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$$
The 0 in the original matrix is "filled in" in the decomposition.
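Both of these points can be checked numerically along the lines suggested above. A small sketch, using the matrices from this example (NumPy is just an assumed tool here):

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [1., 2., 0.],
              [1., 0., 3.]])
L = np.array([[1.,  0., 0.],
              [1.,  1., 0.],
              [1., -1., 1.]])  # L[2,1] = -1 is the fill-in: A[2,1] == 0
D = np.eye(3)

# Exact factorization: the product recovers A
assert np.allclose(L @ D @ L.T, A)

# Drop the fill-in entry (the "no-fill" rule) and the identity breaks
L_nf = L.copy()
L_nf[2, 1] = 0.0
print(L_nf @ D @ L_nf.T - A)  # non-zero residual in the last row/column
```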
 
Thanks AlephZero,

Sorry, I should have explained a little more.

The incomplete decomposition is something special: it is used as a preconditioner when solving systems of equations with (large) sparse symmetric positive definite matrices.

The matrices are large, usually larger than 50,000 by 50,000, and very sparse, with fewer than 100 non-zero elements per row. Such systems of equations appear in numerical methods such as the finite element and finite difference methods. A complete decomposition is practically impossible due to the size of the matrix, and since the system is solved by iterative methods, even an approximation of ##LDL^T## helps a lot. The approximation is called incomplete Cholesky decomposition; it approximates the ##LDL^T## factorization of a symmetric positive definite matrix. In this approximation, the algorithm operates only on the non-zero elements of the matrix.

Since the decomposition itself is an approximation, some change to it may still be acceptable. Let me explain with some mathematics.

Imagine the following system of equations:

##Ax=b##. If I can find a matrix ##M## such that ##M^{-1}A \approx I##, then the equation ##M^{-1}Ax = M^{-1}b## can be solved in far fewer iterations than the original equation. In most cases the iterative method doesn't even converge on the original equation. The incomplete ##LDL^T## is good enough for this purpose and, thanks to the sparsity of the matrix, it is computationally affordable.
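For illustration, a sketch of how such a preconditioner is typically applied inside a conjugate gradient solver, assuming a sparse CSR ##L## (with its unit diagonal stored explicitly) and ##d## from the incomplete factorization; `make_preconditioner` is a hypothetical name, and SciPy's `cg` stands in for whatever iterative solver is in use:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def make_preconditioner(L, d):
    """Wrap M = L D L^T as an operator that applies M^{-1} to a vector."""
    n = L.shape[0]
    Lt = sp.csr_matrix(L.T)  # precompute the transpose in CSR form
    def apply_Minv(r):
        y = spsolve_triangular(L, r, lower=True)       # forward solve L y = r
        y /= d                                         # diagonal scale D z = y
        return spsolve_triangular(Lt, y, lower=False)  # back solve L^T x = z
    return LinearOperator((n, n), matvec=apply_Minv)

# Usage (A, b hypothetical): x, info = cg(A, b, M=make_preconditioner(L, d))
```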

When we perform an "incomplete" decomposition of ##A##, we get ##LDL^T = M \approx A##; since ##M## is only an approximation anyway, the factorization of the matrix with its diagonal boosted by 1.2 still satisfies ##M \approx A##.

Perhaps the negative diagonal elements are in fact a consequence of "no-fill-in". The diagonals are guaranteed to be positive (for positive definite matrices) in a complete decomposition; the incomplete one certainly gives a different result, and I'm not clear about the details. The method is widely used in finite element software, but I don't know how they overcome the problem. My trick works because I can still solve the equations in a low enough number of iterations, but I fear that in some cases one of the diagonal elements could become zero. That mistake would be unforgivable.
 
Hassan2 said:
Thanks AlephZero,

Sorry, I should have explained a little more.

The incomplete decomposition is something special: it is used as a preconditioner when solving systems of equations with (large) sparse symmetric positive definite matrices.

The matrices are large, usually larger than 50,000 by 50,000, and very sparse, with fewer than 100 non-zero elements per row. Such systems of equations appear in numerical methods such as the finite element and finite difference methods. A complete decomposition is practically impossible due to the size of the matrix, and since the system is solved by iterative methods, even an approximation of ##LDL^T## helps a lot. The approximation is called incomplete Cholesky decomposition; it approximates the ##LDL^T## factorization of a symmetric positive definite matrix. In this approximation, the algorithm operates only on the non-zero elements of the matrix.
OK, now I understand what you meant.

Hassan2 said:
Imagine the following system of equations:

##Ax=b##. If I can find a matrix ##M## such that ##M^{-1}A \approx I##, then the equation ##M^{-1}Ax = M^{-1}b## can be solved in far fewer iterations than the original equation.

That seems a bit confused. You can't find "a" matrix ##M## such that ##M^{-1}A \approx I##; the inverse of the matrix ##A## is unique. To speed up an iterative solver, you want to find a matrix ##M## which is suitably sparse, such that ##M^{-1}A## has a smaller condition number than ##A##, but ##M## is not the inverse of ##A##.

Hassan2 said:
Perhaps the negative diagonal elements are in fact a consequence of "no-fill-in".

Correct; the algorithm as you describe it is not guaranteed to work for an arbitrary matrix.

This paper http://www.cs.indiana.edu/pub/techreports/TR394.pdf and the references in it might be useful.
 
Many thanks.

Yes, it's about reducing the condition number. I don't fully understand the concept of the "condition number", but I think that when the matrix becomes more diagonally dominant, the condition number becomes smaller.
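That intuition can be checked numerically: for a symmetric positive definite matrix the condition number is ##\lambda_{max}/\lambda_{min}##, and adding ##\alpha## to every diagonal entry shifts all eigenvalues up by ##\alpha##, which shrinks that ratio. A quick sketch on a synthetic matrix (sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T + 1e-3 * np.eye(200)  # symmetric positive definite, ill-conditioned

# Boosting the diagonal shifts every eigenvalue up by alpha,
# so the ratio lambda_max / lambda_min (the condition number) drops
for alpha in (0.0, 1.0, 10.0):
    print(alpha, np.linalg.cond(A + alpha * np.eye(200)))
```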

Thanks for the paper.
 
