Uncovering Surprising Properties of the Cholesky Decomposition

In summary, the Cholesky decomposition can be defined as X = AB, where A is lower triangular and B = A'. Generally Y = BA is not X, but Y is again a positive definite matrix, so repeating the construction gives a map from one positive definite matrix to another, X -> Y -> Z -> ... Numerically, this sequence seems to converge to a diagonal matrix, and its rate of convergence could potentially be used to rank matrices. For 2x2 matrices the map can be written explicitly and has some simple monotonicity properties, but no major scientific breakthrough is expected from it.
  • #1
amateur82
I noticed a funny thing. The Cholesky decomposition can be defined as X = AB, where A is lower triangular. Generally Y = BA is not X, but Y seems to be a positive definite matrix. I wonder if there are any special properties of the pair (X, Y). I know that a positive definite matrix can be interpreted as a metric, so a pair of conjugate metrics?

It is also funny that the product AB is not commutative. You would think so, since A=B'. So when you map by X, first you turn to direction B, and then to orthogonal direction. For some reason this seems to be completely different than turning first to orthogonal direction and then to direction B...

edit: played around more and found out that it's not actually a pair but a sequence of positive definite matrices! chol(Y) doesn't involve A and B, but some other triangular matrices. So there is a map from one psd matrix to another: X -> Y -> Z... Does anybody know where this sequence leads?
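For anyone who wants to reproduce the observation, here is a small NumPy sketch (the example matrix is mine, not from the post). With A = chol(X) lower triangular and B = A', the matrix Y = BA is again symmetric positive definite but generally differs from X:

```python
import numpy as np

def cholesky_step(X):
    """One step of the map X -> Y: factor X = L @ L.T, return L.T @ L."""
    L = np.linalg.cholesky(X)
    return L.T @ L

# Example symmetric positive definite matrix (not from the post)
X = np.array([[4.0, 2.0],
              [2.0, 3.0]])
Y = cholesky_step(X)

print(np.allclose(Y, Y.T))                # True: Y is symmetric
print(np.all(np.linalg.eigvalsh(Y) > 0))  # True: Y is positive definite
print(np.allclose(Y, X))                  # False: in general Y != X
```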
  • #3
A diagonal matrix is trivially a fixed point of this process (for a diagonal matrix, A = B, and consequently X = Y). Numerical experiments suggest that all these sequences converge to a diagonal matrix, yet the rate of convergence differs even for matrices of the same dimension. So perhaps this rate could be used as a 'grade' to rank matrices. For 2x2 matrices, the map is easy to give explicitly. If we denote an arbitrary member of the sequence by X(t), with entries x₁₁(t), x₁₂(t), x₂₂(t), we have

x₁₁(t+1) = x₁₁(t) + x₁₂(t)²/x₁₁(t)
x₁₂(t+1) = (x₁₂(t)/√x₁₁(t)) · √(x₂₂(t) − x₁₂(t)²/x₁₁(t))
x₂₂(t+1) = x₂₂(t) − x₁₂(t)²/x₁₁(t).

So the map increases x₁₁ and contracts x₂₂ by the same amount x₁₂(t)²/x₁₁(t) (so the trace is preserved), and it maintains the sign of x₁₂. This wasn't true for the off-diagonals of larger matrices, at least according to numerical trials. It would probably be easy to derive other properties of the Cholesky sequence in M(2x2); please feel free to try. I'm not expecting any major scientific breakthrough from here!
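For what it's worth, this iteration is known in numerical analysis: it is the Cholesky variant of Rutishauser's LR algorithm, and for a symmetric positive definite start it converges to a diagonal matrix holding the eigenvalues of the starting matrix in decreasing order. A short NumPy check on a 2x2 example (matrix mine):

```python
import numpy as np

def cholesky_step(X):
    """One step of the map: X = L @ L.T, return L.T @ L."""
    L = np.linalg.cholesky(X)
    return L.T @ L

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues are 3 and 1
for _ in range(50):
    X = cholesky_step(X)

print(abs(X[0, 1]))   # essentially 0: the off-diagonal has vanished
print(np.diag(X))     # approximately [3., 1.]: eigenvalues, largest first
```

Each step is a similarity transform (Y = L⁻¹ X L), which is why the eigenvalues are preserved along the sequence.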
 

1. What is the Cholesky decomposition?

The Cholesky decomposition is a matrix factorization method used to decompose a symmetric, positive definite matrix into the product of a lower triangular matrix and its conjugate transpose.
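As a minimal illustration of the definition (example matrix my own), NumPy's built-in routine returns the lower-triangular factor L with X = L Lᵀ:

```python
import numpy as np

# Example symmetric positive definite matrix
X = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(X)   # lower-triangular Cholesky factor

print(np.allclose(L, np.tril(L)))  # True: L is lower triangular
print(np.allclose(L @ L.T, X))     # True: L @ L.T reproduces X
```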

2. What are the main applications of the Cholesky decomposition?

The Cholesky decomposition is commonly used in numerical linear algebra for solving linear systems of equations, calculating determinants and inverses of matrices, and generating random numbers with specified covariance matrices.
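The last of these applications can be sketched in a few lines (the covariance matrix C below is a made-up example). If z has identity covariance and C = L Lᵀ, then L z has covariance L I Lᵀ = C:

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])        # target covariance (example)
L = np.linalg.cholesky(C)

# Columns of z are i.i.d. standard normal vectors; L @ z has covariance C
z = rng.standard_normal((2, 100_000))
samples = (L @ z).T

# The empirical covariance should be close to C for this many samples
print(np.allclose(np.cov(samples.T), C, atol=0.05))
```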

3. How does the Cholesky decomposition differ from other matrix factorization methods?

The Cholesky decomposition is unique in that it only works on symmetric, positive definite matrices. Other matrix factorization methods, such as LU decomposition, can be used on a wider range of matrices but may not be as efficient for solving certain problems.

4. What are some surprising properties of the Cholesky decomposition?

The Cholesky decomposition has several surprising properties, including that it can be used to efficiently solve certain optimization problems (e.g., least squares via the normal equations) and that it extends to positive semidefinite matrices through pivoted and LDLᵀ variants.

5. How does the Cholesky decomposition contribute to the field of machine learning?

In machine learning, the Cholesky decomposition is used for tasks such as covariance estimation and sampling from multivariate Gaussian distributions. It also appears in Gaussian process regression and in some second-order optimization and matrix-factorization methods used by recommender systems.
