I have a system that ideally produces a real symmetric negative definite matrix. However, due to the implementation of the algorithm and/or the finite precision of floating point, the matrix comes out indefinite. For example, in a 2700×2700 matrix, four eigenvalues are positive and the rest are negative. This is a problem because I want to solve a generalized eigenvalue problem, and the fact that neither of my matrices is definite forces me to use very inefficient methods. If it were just a standard eigenvalue problem I could apply a simple shift, but I need to solve the problem
[tex]\mathbf{A}\mathbf{x}=\lambda\mathbf{B}\mathbf{x}[/tex]
A is real symmetric indefinite, and B should be real symmetric negative definite, but numerical limitations push it to indefinite. I know I could multiply each matrix by its transpose, but then B only becomes semi-definite, which still is not valid for these methods (I'm looking at Lanczos, which uses a Cholesky decomposition of B when B is Hermitian definite).
Does anyone know of a way I could regularize B into a definite matrix without compromising the original eigenvalue problem?
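One common way to regularize such a matrix (this is a sketch of a standard technique, not something from the post above) is to project B onto the negative definite cone: take the symmetric eigendecomposition of B and clip any nonnegative eigenvalues down to a small negative cutoff before reconstructing. Since the stray positive eigenvalues are at the level of numerical noise, the perturbation to the original problem is of the same order. The function name and the `tol` cutoff below are hypothetical choices for illustration:

```python
import numpy as np

def make_negative_definite(B, tol=-1e-10):
    """Project a symmetric matrix onto the negative definite cone by
    clipping any eigenvalues above `tol` (a small negative cutoff,
    chosen for illustration) and reconstructing the matrix."""
    B = (B + B.T) / 2                 # enforce exact symmetry first
    w, V = np.linalg.eigh(B)          # eigendecomposition of symmetric B
    w_clipped = np.minimum(w, tol)    # push stray positive eigenvalues negative
    return (V * w_clipped) @ V.T      # rebuild V diag(w_clipped) V^T

# demo: a symmetric matrix with one tiny positive eigenvalue
B = np.array([[-2.0, 0.0],
              [0.0, 1e-8]])
B_fixed = make_negative_definite(B)
print(np.all(np.linalg.eigvalsh(B_fixed) < 0))  # True: now negative definite
```

For a 2700×2700 matrix a full `eigh` is affordable as a one-time preprocessing step, and afterwards a Cholesky factorization of -B_fixed will succeed, which is what a shift-and-invert Lanczos solver needs.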