Solving Generalized Eigenvalue Problems with Real Symmetric Indefinite A and Definite B

In summary, the original poster's system should produce a real symmetric negative definite matrix, but the implementation of the algorithm and/or finite-precision floating point leaves it slightly indefinite. This is a problem for solving a generalized eigenvalue problem: with neither matrix definite, only inefficient methods apply. The poster is looking for a way to regularize the matrix B into a definite matrix without compromising the original eigenvalue problem. The other participant argues that the defect is purely numerical error and suggests shifting B back to negative definiteness, either with an error bound on the shift or by incorporating the shift into the optimization procedure.
  • #1
Born2bwire
I have a system that ideally creates a real symmetric negative definite matrix. However, due to the implementation of the algorithm and/or the finite precision of floating point, the matrix comes out indefinite. For example, in a 2700 × 2700 matrix, four eigenvalues are positive and the rest are negative. This is a problem because I want to solve a generalized eigenvalue problem, and the fact that neither of my matrices is definite forces me to use very inefficient methods. If it were just a simple eigenvalue problem I could do a simple shift, but I need to solve the problem

[tex]\mathbf{A}\mathbf{x}=\lambda\mathbf{B}\mathbf{x}[/tex]

A is real symmetric indefinite, and B should be real symmetric negative definite, but the limitations above seem to push it to indefinite. I know I could multiply by the transposes of the matrices, but then B gets pushed to semi-definite, which is still not valid for these methods (I am looking at Lanczos, which uses a Cholesky decomposition of B when it is Hermitian definite).

Does anyone know of a way I could regularize B into a definite matrix without compromising the original eigenvalue problem?
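As a side note not from the thread: the Lanczos/Cholesky requirement is for a *positive* definite B, but a genuinely negative definite B is no obstacle, since Ax = λBx is equivalent to Ax = (−λ)(−B)x with −B positive definite. A minimal SciPy sketch of this sign trick, using a small random system as a stand-in for the poster's 2700 × 2700 matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical small system standing in for the real one:
# A real symmetric indefinite, B real symmetric negative definite.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # symmetrize; generically indefinite
M = rng.standard_normal((n, n))
B = -(M @ M.T + n * np.eye(n))          # negative definite by construction

# A x = lambda B x  <=>  A x = (-lambda) (-B) x, and -B is positive
# definite, so eigh can use its fast Cholesky-based reduction.
mu, X = eigh(A, -B)                     # solves A x = mu (-B) x
lam = -mu                               # eigenvalues of the original problem

# Each column of X is a generalized eigenvector of the original pencil.
for k in range(n):
    assert np.allclose(A @ X[:, k], lam[k] * B @ X[:, k])
```

This only helps once B has been restored to definiteness, which is the actual question in the thread.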
 
  • #2


No offense, but your problem originates from a numerical algorithm, and you want to keep it rigorous after this error without using [itex]B-\varepsilon I \prec 0[/itex]? I don't see why, since B should come out negative definite anyway. Why keep using the wrong B? Looks like another SeDuMi problem :)
 
  • #3


There isn't anything inherently wrong with the matrix B. The results derived from B are correct, both in terms of the solutions it affords in the larger problem and the results I wish to obtain from the eigenvalues. There isn't anything I can feasibly modify in the creation of B. I believe the main problem arises from finite precision and the numerical procedures used to generate the elements. I am already using double precision, and there isn't much I can do about the numerical methods, since the elements involve a double surface integration that must be estimated via quadrature.
 
  • #4


Yes, I agree, but according to the information you provided, B should be negative definite when it comes out of the algorithm, no? So pushing it back to negative definite is something you have to do anyway. Otherwise you just assume that it is almost correct and are stuck with unoptimized code. Why don't you shift it and try to get an error bound for the shift, or fold it into your optimization procedure, maybe like

[tex]\begin{align*}
\min_{\epsilon} & \ \ \epsilon \\
\text{s.t. } &\mathbf{A}\mathbf{x}=\lambda\left(\mathbf{B}-\epsilon \mathbf{I}\right)\mathbf{x}\\
&\mathbf{B}-\epsilon \mathbf{I} \prec 0\\
&\epsilon > 0
\end{align*}
[/tex]
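The shift suggested here can be implemented directly: choose ε just above the largest eigenvalue of B, which is tiny when the positive eigenvalues are pure quadrature noise. A hedged SciPy sketch, with a synthetic B (a few small positive eigenvalues by construction) standing in for the real one:

```python
import numpy as np
from scipy.linalg import eigh, eigvalsh

# Hypothetical stand-in for a B that should be negative definite but
# picked up a couple of tiny positive eigenvalues from numerical error.
rng = np.random.default_rng(1)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.array([-5.0, -4.0, -3.0, -2.0, 1e-8, 2e-8])  # two tiny positive
B = Q @ np.diag(evals) @ Q.T
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

# Smallest shift restoring negative definiteness: a hair above the
# largest eigenvalue of B (eigvalsh returns eigenvalues in ascending order).
lam_max = eigvalsh(B)[-1]
eps = max(lam_max, 0.0) * 1.01 + 1e-12
B_shift = B - eps * np.eye(n)

assert eigvalsh(B_shift)[-1] < 0        # now negative definite
# Solve with the (positive definite) matrix -B_shift and negate.
mu = -eigh(A, -B_shift, eigvals_only=True)
```

Because the spurious eigenvalues come from roundoff, ε is on the order of the noise, so the perturbation to the original problem is commensurate with the error already present in B.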
 

What is the process for solving generalized eigenvalue problems with a real symmetric indefinite matrix A and a definite matrix B?

When B is definite, the standard approach reduces the generalized problem to an ordinary one. B (or −B, if B is negative definite) is factored with a Cholesky decomposition, B = LLᵀ. Substituting x = L⁻ᵀy transforms Ax = λBx into the symmetric standard problem Cy = λy, where C = L⁻¹AL⁻ᵀ. Because C is symmetric, any dense symmetric eigensolver recovers λ and y, and the eigenvectors of the original problem are obtained by the back-transformation x = L⁻ᵀy.
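The reduction described above can be sketched in a few lines of SciPy (a minimal illustration assuming a positive definite B; for a negative definite B, factor −B and negate the eigenvalues):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Hypothetical small symmetric-definite pencil (A indefinite, B positive definite).
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)

L = cholesky(B, lower=True)             # B = L L^T
# C = L^{-1} A L^{-T} is symmetric; C y = lambda y with y = L^T x.
C = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True)
lam, Y = np.linalg.eigh(C)              # standard symmetric eigenproblem
X = solve_triangular(L.T, Y, lower=False)   # back-transform: x = L^{-T} y

# Columns of X solve the original generalized problem A x = lam B x.
assert np.allclose(A @ X, B @ X * lam)
```

In practice one would call a library routine (e.g. `scipy.linalg.eigh(A, B)`) that performs exactly this reduction internally, but the sketch shows where the definiteness of B is used: the Cholesky factorization fails if B is indefinite.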

What are the applications of solving generalized eigenvalue problems with a real symmetric indefinite matrix A and a definite matrix B?

There are many applications of solving these types of problems, including in physics, engineering, and computer science. For example, in physics, these types of problems can be used to model the behavior of quantum systems or to analyze the dynamics of mechanical systems. In engineering, they can be used to understand the behavior of structures under different loading conditions. In computer science, they can be used in data analysis and machine learning algorithms.

What is the significance of using a real symmetric indefinite matrix A and a definite matrix B in eigenvalue problems?

The use of a real symmetric A and a definite B in eigenvalue problems has several advantages. Firstly, such matrix pencils arise frequently in real-world applications, so solving problems involving them is highly relevant. Secondly, a symmetric-definite pencil has special structure that makes it easier to work with: all of its generalized eigenvalues are real, and its eigenvectors can be chosen B-orthogonal. This structure allows for more efficient and accurate calculations than the general nonsymmetric case.

What are the challenges associated with solving generalized eigenvalue problems with a real symmetric indefinite matrix A and a definite matrix B?

One of the main challenges with these types of problems is the cost of computing the eigenvalues and eigenvectors, which grows rapidly with matrix size (cubically for dense methods). Additionally, care must be taken in handling the indefinite matrix A, as it has both positive and negative eigenvalues; and if B also loses definiteness, the fast symmetric-definite algorithms no longer apply.

Are there any other techniques for solving generalized eigenvalue problems with a real symmetric indefinite matrix A and a definite matrix B?

Yes, there are other techniques that can be used to solve these types of problems. Some common techniques include the Jacobi method, the QR algorithm, and the Lanczos algorithm. Each of these methods has its own advantages and disadvantages, and the choice of which one to use will depend on the specific problem at hand and the resources available for computation.
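As one concrete illustration of the Lanczos route, SciPy's `eigsh` wraps ARPACK's Lanczos iteration and accepts a generalized problem directly; for a negative definite B one can again pass −B and negate the returned eigenvalues. A sketch with a hypothetical sparse diagonal pencil (not the poster's actual matrices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Hypothetical sparse pencil: A symmetric indefinite, B negative definite.
n = 200
A = sp.diags(np.linspace(-1.0, 1.0, n), format="csc")
B = -sp.diags(np.linspace(1.0, 2.0, n), format="csc")

# eigsh solves A x = w M x for a positive definite M, so pass M = -B;
# it extracts only k eigenpairs, avoiding a full dense decomposition.
w, X = eigsh(A, k=4, M=-B, which="SA")
lam = -w   # eigenvalues of the original problem A x = lam B x
```

This is the setting where the thread's difficulty bites: `eigsh` relies on factorizing M, so −B must be genuinely positive definite for the call to be meaningful.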
