Solving Gen. Eigen Probs w/ Real Sym Indefinite A & Definite B

Born2bwire
I have a system that ideally produces a real symmetric negative definite matrix. However, due to the implementation of the algorithm and/or the finite precision of floating point, the matrix comes out indefinite. For example, in a 2700 × 2700 matrix, four eigenvalues are positive and the rest are negative. This is a problem because I want to solve a generalized eigenvalue problem, and the fact that neither of my matrices is definite forces me to use very inefficient methods. If it were just a standard eigenvalue problem I could apply a simple shift, but I need to solve the generalized problem

\mathbf{A}\mathbf{x}=\lambda\mathbf{B}\mathbf{x}

A is real symmetric indefinite, and B should be real symmetric negative definite, but finite precision seems to push it to indefinite. I know I could multiply through by the transposes of the matrices, but then B gets pushed to semi-definite, which is still not valid for these methods (I am looking at Lanczos, which uses a Cholesky decomposition of B when B is Hermitian definite).

Does anyone know of a way I could regularize B into a definite matrix without compromising the original eigenvalue problem?
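
For context, the simplest regularization along these lines is a diagonal shift just past the largest spurious eigenvalue, so that B - eps*I becomes negative definite. Below is a minimal NumPy sketch of that idea; the matrix B here is a synthetic stand-in (a negative definite matrix with four eigenvalues nudged positive), and the shift size is an illustrative assumption, not a value from the thread.

```python
import numpy as np

# Synthetic stand-in for the matrix described above: negative definite in
# exact arithmetic, but with a few eigenvalues pushed slightly positive.
rng = np.random.default_rng(0)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = -np.linspace(1.0, 10.0, n)
d[:4] = 1e-9                      # four spurious, barely positive eigenvalues
B = Q @ np.diag(d) @ Q.T
B = 0.5 * (B + B.T)               # re-symmetrize against roundoff

w = np.linalg.eigvalsh(B)         # eigenvalues in ascending order
print("largest eigenvalue of B:", w[-1])          # small positive -> indefinite

# Shift just past the largest (spurious) eigenvalue so B - eps*I is
# negative definite; eps stays tiny relative to the true spectrum.
eps = w[-1] + 1e-12 * abs(w[0])
B_shifted = B - eps * np.eye(n)
print("largest eigenvalue after shift:", np.linalg.eigvalsh(B_shifted)[-1])
```

If the spurious positive eigenvalues are at the level of the numerical error already present in B, a shift of this size perturbs the problem by no more than that existing error.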
 


No offense, but your problem originates from a numerical algorithm, and you want to keep it rigorous after this error without using \mathbf{B}-\varepsilon I \prec 0? I don't see why, since it should come out negative definite anyway. Why do you keep using the wrong B? Looks like another SeDuMi problem :)
 


There isn't anything inherently wrong with the matrix B. The results derived from B are correct, in terms of the solutions it affords in the larger problem and the results I wish to obtain from the eigenvalues. There isn't anything I can feasibly modify in the creation of B. I believe the main problem arises from finite precision and the numerical procedures used to generate the elements. I am already using double precision, and there isn't much I can do about the numerical methods, since the elements involve a double surface integration that must be estimated via quadrature.
 


Yes, I agree, but according to the information you provided, B should be negative definite when it comes out of the algorithm, no? So pushing it back to negative definite is something you have to do anyway. Otherwise you just assume that it is almost correct and you are stuck with unoptimized code. Why don't you just shift it and try to get an error bound for the shift, or fold it into an optimization procedure, maybe something like

\begin{align*}
\min_{\epsilon} \ & \ \epsilon \\
\text{s.t.}\ & \mathbf{A}\mathbf{x}=\lambda\left(\mathbf{B}-\epsilon I\right)\mathbf{x}\\
& \mathbf{B}-\epsilon I \prec 0\\
& \epsilon > 0
\end{align*}
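
As an illustration of how a shifted B could then be used: once B - \epsilon I is negative definite, the generalized problem can be handed to a standard symmetric-definite solver by negating it. Here is a small SciPy sketch, with randomly generated A and B_shifted standing in for the real matrices; only the eigh call and the sign convention are the substantive parts.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 300

# Stand-ins: A real symmetric (indefinite),
# B_shifted real symmetric negative definite.
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B_shifted = -(Q @ np.diag(np.linspace(0.5, 5.0, n)) @ Q.T)
B_shifted = 0.5 * (B_shifted + B_shifted.T)

# eigh(a, b) requires b to be positive definite, so solve
#   A x = mu (-B_shifted) x   and recover   lambda = -mu,
# since A x = lambda B_shifted x  <=>  A x = (-lambda)(-B_shifted) x.
mu, X = eigh(A, -B_shifted)
lam = -mu

# Residual check on the recovered generalized eigenpairs.
print(np.linalg.norm(A @ X - B_shifted @ (X * lam)))
```

The same trick carries over to the sparse/Lanczos route mentioned earlier: scipy.sparse.linalg.eigsh takes the generalized problem through its M argument, which also must be positive definite, so one would pass M = -(B - eps*I) and flip the sign of the returned eigenvalues.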
 