# Fortran eigenvalue decomposition

1. Jan 19, 2012

### brydustin

I've been trying to invert a real symmetric matrix, but the inverse I compute via eigenvalue decomposition (using Q V^-1 Q^T) is not the inverse. The stranger thing is that Q V Q^T does get back my original matrix. Even more unusual: the matrix starts off approximately equal to the identity (in general it won't be), so the analytical eigenvalue-eigenvector decomposition should just be I = I*I*I, but instead V ~= I (the eigenvalues) while Q is just some random orthogonal matrix. V is roughly I, yet when I compute the inverse (which should also be I) it all goes bad. I thought I could switch over to the Cholesky decomposition, because the matrix will always be positive definite (at every iteration), but I really don't want to do anything artificial on the first iteration (i.e. if it's the first iteration, set the inverse equal to the matrix). Plus I need to compute the determinant of this matrix as well, which I thought was most practical to compute as a product of the eigenvalues. I don't think I should have to compromise, but it's not working.
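For reference, here is what the math should do, as a minimal sketch in Python rather than the original Fortran, using a hand-rolled closed-form eigendecomposition of a 2x2 symmetric matrix (the matrix and all helper names are made up for illustration; b is assumed nonzero):

```python
import math

def eig_sym_2x2(a, b, c):
    """Eigen-decomposition of [[a, b], [b, c]], with b assumed nonzero."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    l1, l2 = mean + r, mean - r
    # An eigenvector for eigenvalue lam is (b, lam - a), normalized.
    vecs = []
    for lam in (l1, l2):
        n = math.hypot(b, lam - a)
        vecs.append((b / n, (lam - a) / n))
    return (l1, l2), vecs

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c = 2.0, 1.0, 3.0
(l1, l2), (v1, v2) = eig_sym_2x2(a, b, c)
Q  = [[v1[0], v2[0]], [v1[1], v2[1]]]   # eigenvectors as columns
Qt = [[Q[0][0], Q[1][0]], [Q[0][1], Q[1][1]]]

# Reconstruction: Q V Q^T should give back A ...
V    = [[l1, 0.0], [0.0, l2]]
A_re = matmul2(matmul2(Q, V), Qt)

# ... and Q V^-1 Q^T should give A^-1; check by multiplying by A,
# which should come out as the identity.
Vinv  = [[1.0 / l1, 0.0], [0.0, 1.0 / l2]]
Ainv  = matmul2(matmul2(Q, Vinv), Qt)
AAinv = matmul2([[a, b], [b, c]], Ainv)
```

If Q V Q^T reproduces A but Q V^-1 Q^T fails this A * A^-1 ~= I check, the problem is almost certainly in how V^-1 is formed or how the products are ordered, not in the decomposition itself.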

2. Jan 20, 2012

### AlephZero

What you got for Q is not too surprising. Ignoring rounding errors, if you have two identical eigenvalues, the corresponding two eigenvectors will span a 2-dimensional subspace, but they can be any linear combinations of two basis vectors for that subspace.

When rounding errors occur, this can also happen when two eigenvalues are very close but (theoretically) not identical. The individual eigenvalues may be very poorly determined, but their orthogonality properties will be well conditioned.

I would check numerically whether your Q matrix really is orthonormal (i.e. Q Q^T = I), though the fact that "QVQ^T gets back my original matrix" suggests that it is.

Also, try multiplying your "wrong" inverse by the original matrix and see what you get. The result might be quite close to I, even though the inverse "looks obviously wrong".

The real question here might be "why you want to calculate an explicit inverse at all." Doing that is almost always the wrong way to implement a numerical algorithm, even though it's often a neat way to write down the math.
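The orthonormality check is cheap to do by hand. A small illustrative sketch in Python (not the poster's Fortran; the function name and the rotation-matrix example are made up here):

```python
import math

def max_abs_dev_from_identity(Q):
    """Largest entry of |Q Q^T - I|; near zero means the rows of Q
    are orthonormal."""
    n = len(Q)
    dev = 0.0
    for i in range(n):
        for j in range(n):
            dot = sum(Q[i][k] * Q[j][k] for k in range(n))
            target = 1.0 if i == j else 0.0
            dev = max(dev, abs(dot - target))
    return dev

# A rotation matrix is orthonormal, so the deviation is essentially zero:
t = 0.7
Q_good = [[math.cos(t), -math.sin(t)],
          [math.sin(t),  math.cos(t)]]

# A shear matrix is not, so the deviation is large:
Q_bad = [[1.0, 1.0],
         [0.0, 1.0]]
```

The same loop applied to (inverse * original matrix) against I gives the second check suggested above.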

3. Jan 21, 2012

### DuncanM

Computing a matrix inverse and determinant using the eigenvalues is quite a complicated and roundabout way of doing things--unless you happen to have some code handy that you want to use. Otherwise, the usual method is to do an LU-Decomposition. Once that is done, so many other tasks become easy: solving an [A](x) = (b) system, computing the inverse, computing the determinant, etc. And there is a lot of code available for doing these tasks.
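To illustrate the idea (a toy pure-Python sketch of LU with partial pivoting, not the LINPACK/LAPACK code itself; real work should use those library routines): once A = P L U is factored, the determinant is the sign of the permutation times the product of U's diagonal, and each solve is just forward and back substitution.

```python
def lu_factor(A):
    """LU factorization with partial pivoting (Doolittle style).
    Returns (LU, piv, sign): LU packs L (unit diagonal, below) and U
    (on and above the diagonal); sign tracks row swaps for the determinant."""
    n = len(A)
    LU = [row[:] for row in A]
    piv = list(range(n))
    sign = 1.0
    for k in range(n):
        # Pivot: largest |entry| in column k at or below the diagonal.
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
            sign = -sign
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv, sign

def lu_det(LU, sign):
    """Determinant = sign * product of U's diagonal."""
    d = sign
    for i in range(len(LU)):
        d *= LU[i][i]
    return d

def lu_solve(LU, piv, b):
    """Solve A x = b using the packed factors."""
    n = len(LU)
    x = [b[piv[i]] for i in range(n)]
    for i in range(n):               # forward substitution (unit-diagonal L)
        for j in range(i):
            x[i] -= LU[i][j] * x[j]
    for i in range(n - 1, -1, -1):   # back substitution (U)
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
LU, piv, sign = lu_factor(A)
print(lu_det(LU, sign))                # → 8.0
print(lu_solve(LU, piv, [1.0, 0.0]))   # → [0.375, -0.25]
```

Solving against the columns of the identity, as in the last line, builds the inverse column by column; one factorization serves all of these tasks.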

Is this program for a school assignment?

Do you have to use FORTRAN?

4. Jan 21, 2012

### brydustin

No, it's not a school assignment; I'm a researcher.
Do I have to use Fortran... yes, unfortunately.
And yeah, I decided that LU was the best route to go. Thanks for the input.

5. Jan 21, 2012

### DuncanM

How is that (the LU-Decomposition) going?

Have you checked out the NETLIB site? There is a lot of good code there--in FORTRAN.

I have some experience with the older routines (LINPACK).
The LINPACK routine for an LU-Decomposition is SGEFA.F.
Source code is posted on the NETLIB site here:

SGEFA.F

There is also an online utility based on this code that you might find useful:

Simultaneous Equation Solver

If you are not solving a system, but just want the determinant and inverse, enter a dummy vector for (b). It works pretty well, and is a convenient way to get some quick numbers.

If you'd rather use a routine more modern than the LINPACK routine, there are the LAPACK routines:

SGETRF.F for single precision, and

DGETRF.F for double precision.

They are both written in FORTRAN, so they should be easy for you to use in your own program.

Hope this helps.

6. Jan 21, 2012

### brydustin

Yes, thank you very much. I'm actually trying to find the following code right now:
NLPQLP
It's for solving the nonlinear constrained optimization problem (optimization with Lagrange multipliers).
I started this thread because I wanted to invert my Lagrangian Hessian, but I don't really understand what to do next. NLPQLP seems to be a nice black-box program that would solve it... unfortunately, I can't find the source code, nor do I know what package it comes in (LAPACK, LINPACK, etc.).

7. Jan 21, 2012

### DuncanM

I don't think it is posted on the NETLIB site.

The only mention of it on the entire NETLIB site is here:

http://www.netlib.org/na-digest-html/09/v09n37.html

Perhaps make a post in the LAPACK Support Forums and see if they can offer some suggestions.

I also notice that Northwestern University has a page on optimization software that might be helpful:

http://optimization.eecs.northwestern.edu/software/

And Wikipedia has a list too:

http://en.wikipedia.org/wiki/List_of_optimization_software

Many of these products offer free versions for academic use.