Diagonalization Algorithm for Large Matrices: Any Suggestions?

Summary
Fast algorithms for diagonalizing large, sparse symmetric matrices, particularly those with dimensions ranging from 300x300 to millions, are discussed. The conversation highlights that these matrices often arise in fields like electron-phonon interactions. While diagonalization is a common approach, it may not be efficient for very large matrices due to computational complexity. Instead, iterative methods like the Lanczos method and Inverse Power Iteration are recommended for finding the smallest eigenvalues, as they are more suitable for sparse matrices. The discussion also touches on the importance of using libraries such as LAPACK, BLAS, and Intel MKL for numerical computations, with specific challenges noted for implementation in C++ on Windows systems. The need for eigenvectors alongside eigenvalues is addressed, confirming that many algorithms can efficiently compute both. Users are encouraged to explore resources and community support for troubleshooting coding issues related to these algorithms.
NeoDevin
Does anyone here know of any fast algorithms to diagonalize large, symmetric matrices that are mostly zeros? (By large I mean 300x300 up to several million by several million.)
 
Is this a special [named] class of sparse matrices?
For more general matrix types, you might look at LAPACK/BLAS or the Intel MKL.

Where do these matrices arise?
There might be some additional symmetry that can be exploited.
 
They arise in electron-phonon interactions in a lattice. I don't know enough about matrices to know if there is a special kind of symmetry involved...
 
What are you actually trying to do here? Solve a set of equations? Or something else?

It could be that literally "diagonalizing the matrix" isn't the best way to go, if the original matrix is very sparse. An iterative solution method may be faster than a direct method.

300x300 is a trivially sized problem whatever method you use. Several million variables may or may not be feasible, depending on the problem and the amount of computing power you have available.
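To make the sparsity point concrete: iterative eigensolvers typically need nothing more than repeated matrix-vector products, and those stay cheap if you store only the nonzeros. Here is a minimal C++ sketch of a compressed sparse row (CSR) matrix-vector product; the struct and names are purely illustrative.

Code:
#include <cstddef>
#include <iostream>
#include <vector>

// Compressed sparse row (CSR) storage: only the nonzero entries are kept.
struct CsrMatrix {
    std::size_t n;                    // matrix dimension (n x n)
    std::vector<double> values;       // nonzero values, stored row by row
    std::vector<std::size_t> cols;    // column index of each nonzero
    std::vector<std::size_t> rowPtr;  // rowPtr[i]..rowPtr[i+1]-1 index row i's nonzeros
};

// y = A*x in O(nnz) operations instead of O(n^2).
std::vector<double> multiply(const CsrMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (std::size_t i = 0; i < A.n; ++i)
        for (std::size_t k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k)
            y[i] += A.values[k] * x[A.cols[k]];
    return y;
}

int main() {
    // 3x3 tridiagonal example: 2 on the diagonal, -1 off the diagonal.
    CsrMatrix A{3,
                {2.0, -1.0,   -1.0, 2.0, -1.0,   -1.0, 2.0},
                {0, 1,   0, 1, 2,   1, 2},
                {0, 2, 5, 7}};
    std::vector<double> y = multiply(A, {1.0, 1.0, 1.0});
    std::cout << y[0] << " " << y[1] << " " << y[2] << "\n";  // prints 1 0 1
}

Lanczos and power-type iterations are built almost entirely out of this one operation, which is why they can handle millions of rows as long as the number of nonzeros per row stays small.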
 
I'm trying to find the lowest eigenvalue (for now; eventually I may need other eigenvalues). I figured that diagonalizing was the easiest way to go.
 
Never taken a numerical methods/analysis course?

Try the QR/QL methods to see if they fit your needs; there are faster ones for the symmetric case, but my mind's drawing a blank. If you need sizes in the millions, you might want to look for a parallelized version of whatever algorithm you use.

www.nr.com
 
robphy said:
For more general matrix types, you might look at LAPACK/BLAS or the Intel MKL.

I'm looking at them now. I'm trying to get them to work with C++, but not having any luck. I'm using Dev C++ on Windows XP. If you have any advice on how to get it working, that would be very helpful.
 
NeoDevin said:
I'm looking at them now. I'm trying to get them to work with C++, but not having any luck. I'm using Dev C++ on Windows XP. If you have any advice on how to get it working, that would be very helpful.

Ideally, the libraries should be made visible to the compiler.
I could never get things to work so smoothly (when I tried them a while back).

I have been working in C with gcc in cygwin.
What worked for me was to locate the .c sources for LAPACK and include the relevant ones [based on my application] on the gcc command line along with my main program... adding by trial-and-error what is needed to successfully compile and link into an executable.
Using a hack [I once found via google], I was also able to use an older version of the Intel MKL with cygwin's gcc.

I'm not sure if any of these schemes work with Dev C++.

You might try: http://www.google.com/search?q=devcpp+lapack
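For what it's worth, once the linking problem is sorted out, the call itself is fairly mechanical. The dense symmetric eigensolver in LAPACK is DSYEV; it computes all eigenvalues (and optionally eigenvectors), so it is only realistic for the small end of your size range, e.g. 300x300. A minimal sketch of calling it from C++ through the Fortran interface, assuming the usual trailing-underscore name mangling and 32-bit Fortran integers:

Code:
#include <iostream>
#include <vector>

// Fortran LAPACK routine; the trailing underscore is the usual g77/gfortran mangling.
extern "C" void dsyev_(const char* jobz, const char* uplo, const int* n,
                       double* a, const int* lda, double* w,
                       double* work, const int* lwork, int* info);

int main() {
    // 3x3 symmetric test matrix, stored column-major (Fortran convention).
    int n = 3, lda = 3, info = 0;
    std::vector<double> a = { 2.0, -1.0,  0.0,
                             -1.0,  2.0, -1.0,
                              0.0, -1.0,  2.0 };
    std::vector<double> w(n);      // eigenvalues, returned in ascending order
    int lwork = 3 * n - 1;         // minimum workspace size for DSYEV
    std::vector<double> work(lwork);

    // 'V' = also compute eigenvectors (returned in a), 'U' = use the upper triangle.
    dsyev_("V", "U", &n, a.data(), &lda, w.data(), work.data(), &lwork, &info);

    if (info == 0)
        std::cout << "smallest eigenvalue: " << w[0] << "\n";
    else
        std::cout << "dsyev failed, info = " << info << "\n";
    return 0;
}

Linking would then be something like g++ main.cpp -llapack -lblas, with the exact library names depending on how LAPACK/BLAS were built; with CLAPACK or MKL the symbol name and calling convention can differ, so treat the declaration above as something to check against your particular build.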
 
  • #10
There exist algorithms specifically for finding the smallest eigenvalue of a matrix. A good place to start might be

http://en.wikipedia.org/wiki/Eigenvalue_algorithm

I'm sure there exists an algorithm designed specifically for finding the smallest eigenvalue of a sparse symmetric matrix, but I didn't see one with a quick web search.
 
  • #11
NeoDevin said:
I'm trying to find the lowest eigenvalue (for now; eventually I may need other eigenvalues). I figured that diagonalizing was the easiest way to go.

Diagonalizing will give you all the eigenvalues. That is not a practical option for matrices bigger than about 500x500. For anything bigger than that, it will run very slowly (run time proportional to matrix size cubed) and/or the numerical conditioning will be too poor to get any answers at all.

That rules out QR/QL type methods (except on a 300x300 size problem) because they always calculate all the eigenvalues.

The best general-purpose algorithm for calculating a few eigenpairs of a large matrix is the Lanczos method. That needs a matrix factorization and equation solving algorithm for your sparse matrices.

A simpler alternative, if you just want the lowest eigenpair and the low eigenvalues are well separated, is the Inverse Power Iteration method.

Unless you want to learn a lot about linear algebra computations and have a couple of years to spare, don't try to code your own versions of a sparse solver or the Lanczos method from scratch.

http://www.netlib.org/linalg/spooles/spooles.2.2.html is a reputable place to start.

To give you an idea of what is feasible, I once calculated the lowest 3000 eigenpairs of a 300,000 x 300,000 sparse system successfully - though it took about 3 months to get the problem formulated properly, and the run time on a 4-processor Silicon Graphics box was about 3 days.

At the other end of the size range, the run time for a 300x300 problem would be under a second.
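To illustrate the second suggestion above, here is a minimal sketch of inverse power iteration in C++. It is a dense toy version: the Gaussian-elimination solve, the test matrix, and the fixed iteration count are purely illustrative, and for a genuinely large sparse matrix the solve would be replaced by a sparse factorization of A - sigma*I (e.g. via SPOOLES) computed once and reused every iteration.

Code:
#include <cmath>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// Solve M*x = b by Gaussian elimination with partial pivoting.
// (Purely illustrative: for a large sparse A you would factor A - sigma*I once
// with a sparse solver and reuse that factorization in every iteration.)
Vector solve(Matrix M, Vector b) {
    const std::size_t n = b.size();
    for (std::size_t k = 0; k < n; ++k) {
        std::size_t p = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs(M[i][k]) > std::fabs(M[p][k])) p = i;
        std::swap(M[k], M[p]);
        std::swap(b[k], b[p]);
        for (std::size_t i = k + 1; i < n; ++i) {
            const double f = M[i][k] / M[k][k];
            for (std::size_t j = k; j < n; ++j) M[i][j] -= f * M[k][j];
            b[i] -= f * b[k];
        }
    }
    Vector x(n);
    for (std::size_t i = n; i-- > 0;) {
        double s = b[i];
        for (std::size_t j = i + 1; j < n; ++j) s -= M[i][j] * x[j];
        x[i] = s / M[i][i];
    }
    return x;
}

// Inverse power iteration: converges to the eigenpair of A whose eigenvalue
// is closest to the shift sigma.
double inverseIteration(const Matrix& A, double sigma, Vector& x, int iterations = 100) {
    const std::size_t n = A.size();
    Matrix B = A;                              // B = A - sigma*I
    for (std::size_t i = 0; i < n; ++i) B[i][i] -= sigma;

    x.assign(n, 1.0);                          // arbitrary starting vector
    double lambda = sigma;
    for (int it = 0; it < iterations; ++it) {
        Vector y = solve(B, x);                // y = (A - sigma*I)^{-1} x
        double norm = 0.0;
        for (double v : y) norm += v * v;
        norm = std::sqrt(norm);
        for (double& v : y) v /= norm;         // normalize

        // Rayleigh quotient lambda = y^T A y (y already has unit length).
        lambda = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double Ay = 0.0;
            for (std::size_t j = 0; j < n; ++j) Ay += A[i][j] * y[j];
            lambda += y[i] * Ay;
        }
        x = y;
    }
    return lambda;
}

int main() {
    const Matrix A = {{ 2.0, -1.0,  0.0},
                      {-1.0,  2.0, -1.0},
                      { 0.0, -1.0,  2.0}};
    Vector v;
    const double lowest = inverseIteration(A, 0.0, v);     // shift 0: eigenvalue nearest 0
    std::cout << "lowest eigenvalue ~ " << lowest << "\n";  // expect 2 - sqrt(2) ~ 0.586
}

Each iteration pulls the current vector toward the eigenvector whose eigenvalue is closest to the shift, so a shift at or below the bottom of the spectrum picks out the lowest eigenpair. In practice you would add a convergence test on the eigenvalue estimate rather than running a fixed number of iterations.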
 
  • #12
I'm working further now, and it seems that I will be needing the eigenvectors corresponding to the lowest eigenvalues. Do these algorithms return those as well, or is there another algorithm to get them afterwards?

I'm going to look at the algorithms suggested now.
 
  • #13
Yes, they will give you the eigenvectors (depending on the algorithm)...
Have you tried CLAPACK/ATLAS? (ATLAS is the platform-independent version of BLAS; MKL is the Intel version.) It might take around three hours to compile ATLAS, I think, since it tunes itself to your hardware.
 
  • #14
NeoDevin said:
I'm working further now, and it seems that I will be needing the eigenvectors corresponding to the lowest eigenvalues. Do these algorithms return those as well, or is there another algorithm to get them afterwards?

I'm going to look at the algorithms suggested now.
If they don't, finding an associated eigenvector is an easy problem: it's just a null vector of (A - vI).
 
  • #15
Hurkyl said:
If they don't, finding an associated eigenvector is an easy problem: it's just a null vector of (A - vI).

Well, that's easy mathematics, but it's expensive computation if you have to factorise A-vI from scratch and A has several million rows and columns :rolleyes:

But the good news is, every eigenvalue method that I know of has an efficient way of calculating the corresponding vectors, so it's not a problem.

In fact, some methods calculate the vectors first, then find the values as a final step (e.g. using the Rayleigh quotient v = (xᵀAx)/(xᵀx)).
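For completeness, the Rayleigh quotient step is only a few lines once you have an (approximate) eigenvector; a minimal sketch with illustrative names:

Code:
#include <cstddef>
#include <vector>

// Rayleigh quotient v = (x^T A x) / (x^T x) for a dense symmetric matrix A.
// Given an approximate eigenvector x, this recovers the eigenvalue estimate.
double rayleighQuotient(const std::vector<std::vector<double>>& A,
                        const std::vector<double>& x) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < A.size(); ++i) {
        double Ax_i = 0.0;
        for (std::size_t j = 0; j < A.size(); ++j) Ax_i += A[i][j] * x[j];
        num += x[i] * Ax_i;
        den += x[i] * x[i];
    }
    return num / den;
}

If x is already normalized to unit length, the denominator is 1 and can be dropped.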
 
  • #16
Ok, I'm getting almost nowhere trying to get these algorithms working. If someone has time, could you walk me through how to get the Lanczos algorithm working with Dev C++ on Windows XP, or give me a link to a website which will explain it in detail for someone who hasn't been programming long?

All the instructions I can find are for Unix/Linux users...

Thanks in advance.
Devin
 
  • #17
It can be tricky sometimes to get these things working on Windows. You may find it easier to install Cygwin on your Win desktop. Also, have you tried looking at an interface like:

http://sourceforge.net/projects/lpp/ ?
 
  • #18
I have an algorithm in Fortran which works on a Unix system, but when I compile it on my Windows machine using g77 in Cygwin, it compiles fine but the program doesn't work: it runs, but doesn't do anything. Any ideas how to fix that, other than switching?

PS. the algorithm is dnlaso.f from http://www.netlib.org/laso/

It works when I compile it on one, but not on the other...
 
