A large-scale eigenvalue problem solver

jollage
Hi, I'm wondering what eigenvalue problem solver you are using? I'm looking for one that can solve a very large eigenvalue problem, with matrices of size ~100,000 × 100,000. Do you have any advice?

Thanks.
 
jollage said:
Hi, I'm wondering what eigenvalue problem solver you are using? I'm looking for one that can solve a very large eigenvalue problem, with matrices of size ~100,000 × 100,000. Do you have any advice?

Thanks.

EISPACK is one set of routines which can be used to find eigenvalues. Your particular problem is challenging because of the size of the system you want to solve. In general, most routines use some form of iteration to obtain estimates of the eigenvalues, so find the biggest, fastest computer you can and be prepared to wait for the results.
 
SteamKing said:
EISPACK is one set of routines which can be used to find eigenvalues. Your particular problem is challenging because of the size of the system you want to solve. In general, most routines use some form of iteration to obtain estimates of the eigenvalues, so find the biggest, fastest computer you can and be prepared to wait for the results.

It would also be helpful if you could take advantage of any special properties of the array. For example, are the array entries real or complex? Is the array symmetric or Hermitian? Is the array sparse? etc etc. Several routines are written to make the most of these features and provide faster execution times. So if you could identify one or two special features of the array, and then pick the routine appropriate for that particular feature, the work would go a lot faster.

Also, to re-state what "SteamKing" posted, the bigger and faster the computer, the better.
Just out of curiosity, some time ago I wanted to find out the largest matrix (of type double) my PC could handle. I think I made it up to about 400 x 400 before my PC couldn't cope, and that was just to define the matrix; I never even got around to doing anything with it. If you are using matrices of size 100,000 x 100,000, you are going to need a pretty high-end computer.

All the best in your endeavors.
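To put rough numbers on the memory point above (a back-of-the-envelope sketch, assuming standard 8-byte double precision):

```python
# Memory needed just to *store* a dense 100,000 x 100,000 matrix of doubles,
# before any factorization workspace is counted.
n = 100_000
bytes_needed = n * n * 8            # 8 bytes per double
print(bytes_needed)                 # 80_000_000_000 bytes
print(bytes_needed / 2**30)         # roughly 74.5 GiB
```

So the dense matrix alone needs on the order of 75 GiB of RAM, which is why sparsity or matrix-free methods (discussed below in the thread) matter so much at this scale.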
 
And with such a large system, you might run into roundoff problems, which might generate meaningless garbage eigenvalues should your efforts lead to results. I'm with DuncanM: you should consider the numerical analysis aspects of this problem carefully before investing a lot of time into trying to crank out a solution.
 
jollage, as the other posters mention, you ought to use a solver that exploits whatever special features your matrix has, like being symmetric. Another feature is sparsity. Your matrix has ##10^{10}## entries, while a sparse matrix can be specified with much less memory as a list of (row #, column #, value) triples.

Also, what do you need the eigenvalues for? Does your problem need all the eigenvalues of that matrix? Or does it need only the few largest or few smallest? You can save a lot of computation by computing only some subset.
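Both points above can be sketched with SciPy (an illustrative toy matrix, not the OP's actual problem): store the matrix as (row, column, value) triples and ask the solver for only a few extreme eigenvalues instead of all of them.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

# Illustrative sketch: a symmetric tridiagonal matrix stored as
# (row #, column #, value) triples -- the sparse format described above --
# rather than as a dense n x n array.
n = 2000
rows, cols, vals = [], [], []
for i in range(n):
    rows.append(i); cols.append(i); vals.append(2.0)    # diagonal entries
    if i + 1 < n:                                       # symmetric off-diagonals
        rows += [i, i + 1]; cols += [i + 1, i]; vals += [-1.0, -1.0]
A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# Compute only the 3 algebraically largest eigenvalues instead of all 2000.
w = eigsh(A, k=3, which="LA", return_eigenvectors=False)
print(np.sort(w))  # all close to 4.0 for this matrix
```

For this matrix (a 1-D Laplacian) the largest eigenvalues approach 4, and the solver touches only the ~6000 stored nonzeros per iteration instead of ##4 \times 10^6## dense entries.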
 
I guess this is a bit late, but such big problems are typically solved using subspace methods like Davidson iteration[*]. For sizes far beyond 100k rows, it is normally impractical to calculate all eigenvectors; in fact, it may be impractical even to compute or store the matrix itself. For this reason, standard black-box methods as found, say, in LAPACK (e.g., based on divide-and-conquer diagonalization or multiple relatively robust representations) are often not the method of choice.

In such large cases, often only a few eigenvectors are required, and most practical algorithms are based around the operation r = H * c, i.e., the contraction of the operator to a trial vector "c" (or set of trial vectors) in which the matrix representation of operator H is never needed. The subsequent intermediate results of H * c are then used to iteratively improve the trial solution, usually by using the result to construct a new "trial" vector and adding it to an iterative subspace ("Krylov subspace"), in which the problem is solved exactly. In order to use such methods efficiently, however, additional information about the operator is usually required (e.g., some form of useful preconditioner), so they are not black-box methods.
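The r = H * c pattern can be sketched in SciPy with a `LinearOperator`: the solver only ever sees the product through a callback, so the matrix is never formed. The operator below is a hypothetical stand-in (a rank-one update of the identity, not any particular physical H), and ARPACK's iterative solver substitutes here for a Davidson-type method, which SciPy does not ship.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Matrix-free sketch: only the action r = H @ c is ever supplied, so the
# 100,000 x 100,000 matrix H is never computed or stored.
# Hypothetical operator: H = I + 10 * v v^T for a fixed unit vector v.
n = 100_000
rng = np.random.default_rng(0)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

def matvec(c):
    c = np.ravel(c)                    # accept (n,) or (n, 1) input
    return c + 10.0 * v * (v @ c)      # O(n) work, no n x n matrix anywhere

H = LinearOperator((n, n), matvec=matvec, dtype=np.float64)

# Largest eigenvalue of H is exactly 11 (with eigenvector v).
w = eigsh(H, k=1, which="LA", return_eigenvectors=False)
print(w)  # ~[11.0]
```

The solver builds its subspace purely from repeated applications of `matvec`, which is exactly the contraction-to-a-trial-vector operation described above.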

[*]: (For some reason beyond my comprehension, many people also use the Lanczos algorithm for finding eigenvectors. If you are thinking of doing this: don't. Use Jacobi-Davidson. It is much superior in every way.)
 