# Working with million by million matrices?

Hi,

Is anyone here working with (sparse) matrices of size million by million? If so, I would like to know what software you use and any special techniques employed.

PS: I am currently working on a project where I need to find eigenvalues of huge matrices. The best I have been able to do so far is with MATLAB, for a sparse matrix of size around 5×10^5 by 5×10^5.

Pythagorean
Gold Member
More important, I think, is what OS and hardware you're running. Here's what MATLAB says about that for their software:

http://www.mathworks.com/support/tech-notes/1100/1110.html

looks like even an 8TB system is borderline for what you're trying if the matrix is stored in full: a 10^6 x 10^6 matrix has 10^12 elements (about 8 TB as doubles), and the 8TB system in that note maxes out at 2.8e14 elements.

Mark44
Mentor
Pythagorean, you might have missed that the OP said "sparse matrix".

I did a web search using "sparse matrix eigenvalue" and got a lot of hits, including a matlab toolbox utility for finding eigenvalues in sparse matrices.

Pythagorean
Gold Member
Ah, good catch. In that case, from scratch, you could represent the full matrix by storing only its nonzero elements: a smaller array holding the row index, column index, and value of each nonzero (thus a 3 x n array, where n is the number of nonzero elements).

If it's a symmetric matrix (the element at (2,1) is the same as (1,2)), then you only need to store the diagonal and the upper (or lower) triangle.

You'd have to figure out an algorithm to find the eigenvalues from these position vectors.
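The triplet scheme described above is what sparse libraries call coordinate (COO) format. A minimal sketch in Python with SciPy (the thread is about MATLAB, but the idea is identical; MATLAB's sparse() accepts the same row/column/value triplets — the numbers below are arbitrary examples):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Row indices, column indices, and values of the nonzero elements --
# the "3 x n" representation described above.
rows = np.array([0, 1, 2, 2])
cols = np.array([0, 1, 0, 2])
vals = np.array([4.0, 5.0, 7.0, 9.0])

# Build a 3x3 sparse matrix; only the 4 nonzeros are stored.
A = coo_matrix((vals, (rows, cols)), shape=(3, 3))

print(A.nnz)        # 4 stored nonzeros
print(A.toarray())  # dense view, for small matrices only
```

Sparse eigensolvers then work directly on this compact representation, never forming the full dense matrix.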

Hi,

Thanks a lot for your replies! I am using sparse matrices and iterative algorithms to find the eigenvalues. But I run into issues with memory, and with the algorithm's convergence time, when the matrix becomes too large.

So I was wondering if anyone here used specialized tools (like parallel programming in MATLAB) or a different language entirely. Am I the only one doing this? :)

Pythagorean
Gold Member
I use parfor loops. That won't help with memory, though. All parfor loops do is spread iterations across processor cores; I assume that's true of most of the parallel programming tools.

If you're running out of RAM, you will most likely find the solution in your algorithm.

How are you representing your sparse matrix? What does the line look like where you pre-allocate or define it?

You can try this at the command prompt:

profile on
(then run your code, let it error out)
profile off
profile view

That should open up a profile viewer that tells you where your code may be having trouble, based on execution times.
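For reference, the analogous workflow outside MATLAB: most languages ship a profiler you can wrap around the failing run. A hedged Python sketch using the standard library's cProfile (illustrative only — the advice above is about MATLAB's profile command, and slow_sum is just a stand-in for your code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Stand-in for the code under investigation.
    total = 0
    for i in range(n):
        total += i * i
    return total

pr = cProfile.Profile()
pr.enable()
result = slow_sum(100000)   # run the code while profiling is on
pr.disable()

# Report functions sorted by cumulative time, like 'profile view'.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

The report shows per-function call counts and times, which is usually enough to see whether the bottleneck is the solver itself or the setup around it.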

AlephZero
Homework Helper
The key thing here is using the most appropriate eigensolution algorithm for the type of matrix you have and the number of eigenvalues/vectors you want. What programming language you use is a fairly minor issue. The speed-up between the "best" algorithm and an unsuitable one could be several orders of magnitude, not the factor of 2 or 3 you might get from parallelizing the code on a quad-core PC.

You haven't told us enough about the problem to give any specific suggestions, but this looks like a reasonable overview of several software libraries: http://www.grycap.upv.es/slepc/documentation/reports/str6.pdf

Thanks for the suggestions. Will check it out.

I am trying to find a steady-state solution (i.e. the eigenvector with eigenvalue 0 of the matrix), so I only need one eigenvector. I am using the JDQR algorithm and code from this website

http://www.win.tue.nl/casa/research/topics/jd/software.html [Broken]

With eigs I can go up to matrix sizes of 10^4 x 10^4. With this algorithm I have been successful up to a size of 10^6 x 10^6. But beyond this it takes a long time (maybe weeks) for the program to converge.
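When only the eigenpair nearest a known value is wanted (here, 0), shift-invert is the standard trick: ask the solver for the eigenvalue closest to a shift rather than computing the whole spectrum. A hedged sketch with SciPy's ARPACK wrapper — the small graph Laplacian below is chosen only because it is known to have a zero eigenvalue, and the small negative shift keeps the factored matrix nonsingular:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import eigs

# Laplacian of a 4-node path graph: known to have eigenvalue 0,
# with the constant vector as its eigenvector.
L = csc_matrix(np.array([[ 1., -1.,  0.,  0.],
                         [-1.,  2., -1.,  0.],
                         [ 0., -1.,  2., -1.],
                         [ 0.,  0., -1.,  1.]]))

# Shift-invert: find the eigenvalue nearest sigma. Using a small
# nonzero shift avoids factoring the (singular) matrix itself.
vals, vecs = eigs(L, k=1, sigma=-1e-3)

print(vals[0].real)  # approximately 0
```

MATLAB's eigs supports the same mode (eigs(A, 1, sigma)); the dominant cost becomes one sparse factorization of the shifted matrix rather than many slow iterations toward an interior eigenvalue.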

AlephZero
Homework Helper
You probably need to experiment to find the best restart strategy, and a good preconditioning matrix (if possible).

If you know you have exactly one zero eigenvalue, you might think about just doing inverse power iteration and ignoring the fact that your matrix is mathematically singular. Start with a random vector, and it should converge within 1 or 2 iterations. The EISPACK library used to do that to find the eigenvectors when the eigenvalues were already known. That would shift the problem from "doing an eigensolution" to "finding a sparse matrix solver that works well on your data."
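Inverse power iteration, as suggested here, solves a linear system at each step rather than ever forming an inverse. A hedged Python/SciPy sketch, factoring a (slightly regularized, as an illustration) matrix once with a sparse LU so that every iteration is just a cheap triangular solve — the tiny Laplacian-plus-shift matrix is a stand-in, and the assumed target eigenvalue is the one nearest 0:

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

# Small matrix with a known eigenvalue very close to 0:
# a path-graph Laplacian plus a tiny regularizing shift.
n = 4
L = csc_matrix(np.array([[ 1., -1.,  0.,  0.],
                         [-1.,  2., -1.,  0.],
                         [ 0., -1.,  2., -1.],
                         [ 0.,  0., -1.,  1.]]))
A = L + 1e-8 * identity(n, format="csc")

# Factor once; each iteration below is only a forward/back solve.
lu = splu(A)

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
for _ in range(3):              # converges in very few iterations
    x = lu.solve(x)             # x <- A^{-1} x, without forming A^{-1}
    x /= np.linalg.norm(x)

# Rayleigh quotient estimate of the eigenvalue nearest 0.
lam = x @ (A @ x)
print(lam)  # approximately 1e-8
```

Because the eigenvalue of interest is so much smaller than the rest of the spectrum, one or two solves are enough, exactly as described above; the remaining question is only which sparse factorization or iterative solver handles the matrix well.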

Mech_Engineer
Gold Member
On a completely different path, you might take a look at adapting an FEA code of some kind. Finite element analysis is after all just huge matrix math, and it's not uncommon to have models with 1e6 nodes (and hence a raw stiffness matrix size of 3e6 x 3e6). Edit: FEA may have some unique requirements for the processing of matrices though... requirements which your matrices may or may not meet.

Here's an interesting article about being able to access ANSYS's underlying math capabilities: http://www.padtinc.com/blog/post/2012/02/23/using-apdl-math.aspx

What is APDL Math?

It is an extension to the APDL command language that drives MAPDL. Although it runs in a different workspace (chunk of memory in the ANSYS database) it talks to standard APDL by importing and exporting APDL arrays (vectors or matrices). It consists, at R14, of 18 commands that can be executed at the /SOLU level at any time. All of the commands start with a * character and look and act like standard APDL commands.

...

*EIGEN, Kmatrix, Mmatrix, Cmatrix, Evals, Evects

Performs a modal solution with unsymmetric or damping matrices.

Overview of ANSYS APDL: http://www1.ansys.com/customer/content/documentation/130/ans_apdl.pdf [Broken]

Thanks a lot for the suggestions!

I have tried parallelizing the code and found it was not helpful. The overhead of splitting the matrices across different workers is much higher than the actual calculations.

I also tried inverse power iteration. It works reasonably well for small matrices, but for large matrices it takes more time to find the inverse than to find the eigenvalues.

I haven't checked FEA tools yet. Will get back with my further tries.

AlephZero
Homework Helper
I also tried inverse power iteration. It works reasonably well for small matrices, but for large matrices it takes more time to find the inverse than to find the eigenvalues.

Don't find the inverse. Just solve the equations. The inverse of a sparse matrix is usually NOT sparse. Explicitly finding the inverse of a matrix is almost always the wrong thing to do numerically.

Don't find the inverse. Just solve the equations.

Can you please elaborate on how to do this? Say I have a 10 x 10 matrix A (with numerical values) and a 10 x 1 eigenvector x that I am trying to find. Taking Ax = b, I will have 10 equations. How do I solve for x without taking the inverse in MATLAB? Sorry, but I haven't really done it any other way.
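In MATLAB the answer is the backslash operator: x = A\b solves the system by factorization without ever forming inv(A). The same idea in a hedged Python/SciPy sketch (the 3x3 system is an arbitrary small example, not the thread's matrix):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# A small sparse system A x = b.
A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 3., 1.],
                         [0., 1., 2.]]))
b = np.array([1., 2., 3.])

# Solve by sparse factorization -- the analogue of MATLAB's A\b.
# The inverse is never formed; for a large sparse A, inv(A) would
# generally be dense and far more expensive.
x = spsolve(A, b)

print(np.allclose(A @ x, b))  # True
```

The factorization exploits and largely preserves sparsity, which is why solving is so much cheaper than inverting for large sparse matrices.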

Can you give enough of a hint about the form of your sparse matrix that someone could create a few small or medium-sized samples similar enough to experiment with? Rather than just guessing, I tried generating a few of my own, but from the results I suspect the form I came up with differs enough from yours that anything I said would mean nothing. "I've got some stuff and some of it is wrong, what do I do?" makes it much harder to give an answer that is likely to be helpful or correct.

We DON'T need a million by million example matrix, but something smaller with the essential structure of yours would be a very helpful example.

Sorry I got stuck with course work and other stuff.

My matrix is square but non-Hermitian, with complex values, and thus non-symmetric. The smallest one I am using is an 81x81 matrix. I have attached the MATLAB matrix variable. For bigger matrices, you can think of this matrix repeated as a block along the diagonal.
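That block structure is easy to reproduce for testing: sparse libraries can tile a block along the diagonal without densifying anything. A hedged SciPy sketch, using a random complex 81x81 block as a stand-in for the attached M1.mat matrix (which would be loaded in its place):

```python
import numpy as np
from scipy.sparse import block_diag, random as sprandom

# Stand-in for the attached 81x81 complex non-Hermitian block.
rng = np.random.default_rng(0)
B = (sprandom(81, 81, density=0.05, format="csc", random_state=rng)
     + 1j * sprandom(81, 81, density=0.05, format="csc",
                     random_state=rng)).tocsc()

# Repeat the block 100 times along the diagonal, staying sparse.
k = 100
M = block_diag([B] * k, format="csc")

print(M.shape)  # (8100, 8100)
```

A nice side effect of exact block-diagonal structure is that the spectrum of M is just the spectrum of B repeated, so small blocks are a cheap way to sanity-check a solver before scaling up.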

#### Attachments

• M1.mat.zip (2.4 KB)
AlephZero