Solutions of Large Sparse Matrix Eqs

  • #1
Hi All,

I am trying to solve 5 coupled 1st-order PDEs on a 100 by 70 grid, so I end up with 35000 equations. The method I am attempting to solve these equations with is Newton-Raphson, which leads me to compute a Jacobian J that is 35000 x 35000. In each Newton-Raphson step I have to solve J v = -F[x1, x2, ..., x35000], where F is the vector of equations and v is the update. Then I add this v to my current guess and iterate.
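For concreteness, here is a minimal sketch of that Newton-Raphson loop in Python, using a toy 2-equation system as a stand-in for the 35000 (the functions F and J below are purely illustrative):

```python
import numpy as np

# Toy stand-in for the real system: two coupled nonlinear equations
#   F1(x) = x0^2 + x1 - 3 = 0
#   F2(x) = x0 + x1^2 - 5 = 0
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def J(x):
    # Analytic Jacobian dF_i/dx_j
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([1.0, 1.0])                  # initial guess
for _ in range(20):
    v = np.linalg.solve(J(x), -F(x))      # Newton step: solve J v = -F
    x = x + v                             # add the update to the guess
    if np.linalg.norm(F(x)) < 1e-12:      # converged?
        break
```

For the real problem the only structural change is that `np.linalg.solve` on a dense J gets replaced by a sparse solve, which is what the rest of this thread is about.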

Anyway, the biggest problem I'm having now is getting the solution of this matrix equation J v = -F[x1, x2, ..., x35000], given that I have such a large matrix. The good thing is that the matrix is sparse. The bad thing is that I don't really know how to take advantage of that. I've read online about lots of libraries and packages that will solve sparse systems for me, but I would like to understand just what they are doing.

So far I understand the basic idea of compressed-column format (CCF), and how it reduces the storage of the full matrix. What I don't understand is how you then operate on this CCF matrix to get the solution of your original matrix equation.
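To make the storage scheme concrete, here is a small pure-Python sketch of compressed-column storage, plus a matrix-vector product computed directly from the compressed arrays; that matvec is one of the basic operations a sparse solver builds on:

```python
# Compressed-column storage of the 3 x 3 matrix
#     [[4, 0, 9],
#      [0, 7, 0],
#      [0, 5, 2]]
# The nonzeros are listed column by column:
values = [4.0, 7.0, 5.0, 9.0, 2.0]  # the nonzero entries
rows   = [0,   1,   2,   0,   2]    # row index of each entry
colptr = [0, 1, 3, 5]               # column j is values[colptr[j]:colptr[j+1]]

def csc_matvec(values, rows, colptr, x, n_rows):
    """Compute y = A x using only the three compressed arrays."""
    y = [0.0] * n_rows
    for j in range(len(colptr) - 1):             # loop over columns of A
        for k in range(colptr[j], colptr[j + 1]):
            y[rows[k]] += values[k] * x[j]       # scatter column j scaled by x[j]
    return y

y = csc_matvec(values, rows, colptr, [1.0, 1.0, 1.0], 3)  # A times (1,1,1)
```

Iterative methods (conjugate gradient and friends) only ever touch A through products like this, which is why the compressed format alone is enough to drive them; direct methods additionally need factorization machinery.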

You don't have to go into a long discussion of it if you don't want to, I totally understand. Just please point me to a good reference. I've been trying to find one for the past couple of days and it's driving me nuts. Everyone explains the compression formats and then says "here is the following library/software that solves sparse matrices." But HOW? Please help.

Thanks in advance.
  • #2
You probably don't really want to learn all the details about doing this efficiently yourself, unless you want to spend a few years studying numerical analysis instead of solving your differential equations.

FWIW, my recommendation for a good sparse solver package would be SPOOLES - it includes a few hundred pages of documentation, which might answer some of your questions about "how it works".
  • #3
Hey AlephZero,

Thanks for the quick reply. You're right, I certainly don't want all the details. This project is part of my physics PhD, so I do have a couple of years at my disposal to just learn stuff. Nevertheless, I probably don't want to learn that much about computation.

That being said, I would still really like a good baby example of what's going on. I've read some chapters of Numerical Recipes; they show how to compress a band-diagonal matrix, which was great, but then they don't really explain how to solve it.

I've read about computing matrix-vector products and transposes of these compressed-column-formatted sparse matrices, but to what end? Do those things help in the overall solution of Ax = b, where A is my original sparse matrix? Maybe this documentation will answer those questions.

But if you or anyone else reading can point me to something that gives the generic idea of what's going on, and shows it with a small example or two, that would be the best.

Thanks again.
  • #4
The basic math behind direct solution algorithms (LU decomposition, etc.) and iterative methods (e.g. conjugate gradient) is the same for both full and sparse matrices.
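As a quick illustration that both families solve the same problem, here is a hedged SciPy sketch on a small symmetric positive definite model matrix (for an unsymmetric Jacobian one would swap conjugate gradient for GMRES or BiCGSTAB):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse SPD test matrix: the 1-D Laplacian
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spla.spsolve(A, b)                 # direct: sparse LU factorization
x_iter, info = spla.cg(A, b, maxiter=5000)    # iterative: conjugate gradient
```

The direct route factors A once; the iterative route only touches A through matrix-vector products, which is why it pairs so naturally with compressed sparse storage.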

The "clever part" for sparse matrices is in figuring out in advance which elements of the input matrix are guaranteed to be 0 when you carry out the algorithm, finding an efficient way to store as few of the 0's as possible, and not wasting time doing calculations with them.

A good book on how to do this in "simple" cases like banded matrices is the LINPACK Users' Guide; you can probably find it online with Google. The LINPACK library has been superseded by LAPACK now, but the basic ideas haven't changed. It works through examples in detail and includes the computer code, and it also has a lot of good information about error analysis.

For sparse matrices without so much regular "structure", the first step is reordering the equations to minimize the "fill-in" that occurs when the solution algorithm turns zeros in the input matrix into non-zero values. Finding the globally optimum order is an impossibly expensive process. Practical methods range from quick and simple heuristic algorithms to symbolic (non-numerical) computation based on graph theory. One of the SPOOLES documents describes the various reordering methods available in that package.
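The effect of reordering is easy to see on a toy case. The sketch below uses SciPy's reverse Cuthill-McKee routine (one of the quick heuristic orderings) to unscramble a permuted tridiagonal matrix; recovering a narrow band is exactly what keeps fill-in down during factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

n = 20
# A matrix that is tridiagonal (bandwidth 1) in the "right" ordering
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Scramble the unknowns so the sparsity pattern looks unstructured
p = np.random.default_rng(0).permutation(n)
P = sp.identity(n, format="csr")[p]
A = (P @ T @ P.T).tocsr()

def bandwidth(M):
    """Largest distance of any nonzero from the main diagonal."""
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

# Reverse Cuthill-McKee finds a permutation that shrinks the bandwidth
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
R = sp.identity(n, format="csr")[perm]
B = (R @ A @ R.T).tocsr()
```

Here B has a far smaller bandwidth than the scrambled A, so an LU factorization of B creates far fewer new nonzeros.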
  • #5

Thanks again for a quick reply. Yes, I've started digging through the SPOOLES documentation and have found the section on reordering. To be honest, it's all going over my head. I was thinking I would install the libraries and play around with the software to get a better idea of what's going on, but even the installation process is scaring me a little.

Anyway, I'll try to get the LAPACK user's guide and see if that helps with any of this. I might just need to learn how to do this with Matlab or something dumber, more at my level. :smile:
  • #6
If you use the "obvious" banded storage arrangement, you will have 35,000 rows and a semi-bandwidth of about 70 (the size of the grid) x 5 (the number of PDEs) x 3 or 5 (the number of rows in your finite difference operator).

Matlab should be able to handle a problem of size 35000 x 1750 without spontaneous combustion, but a more efficient sparse solver might run an order of magnitude faster.
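For the banded route, SciPy's LAPACK-backed banded solver gives the flavor. A hedged sketch with a tiny tridiagonal system (semi-bandwidth 1, standing in for the ~1750 of the real problem):

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system A x = b in banded storage: one row per diagonal
n = 6
ab = np.zeros((3, n))
ab[0, 1:]  = -1.0    # superdiagonal
ab[1, :]   =  2.0    # main diagonal
ab[2, :-1] = -1.0    # subdiagonal
b = np.ones(n)

# (1, 1) = one subdiagonal, one superdiagonal
x = solve_banded((1, 1), ab, b)

# Rebuild the dense matrix only to verify the residual
A = np.diag(np.full(n, 2.0)) \
    + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)
```

The solver never sees the zeros outside the band, which is the whole point: storage and work scale with n times the bandwidth rather than n squared.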

Getting some believable results from a straightforward program is a good first step, even if you need to write a more sophisticated program later.
  • #7

Yes, I definitely think I will go with Matlab or Mathematica for now. Mathematica is preferred because the rest of my "code" is written in a Mathematica notebook. I know Matlab is supposed to be better for computational stuff, so I'll eventually try to take it there and see what happens. My biggest problem with these programs is that I'm not sure whether the sparse stuff is built in or whether I'm supposed to tell it. As of right now, solving the matrix equations takes a full 24 hrs in Mathematica.

None of this would really matter if I could just figure out how fast the Newton-Raphson would converge. I've read some papers by one of these authors where they worked out a similar problem and said they got everything with only 10 iterations. If I could find a speed-up in either Mathematica or Matlab that would solve the equation in ~6 hrs instead of 24, and if it required only 10 iterations, I'd be in good shape.

Although I have no idea how they made their thing converge in 10 iterations. I read in Numerical Recipes that the number of iterations is comparable to the size of the Jacobian, assuming you're using successive over-relaxation. These researchers used a Jacobian that was 617 x 617 and only needed 10 iterations? What the heck?

Sorry I'm just ranting at this point. No need to reply. Thanks again for all the help, and comments, I really appreciate it.
  • #8
Hi all --

I second the thanks to AlephZero for useful replies! :)

I am also in a situation where it's desirable to solve a large Ax = b problem, where A is large (N x N with N ~ 10^6) and sparse (nonzero entries are typically < 10 N in number). The thing is, my A has some extra known structure, and I was wondering whether there exist more "targeted" algorithms for this particular case?

First of all, A is symmetric. Secondly, it can be decomposed into two parts, say A = T + Q, where each nonzero entry of A is either completely in T or in Q, and where the following is known:

The matrix T is block-diagonal, where the blocks are of various sizes from 2 x 2 to about 1000 x 1000. Each of these blocks, furthermore, is tridiagonal. So T is itself a tridiagonal matrix, with occasional "narrowings" along the diagonal to give the block structure.

The matrix Q has nonzero entries scattered throughout (except, as noted, where T is nonzero). But by applying a known permutation P^-1 to Q, we get a matrix R, which is itself block-diagonal. The blocks of R are typically also of the order of magnitude 2 x 2 to 1000 x 1000; each row/column usually holds ~ 1 to 5 nonzero entries.

So, to sum up, we have a symmetric A = T + PR, where both T and R are block-diagonal, T is tridiagonal and the blocks of R are themselves usually very sparse (except possibly when they are very small). Can this structure be taken advantage of?
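One standard way to exploit that kind of structure (assuming A is also positive definite, which conjugate gradient needs) is to use the easy part, the block-tridiagonal T, as a preconditioner: each CG iteration then only requires a cheap tridiagonal solve. A hedged SciPy sketch with random stand-in matrices, not your actual T and Q:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Stand-in for the tridiagonal part T (symmetric, diagonally dominant)
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
# Stand-in for the scattered symmetric part Q (small random entries)
S = sp.random(n, n, density=0.01, random_state=1)
Q = 0.1 * (S + S.T)
A = (T + Q).tocsc()
b = np.ones(n)

# Preconditioner: exact solve with the tridiagonal part only
T_lu = spla.splu(T)
M = spla.LinearOperator((n, n), matvec=T_lu.solve)

x, info = spla.cg(A, b, M=M)   # preconditioned conjugate gradient
```

The closer T is to dominating A, the closer the preconditioned operator is to the identity and the fewer iterations CG needs. If A turns out to be indefinite, the same T-based preconditioner can be handed to MINRES or GMRES instead.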

Thanks in advance for any enlightenment!
  • #9
Hey nochmal,

I'm not really sure of any software/library that will take advantage of this extra structure. However, I can tell you what eventually solved my problem: I just had to tell Mathematica to use a sparse matrix solver. Apparently Mathematica has the LAPACK libraries in it. What I entered was essentially: z = LinearSolve[J, v, Method → "Multifrontal"]

Anyways, I found this link to be extremely useful:

If you're not using Mathematica, I would suggest taking it over there (if that's not too much work) and seeing how long the program takes to solve your system. If it can do it really fast, it might be worth it to move everything into Mathematica.
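For readers not using Mathematica: the same kind of direct sparse solve is available in, e.g., SciPy (which wraps SuperLU rather than a multifrontal code, but plays the same role). A sketch with a 2-D Laplacian as a small stand-in for the Jacobian:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# 2-D Laplacian on a 50 x 50 grid: a 2500 x 2500 sparse stand-in
# for the 35000 x 35000 Jacobian (about 5 nonzeros per row)
m = 50
I = sp.identity(m)
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
J = (sp.kron(I, L) + sp.kron(L, I)).tocsc()
v = np.ones(J.shape[0])

z = spsolve(J, v)   # sparse LU factor-and-solve in one call
```

As with the Mathematica call, the one-line interface hides the reordering and factorization steps discussed earlier in the thread.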

All the best.

1. What are large sparse matrix equations?

Large sparse matrix equations are mathematical equations that involve matrices that have a large number of rows and columns, but most of the elements in the matrix are zero. This means that the matrix is mostly empty, hence the term "sparse". These types of equations often arise in scientific and mathematical problems that involve a large amount of data or variables.

2. Why are large sparse matrix equations important?

Large sparse matrix equations are important because they allow us to solve complex problems that involve a large amount of data or variables. They are also commonly used in scientific and engineering applications, such as computational fluid dynamics, structural analysis, and machine learning. Additionally, since the matrices are mostly empty, they require less storage space and computational resources compared to dense matrices.

3. How do you solve large sparse matrix equations?

There are several methods for solving large sparse matrix equations, including direct methods and iterative methods. Direct methods factor the matrix (for example with LU decomposition) and solve exactly up to rounding error, while iterative methods start from an initial guess and refine it until the residual is small enough. Additionally, specialized software, such as sparse direct and iterative solver libraries, can be used to solve these types of equations efficiently.
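One practical point worth adding: direct methods pay the factorization cost once and can then reuse the factors for many right-hand sides, which matters whenever the same matrix must be solved repeatedly. A hedged SciPy sketch:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse SPD example matrix (1-D Laplacian)
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)            # factor once (the expensive step)

# ...then solve cheaply for as many right-hand sides as needed
b1 = np.ones(n)
b2 = np.arange(n, dtype=float)
x1 = lu.solve(b1)
x2 = lu.solve(b2)
```

Iterative methods, by contrast, start from scratch for each new right-hand side (unless a good initial guess or preconditioner carries over).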

4. What are some challenges in solving large sparse matrix equations?

One of the main challenges in solving large sparse matrix equations is the high computational cost: even though the matrices are mostly empty, factoring or iterating on them can take significant time and memory. Additionally, sparse factorizations often restrict pivoting in order to preserve sparsity, which can lead to numerical accuracy issues that need to be carefully addressed during the solution process.

5. How are large sparse matrix equations used in real-world applications?

Large sparse matrix equations have a wide range of applications in the real world. They are commonly used in scientific and engineering fields for solving complex problems, such as simulating fluid flow, analyzing structures, and predicting material behavior. They are also heavily used in data analysis and machine learning, where large amounts of data and variables need to be processed efficiently.
