Inverting big matrices. REALLY BIG

  • Context: Graduate
  • Thread starter: Okefenokee
  • Tags: Matrices
SUMMARY

The discussion focuses on efficient methods for handling extremely large sparse matrices, specifically a 1 gazillion by 1 gazillion matrix appearing in a linear system of the form A*x = b. Key techniques mentioned include Gaussian elimination and the use of block matrix structures to facilitate parallel programming. The importance of numerical stability and the impact of I/O operations on performance are also highlighted, emphasizing the need for algorithms that optimize both computation and data handling.

PREREQUISITES
  • Understanding of sparse matrix representation
  • Familiarity with Gaussian elimination techniques
  • Knowledge of parallel programming concepts
  • Basic principles of numerical stability in algorithms
NEXT STEPS
  • Research efficient algorithms for sparse matrix inversion
  • Explore block matrix multiplication techniques
  • Learn about parallel computing frameworks for matrix operations
  • Investigate numerical stability methods for large-scale computations
USEFUL FOR

Mathematicians, data scientists, and software engineers working with large-scale matrix computations, particularly those involved in optimization and parallel processing of sparse matrices.

Okefenokee

How is it done?

Let's say I have a sparsely populated 1 gazillion by 1 gazillion square matrix in a linear system like A*x = b. What efficient methods exist to do the following: find its rank, invert it if it has full rank, and find the null vectors if it does not?

Also, it would be great if the technique lends itself to parallel programming. It's for a pet project that I'm playing with. I can think of a couple of ways to do this, but I'm sure my solution would be sloppy compared to what is already out there. One of my ideas was to package the inner matrices into manageable blocks and do Gaussian elimination in stages.
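A minimal sketch of the usual "solve instead of invert" approach, assuming the matrix fits in memory in SciPy's sparse format (the thread never names a library; the test matrix size, density, and tolerances below are purely illustrative). Explicitly forming the inverse of a sparse matrix is normally avoided because the inverse is generally dense; one factorizes or iterates on A*x = b instead.

```python
# Minimal sketch, not from the thread: solve A x = b with SciPy's sparse
# solvers instead of forming A^{-1} explicitly (the inverse of a sparse
# matrix is generally dense). Sizes, density, and tolerances are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
# Random sparse test matrix, made diagonally dominant so the solve is well-posed.
A = sp.random(n, n, density=1e-3, format="csr") + 10.0 * sp.eye(n, format="csr")
b = np.random.rand(n)

# Direct route: sparse LU factorization (spsolve prefers CSC storage).
x_direct = spla.spsolve(A.tocsc(), b)

# Iterative route: GMRES only needs matrix-vector products, which is what
# makes it attractive for huge matrices and for parallel/distributed settings.
x_iter, info = spla.gmres(A, b, atol=1e-10)

print("LU residual:   ", np.linalg.norm(A @ x_direct - b))
print("GMRES residual:", np.linalg.norm(A @ x_iter - b), "(info =", info, ")")
```

For the rank-deficient case, one common way to probe approximate rank and null vectors of a large sparse matrix is to look at its smallest singular triplets (SciPy exposes this via scipy.sparse.linalg.svds), though truly huge problems tend to need specialized rank-revealing factorizations rather than anything this naive.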
 
It all depends on whether we can use further properties of the matrix. In general I would look for an efficient algorithm for matrix multiplication. I'm not sure where the current record for the matrix multiplication exponent stands, but I suspect numerical stability is more of a problem than implementing an algorithm that only pays off for really big matrices, all the more so because the I/O operations far outweigh the calculation time.
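To make the data-movement point concrete, here is an illustrative sketch (not from the thread; the function name and tile size are made up) of blocked matrix multiplication: the work is grouped into tiles so that each tile pair stays in fast memory, and each output tile is independent work that can be farmed out in parallel. Real codes rely on BLAS/ScaLAPACK, which do this internally.

```python
import numpy as np

def blocked_matmul(A, B, block=256):
    """Compute A @ B tile by tile (illustrative only). Each (i, j) output
    tile is independent, which makes the scheme easy to parallelize and
    friendly to caches or out-of-core storage."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=np.result_type(A, B))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # NumPy slicing clips at the array edge, so ragged tiles are fine.
                C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 400))
B = rng.standard_normal((400, 300))
print(np.allclose(blocked_matmul(A, B, block=128), A @ B))  # True
```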
 
