Efficiently Computing Eigenvalues of a Sparse Banded Matrix

In summary, the conversation discusses a Hamiltonian represented by a penta-diagonal matrix and the search for an efficient algorithm or routine to find its eigenvalues and eigenvectors. LAPACK is suggested, specifically dsyev for symmetric matrices and dsbev for symmetric band matrices. However, it is noted that there may not be much improvement, since LAPACK does not take advantage of sparse matrices. ARPACK is mentioned as another option, but its documentation is difficult to follow.
  • #1
Jimmy and Bimmy
I have a Hamiltonian represented by a penta-diagonal matrix

The first pair of bands is directly adjacent to the diagonal. The other two bands are N places above and below the diagonal.

Can anyone recommend an efficient algorithm or routine for finding the eigenvalues and eigenvectors?
  • #3
DrClaude said:
Thanks for the reply.

I have been using dsyev from LAPACK (which is for symmetric matrices). I switched to dsbev (which is for symmetric band matrices), but didn't see much improvement. This is because, even though I only have 5 bands (diagonal + 2 upper + 2 lower), the outer 2 bands are a good distance away, and LAPACK doesn't take advantage of sparse matrices.

Someone recommended ARPACK, but the documentation can be hard to follow.
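To see why dsbev gains little here: LAPACK band storage holds one row per diagonal inside the bandwidth, so a matrix whose outermost band sits N places from the diagonal costs N + 1 rows of storage and O(N²·n) work even if only three of those bands are nonzero. A minimal sketch using SciPy's eig_banded (which wraps the LAPACK symmetric band solvers), with made-up matrix entries and sizes, not the poster's actual Hamiltonian:

```python
import numpy as np
from scipy.linalg import eig_banded

# Hypothetical penta-diagonal Hamiltonian: main diagonal, first
# off-diagonals, and a band N places from the diagonal.
n, N = 200, 20

# LAPACK upper band storage: row (N + i - j) holds element (i, j),
# so the array has N + 1 rows even though only three are nonzero --
# this is why the band routines gain little when N is large.
ab = np.zeros((N + 1, n))
ab[N, :] = 2.0        # main diagonal
ab[N - 1, 1:] = -1.0  # first superdiagonal
ab[0, N:] = -0.1      # band N above the diagonal

vals = eig_banded(ab, eigvals_only=True)  # all n eigenvalues, ascending
```

Since the matrix is symmetric, only the upper triangle needs to be stored; eig_banded fills in the lower bands implicitly.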
 

What is a sparse banded matrix?

A sparse banded matrix is a type of matrix where the majority of the elements are zero, with non-zero elements clustered around the main diagonal. This results in a matrix with a band-like structure, hence the name "banded".
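As a concrete illustration (a minimal SciPy sketch with arbitrary values), a tridiagonal matrix is the simplest banded case: only the main diagonal and the first sub- and super-diagonals are nonzero, so the fraction of nonzero entries shrinks as the matrix grows.

```python
import numpy as np
import scipy.sparse as sp

# A small banded matrix: nonzeros only on the main diagonal and the
# first sub/super-diagonals (bandwidth 1).
n = 8
A = sp.diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)],
             offsets=[-1, 0, 1]).toarray()

# Only 3n - 2 of the n^2 entries are nonzero.
nonzero_fraction = np.count_nonzero(A) / A.size
```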

Why is it important to efficiently compute eigenvalues of a sparse banded matrix?

Eigenvalues are an important mathematical concept used in a variety of applications, including data analysis and machine learning. Sparse banded matrices arise frequently in these fields, and computing their eigenvalues efficiently allows for faster and more accurate analysis.

What techniques are commonly used to efficiently compute eigenvalues of a sparse banded matrix?

Some commonly used techniques include the Lanczos algorithm, the Arnoldi iteration, and the Jacobi-Davidson method. These methods are specifically designed for sparse matrices and take advantage of their unique structure to reduce the computational complexity of computing eigenvalues.
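The Lanczos algorithm is what ARPACK implements for symmetric problems, and SciPy exposes it as eigsh, which sidesteps the documentation issue raised above. It needs only matrix-vector products, so the cost scales with the number of nonzeros rather than the bandwidth. A sketch with an illustrative penta-diagonal matrix (made-up entries, not the poster's Hamiltonian):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative penta-diagonal symmetric matrix with far bands at
# offset N; stored sparsely, so only the 5 bands cost anything.
n, N = 500, 50
H = sp.diags([-0.1 * np.ones(n - N), -1.0 * np.ones(n - 1),
              2.0 * np.ones(n),
              -1.0 * np.ones(n - 1), -0.1 * np.ones(n - N)],
             offsets=[-N, -1, 0, 1, N], format="csr")

# Lanczos via ARPACK: the six smallest-algebraic eigenpairs.
vals, vecs = eigsh(H, k=6, which="SA")
```

Note that Lanczos-type methods return a few selected eigenpairs (e.g. the lowest ones, often all a physics problem needs), not the full spectrum.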

How can parallel computing be used to further improve the efficiency of computing eigenvalues of a sparse banded matrix?

Since sparse banded matrices often have large dimensions, parallel computing can be used to split the computation across multiple processors or machines. This allows for faster computation and can greatly improve the efficiency of computing eigenvalues.
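One simple way to split the work, sketched below under the assumption that a few eigenvalues are wanted from different parts of the spectrum: each worker runs shift-invert Lanczos around a different target value ("spectrum slicing"), and the slices run concurrently. Matrix entries and shift values are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from concurrent.futures import ThreadPoolExecutor

# Illustrative tridiagonal matrix; its spectrum lies in (0, 4).
n = 500
H = sp.diags([np.ones(n - 1), 2.0 * np.ones(n), np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")

def slice_near(sigma, k=4):
    # Shift-invert targets the k eigenvalues closest to sigma.
    return eigsh(H, k=k, sigma=sigma, return_eigenvectors=False)

# Two workers attack opposite ends of the spectrum concurrently.
with ThreadPoolExecutor() as pool:
    low, high = pool.map(slice_near, [0.1, 3.9])
```

For genuinely large problems one would reach for a distributed eigensolver (e.g. SLEPc or ScaLAPACK) rather than a thread pool, but the slicing idea is the same.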

Are there any limitations to efficiently computing eigenvalues of a sparse banded matrix?

While there are efficient techniques for computing eigenvalues of sparse banded matrices, they may not be suitable for matrices with extremely large dimensions or for matrices with a highly irregular band structure. In these cases, other techniques may need to be used or the matrix may need to be approximated.
