Lanczos Method: Calculating Smallest Eigenvalues

Thread starter: Hassan2

Summary:
The Lanczos method is recognized as a simpler alternative to the subspace iteration for calculating eigenvalues, but it typically focuses on the largest eigenvalues. To compute the smallest eigenvalues, one can find the eigenvalues of the inverse matrix instead, using techniques like factorization. Issues with orthogonality can lead to repeated eigenvalues in the results, which may stem from numerical errors in iterative solvers. While there are various solutions in the literature to address these challenges, finding a suitable method can be complex and requires a deep understanding of the specific matrices involved. Ultimately, some users may find that subspace iteration suffices for their needs.
Hassan2
Having finished with the subspace iteration method, I think I can now understand and implement the Lanczos method. In fact the Lanczos algorithm is simpler than subspace iteration, but it seems to me that the common Lanczos algorithms give the largest eigenvalues rather than the smallest ones. If that is true, does anyone have a reference for the case of calculating the smallest eigenvalues?

Your help would be highly appreciated.
 
Just find the eigenvalues of the inverse of the matrix. The eigenvectors of a matrix and its inverse are the same.

You don't invert the matrix explicitly. Instead of multiplying a vector by the matrix to calculate ##y = Ax##, you solve the equations ##Ay = x##. (You want to factorize ##A = LL^T## or ##A = LDL^T## before you start the iterations, of course.)
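To make the "factor once, solve per iteration" idea concrete, here is a minimal sketch using SciPy's Cholesky routines. The matrix, names, and sizes are illustrative (not from the thread), and it assumes ##A## is symmetric positive definite:

```python
# Sketch of the "apply A^{-1}" step without forming A^{-1}.
# Assumes A is symmetric positive definite; A, n, apply_inverse are
# illustrative names, not anything from the thread.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)    # SPD test matrix

factor = cho_factor(A)         # A = L L^T, computed ONCE before iterating

def apply_inverse(x):
    """Return y with A y = x, i.e. y = A^{-1} x, via triangular solves."""
    return cho_solve(factor, x)

x = rng.standard_normal(n)
y = apply_inverse(x)
assert np.allclose(A @ y, x)
```

Each Lanczos step then calls `apply_inverse` in place of the matrix-vector product, so the expensive factorization cost is paid only once.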

If ##A## is singular, you can add a constant multiple of the unit matrix before you factorize it. If you are finding the eigenvalues ##\lambda_i## of ##(A + cI)^{-1}##, the eigenvalues of ##A## are ##1 / \lambda_i - c##.

If you are solving a generalized eigenproblem ##Kx = \lambda Mx## and ##K## is singular, you add a multiple of ##M## to ##K## instead of a multiple of ##I##.
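The shift-invert relation above is easy to verify numerically. This sketch (small dense symmetric matrix, illustrative values of ##c## and ##n##) checks that if ##\mu_i## are the eigenvalues of ##(A + cI)^{-1}##, then ##1/\mu_i - c## recovers the eigenvalues of ##A##:

```python
# Verify the shift-invert eigenvalue relation: eigenvalues of (A + cI)^{-1}
# are mu_i = 1/(lambda_i + c), so lambda_i = 1/mu_i - c.
# (Illustrative sketch; explicit inversion is fine at this tiny size.)
import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.standard_normal((n, n))
A = (B + B.T) / 2              # symmetric, possibly indefinite
c = 10.0                       # shift chosen so A + cI is nonsingular

mu = np.linalg.eigvalsh(np.linalg.inv(A + c * np.eye(n)))
recovered = np.sort(1.0 / mu - c)
direct = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(recovered, direct)
```

Note that the largest ##\mu_i## correspond to the smallest ##\lambda_i + c##, which is exactly why Lanczos applied to the shifted inverse converges to the smallest eigenvalues of ##A## first.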
 
Thanks. It works now, but there is a problem which I think is due to loss of orthogonality: I have many instances of the same eigenvalue in the final list of eigenvalues. Or perhaps it's due to the numerical error introduced when solving the equation ##Ax_{k+1} = x_k## (I use an iterative solver).
 
Hassan2 said:
Thanks. It works now, but there is a problem which I think is due to deviation from orthogonality: I have many instances of the same eigenvalue in the final list of eigenvalues.

Remember my post where I said "you really, really don't want to write your own Lanczos routine"? If you keep trying, you are just starting the adventure game of finding out why!

Warning: I spent a long time on this several years ago, and gave up after several months because my code was never "100% reliable" on new problems.

If you read the literature, there are lots of methods that claim to fix this problem - but only if you can find a paper that works for your matrices, AND you can understand it!

Have fun.
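One of the standard mitigations in the literature for the repeated ("ghost") eigenvalues described above is full reorthogonalization: project each new Lanczos vector against all previous ones. A minimal sketch (illustrative names; a sketch under the assumption of a symmetric matvec, not production code):

```python
# Minimal symmetric Lanczos with FULL reorthogonalization, a common cure
# for ghost (spuriously repeated) eigenvalues caused by loss of
# orthogonality. Illustrative sketch only; names are not from the thread.
import numpy as np

def lanczos_full_reorth(matvec, n, m, rng=np.random.default_rng(2)):
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(m):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization: remove roundoff components along ALL
        # previous Lanczos vectors (O(n*m) extra work per step).
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)   # Ritz values

# Quick check on a small symmetric matrix (m = n recovers all eigenvalues)
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 30)); A = (A + A.T) / 2
ritz = lanczos_full_reorth(lambda v: A @ v, 30, 30)
true = np.linalg.eigvalsh(A)
assert np.allclose(np.sort(ritz), np.sort(true))
```

Cheaper alternatives in the literature include selective and partial reorthogonalization, which monitor the loss of orthogonality and reorthogonalize only when needed.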
 
Yes I remember well. I think I should give up too. Perhaps the subspace iteration is enough for my purpose.

Thanks for the advice.
 
