# The eigenvalue power method for quantum problems

hilbert2
TL;DR Summary
Repeatedly acting on a quantum state with an operator related to the Hamiltonian can apparently allow solving for all energy eigenvalues of the system.
The classical "power method" for finding a single eigenvalue of an operator works, in a finite-dimensional vector space, as follows: suppose an operator ##\hat{A}## can be written as an ##n\times n## matrix, and its unknown eigenvectors are (in Dirac bra-ket notation) ##\left|\psi_1 \right.\rangle,\dots,\left|\psi_n \right.\rangle##. Now choose a random trial vector ##\left|\psi_t \right.\rangle##, constructed so that it's unlikely to be exactly orthogonal to any of the eigenvectors ##\left|\psi_k \right.\rangle##. Then, when you operate on ##\left|\psi_t \right.\rangle## many times with the operator ##\hat{A}##, the result is

##\hat{A}^m \left|\psi_t \right.\rangle = \hat{A}^m \left( c_1 \left|\psi_1 \right.\rangle + \dots + c_n \left|\psi_n \right.\rangle \right) = c_1 a_{1}^{m} \left|\psi_1 \right.\rangle + \dots + c_n a_{n}^m \left|\psi_n \right.\rangle##,

and when ##m## is a large integer, the resulting vector is very nearly parallel to the eigenvector ##\left|\psi_k \right.\rangle## whose eigenvalue ##a_k## has the largest absolute value ##|a_k|##. Other eigenvalues can be obtained by shifting the operator as ##\hat{A} \rightarrow \hat{A} + \lambda \hat{I}## before the calculation, where ##\hat{I}## is the identity operator and ##\lambda \in \mathbb{R}##.
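In finite dimensions, this procedure takes only a few lines. Here is a minimal sketch in NumPy, using a made-up ##3\times 3## symmetric matrix as ##\hat{A}## (the matrix and iteration count are arbitrary choices of mine, not from any paper):

```python
import numpy as np

# Made-up symmetric matrix standing in for the operator A-hat; its exact
# eigenvalues are 3 and 3 +/- sqrt(3), so the dominant one is 3 + sqrt(3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

rng = np.random.default_rng(0)
v = rng.standard_normal(3)        # random trial vector |psi_t>

for _ in range(200):              # apply A repeatedly
    v = A @ v
    v /= np.linalg.norm(v)        # renormalize to avoid overflow

dominant = v @ A @ v              # Rayleigh quotient: estimate of a_k
print(dominant)                   # close to 3 + sqrt(3) ~= 4.732
```

The renormalization at each step doesn't change the direction of the vector, it just keeps the components from overflowing, which is why the method only ever needs matrix-vector products.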

As far as I know, using this method is more problematic in an infinite-dimensional vector space, such as when calculating the eigenstates of the Hamiltonian of a quantum mechanical system. For instance, suppose you attempt to find the ground state wave function of a hydrogen atom by choosing a trial function ##\psi_t (x,y,z)## and forming the function ##\hat{H}^m \psi_t (x,y,z)##, where ##\hat{H}## is the hydrogenic atom Hamiltonian. In an infinite-dimensional vector space, it's not guaranteed that a vector/state is still normalizable after acting on it with an operator, so this solution scheme doesn't necessarily work properly. This can be a problem even if you're able to choose the initial function ##\psi_t (x,y,z)## to be orthogonal to all scattering-state eigenfunctions with eigenvalues ##E>0##. The problem seems to be connected to the "boundedness" of an operator in Hilbert space, as described on pages 453-457 of Sadri Hassani's Mathematical Physics: A Modern Introduction to Its Foundations.
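A finite-dimensional caricature of this failure mode: if you discretize a Hamiltonian on a grid, plain power iteration converges to the top of the discretized spectrum (the analogue of the problematic high-energy states), not to the ground state. A toy sketch, with a finite-difference particle-in-a-box Hamiltonian of my own choosing:

```python
import numpy as np

# Finite-difference 1D "particle in a box" Hamiltonian on [0, 1] (a toy
# example of mine): H ~ -d^2/dx^2, whose top eigenvalue grows like 1/dx^2
# as the grid is refined -- a caricature of an unbounded operator.
n = 200
dx = 1.0 / (n + 1)
H = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2

rng = np.random.default_rng(1)
v = rng.standard_normal(n)        # random trial function on the grid
for _ in range(1000):
    v = H @ v                     # plain power iteration on H itself
    v /= np.linalg.norm(v)

E = v @ H @ v                     # Rayleigh quotient
evals = np.linalg.eigvalsh(H)
# E lands near the top of the spectrum (grid-scale oscillations),
# nowhere near the ground state energy evals[0]:
print(E, evals[0], evals[-1])
```

The finer the grid, the larger the energy that power iteration runs off to, mirroring how ##\hat{H}^m \psi_t## in the continuum gets dominated by arbitrarily high-energy components.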

A new arXiv preprint, not yet peer-reviewed, claims to describe a way to calculate ##\it{all}## the energy eigenvalues of a quantum system with this computational tool, without the infinite dimensionality being a problem:

https://arxiv.org/pdf/2211.06303.pdf

##\bf{Abstract}##

We present a new power method to obtain solutions of eigenvalue problems. The method can determine not only the dominant or lowest eigenvalues but also all eigenvalues without the need for a deflation procedure. The method uses a functional of an operator (or a matrix). The method can freely select a solution by varying a parameter associated to an estimate of the eigenvalue. The convergence of the method is highly dependent on how closely the parameter to the eigenvalues. In this paper, numerical results of the method are shown to be in excellent agreement with the analytical ones.

Does this seem legit to you others? It seems that this calculation only requires repeatedly applying an operator to a state, similar to a matrix-vector multiplication in finite dimensions, which is much less computationally intensive than a conventional solution of an eigenvalue problem. I haven't studied every possible way of solving quantum eigenvalue problems recently, so I can't judge how large an improvement this is over existing methods.

gentzen
The method seems plausible to me. The previous methods either worked with polynomials, favoring the highest eigenvalues, or with exponentials, favoring the lowest eigenvalues. The method of the paper basically creates a peak, in a density function over the eigenvalues, at a nonzero finite value of the energy, so it should blow up the components nearest the peak value. I'm not generally familiar with these techniques, but I imagine that degenerate eigenvalues are difficult to detect.

topsquark
This is a good paper, with well-explained, elaborated, and numerically tested new ideas. The main advantage is that it avoids the need for (shifted) inverse operations. What is needed is a symmetric (or Hermitian) matrix (or operator), and good enough estimates of its smallest and largest eigenvalues. Let me explain why: an estimate of the smallest eigenvalue is needed because the function ##E \exp(-E/E_p)## is nicely bounded for positive ##E## with a peak at ##E_p##, but is unbounded for negative ##E## and grows (or rather falls) quickly. An estimate of the largest eigenvalue is needed because the natural number ##M## in the approximation ##E\exp(-E/E_p) \approx E (1-E/(E_p M))^M## should be chosen such that ##E_p M \geq E_N##, where ##E_N## is the largest eigenvalue.

Because the smallest and largest eigenvalues can be estimated by the ordinary power method, the main limitation of this method is that it only works for matrices (or operators) with real eigenvalues. Its convergence might also be slower than that of the shifted inverse power method. But in cases where (shifted) inverse operations are available only at high computational cost, it could still win in practice.
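The filter described above can be sketched as follows: apply ##f(\hat{H}) = \hat{H}(1 - \hat{H}/(E_p M))^M## repeatedly, so that the eigen-component whose energy lies closest to the peak at ##E_p## gets amplified. This is only my own toy reconstruction with a made-up spectrum and parameters, not the preprint's actual algorithm:

```python
import numpy as np

# Toy Hermitian matrix with a known, made-up spectrum (all parameters
# below are my own illustrative choices, not the preprint's).
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
spectrum = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 9.0])
H = Q @ np.diag(spectrum) @ Q.T

E_p = 4.2          # rough estimate of the eigenvalue we want (true: 4.0)
M = 64             # chosen so that E_p * M >= E_max = 9.0, as noted above

def filtered(v):
    """Apply f(H) = H (I - H/(E_p M))^M to v using only mat-vec products."""
    for _ in range(M):
        v = v - (H @ v) / (E_p * M)
    return H @ v

v = rng.standard_normal(6)
for _ in range(300):               # power iteration with f(H) in place of H
    v = filtered(v)
    v /= np.linalg.norm(v)

E = v @ H @ v                      # Rayleigh quotient of the filtered vector
print(E)                           # ~= 4.0, the eigenvalue nearest E_p
```

Moving ##E_p## toward 5.5 makes the iteration converge to that eigenvalue instead, which seems to be the "free selection by varying a parameter" the abstract refers to.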
