How do I numerically find eigenvectors for given eigenvalues?


Discussion Overview

The discussion centers on the numerical calculation of eigenvectors corresponding to given eigenvalues for square matrices. Participants explore various methods and algorithms, including the QR algorithm, singular value decomposition (SVD), and other numerical techniques relevant to linear algebra.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant describes successfully using the QR algorithm to find eigenvalues and seeks guidance on finding corresponding eigenvectors.
  • Another participant suggests solving the eigenvalue equation directly to find eigenvectors, referencing the equation Av_i = a_iv_i.
  • A different participant elaborates on an algorithm involving QR decomposition and provides a code example that successfully computes eigenvalues and eigenvectors.
  • One participant mentions that many programming libraries have built-in functions for computing eigenvalues and eigenvectors, specifically referencing gnu-octave's eig function.
  • Another participant proposes using singular value decomposition (SVD) as an alternative method, noting its complexity but widespread availability in programming libraries.
  • A later reply discusses the relationship between SVD and QR methods, suggesting that inverse iteration could be a straightforward approach to find eigenvectors when eigenvalues are known.
  • Another participant emphasizes that while SVD is commonly recommended, methods like Lanczos iteration are also effective for finding eigenpairs of large matrices.

Areas of Agreement / Disagreement

Participants present multiple competing views on methods for finding eigenvectors, with no consensus on a single approach. Various techniques are discussed, each with its own merits and complexities.

Contextual Notes

Some methods mentioned depend on specific programming environments or libraries, and there may be limitations regarding the types of matrices or the conditions under which these methods are effective.

hkBattousai
My aim is to numerically calculate the eigenvalues and eigenvectors of a square matrix A.

I managed to find the eigenvalues by using the QR algorithm, so now I can find all of the eigenvalues of any given square matrix. But, for the next step, how do I find the corresponding eigenvectors? Is there any numerical method which calculates the eigenvectors for given eigenvalues?

Please guide me.
 
You write out the eigenvalue equation and find the vector that satisfies it for each value.

If A v_i = a_i v_i for the i-th eigenvector v_i, then solve (A - a_i I) v_i = 0 in the usual way.
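
(Not part of the original posts: a minimal sketch of this null-space approach in C++, assuming the Eigen library is available; the matrix and the eigenvalue are made-up example values.)

Code:
// Given a known eigenvalue lambda of a square matrix A, the corresponding
// eigenvector spans the null space of (A - lambda*I).
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Matrix2d A;
    A << 2.0, 1.0,
         1.0, 2.0;                  // eigenvalues are 1 and 3

    double lambda = 3.0;            // a known eigenvalue

    // Full-pivoting LU exposes the null space of (A - lambda*I).
    Eigen::FullPivLU<Eigen::Matrix2d> lu(A - lambda * Eigen::Matrix2d::Identity());
    lu.setThreshold(1e-10);         // treat rounding-level pivots as zero
    Eigen::MatrixXd v = lu.kernel();    // columns form a basis of the null space

    std::cout << "Eigenvector for lambda = " << lambda << ":\n"
              << v.col(0).normalized() << std::endl;
    return 0;
}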
 
Simon Bridge said:
You write out the eigenvalue equation and find the vector that satisfies it for each value.

If A v_i = a_i v_i for the i-th eigenvector v_i, then solve (A - a_i I) v_i = 0 in the usual way.

Your idea was very useful, but I found an alternative solution (page 18).

The algorithm:
S_0 = Q_0
Factorize: A_n = Q_n R_n
A_{n+1} = R_n Q_n
S_{n+1} = S_n Q_{n+1}

As the algorithm converges, A_n becomes a diagonal matrix whose diagonal elements give the eigenvalues, and the column vectors of S_n give the corresponding eigenvectors.

This is an implementation example:
Code:
template <class T>
void Matrix<T>::GetEigens(std::vector<T> & EigenValues, Matrix<T> & EigenVectors) throw(MatrixNotSquare)
{
	// Initializations
	Matrix<T> A = *this, Q, R;
	if (!A.IsSquare()) throw(MatrixNotSquare(L"The matrix must be a square."));
	EigenValues.clear();
	EigenVectors = Matrix<T>(m_unRowSize, m_unColSize);

	// First step: A_1 = R_0 * Q_0 and S_0 = Q_0
	A.QRDecomposition(Q, R);
	A = R * Q;
	EigenVectors = Q;

	// Iterate A_{n+1} = R_n * Q_n while accumulating S_{n+1} = S_n * Q_{n+1};
	// A converges to a diagonal matrix and the columns of the accumulated
	// product converge to the eigenvectors.
	for (uint64_t i=0; i<ITERATIONS; i++)
	{
		A.QRDecomposition(Q, R);
		A = R * Q;
		EigenVectors *= Q;
		if (A.IsDiagonal()) break;	// stop early once A is (numerically) diagonal
	}

	// The diagonal of the converged A holds the eigenvalues
	for (uint64_t i=0; i<A.GetRowSize(); i++)
	{
		EigenValues.push_back(A(i, i));
	}
}

This code is running successfully and giving correct results. A sample output is attached.
 

Attachments

  • Code Output.png (1.6 KB)
Yep - the algorithm does what I described.
When you asked for a numerical method, you failed to specify your constraints.

Most programming math libraries have a defined function to find the eigenvalues and eigenvectors of a matrix.

For gnu-octave, there is a built-in function:

Code:
Loadable Function: [V, LAMBDA] = eig (A)
     The eigenvalues (and eigenvectors) of a matrix are computed in a
     several step process which begins with a Hessenberg decomposition,
     followed by a Schur decomposition, from which the eigenvalues are
     apparent.  The eigenvectors, when desired, are computed by further
     manipulations of the Schur decomposition.

     The eigenvalues returned by `eig' are not ordered.

I used to use this to solve the Schrödinger equation in 1D.
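
(Not part of the original posts: the same library route in C++, as a minimal sketch assuming the Eigen library; EigenSolver plays roughly the role that eig plays in Octave.)

Code:
// Let a library eigensolver do both jobs at once: eigenvalues and eigenvectors.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 2.0, 1.0,
         1.0, 2.0;

    // EigenSolver handles general (possibly non-symmetric) real matrices,
    // so the results are returned as complex numbers.
    Eigen::EigenSolver<Eigen::MatrixXd> es(A);

    std::cout << "Eigenvalues:\n" << es.eigenvalues() << "\n"
              << "Eigenvectors (as columns):\n" << es.eigenvectors() << std::endl;
    return 0;
}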
 
Another approach is to use singular value decomposition. It's a beast to program, but because it is so very handy (so very, very, very handy), someone has inevitably done it for you already. Pick a language / tool and you will almost certainly find an SVD implementation -- even if you use a language such as Visual Basic or Cobol that is hardly ever used for scientific programming.
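
(Not part of the original posts: one way the SVD route connects to this problem is that, for a real symmetric matrix, the singular vectors coincide with the eigenvectors up to sign; a rough sketch under that assumption, using the Eigen library.)

Code:
// For symmetric A, A = Q*Lambda*Q^T and the SVD is A = U*Sigma*V^T with
// U = Q*sign(Lambda) and Sigma = |Lambda|, so the columns of U are eigenvectors.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 2.0, 1.0,
         1.0, 2.0;                  // symmetric; eigenvalues 1 and 3

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullU | Eigen::ComputeFullV);

    std::cout << "Singular values (|eigenvalues| for symmetric A):\n"
              << svd.singularValues() << "\n"
              << "Left singular vectors (eigenvectors of A, up to sign):\n"
              << svd.matrixU() << std::endl;
    return 0;
}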
 
D H said:
Another approach is to use singular value decomposition.

In practice that may be a slightly recursive answer, because a popular way to calculate the singular values and vectors is actually to use QR (adapted to solve that specific problem, of course).

(But D.H. is right that the easy answer to most "how to" questions in numerical linear algebra starts "First find the SVD.")

If you know an eigenvalue, the simplest way to find the vector is to use inverse iteration (a.k.a. the inverse power method with shifts), which will converge in one iteration because you already know exactly what shift to use.

On the other hand, for finding a few eigenpairs of a large matrix the most popular methods iterate to find the eigenvectors, and the eigenvalues are then found from the Rayleigh quotient x^T A x / x^T x. But a "go-to" method like Lanczos iteration is also a beast to program so that it works reliably in practice, even though the math looks deceptively simple.
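
(Not part of the original posts: a minimal sketch of inverse iteration with a known eigenvalue, assuming the Eigen library; the matrix and the tiny shift offset are made-up illustration values.)

Code:
// Inverse iteration: with the shift mu set to a known eigenvalue, solving
// (A - mu*I) y = x strongly amplifies the component of x along the wanted
// eigenvector, so the iteration converges almost immediately.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 2.0, 1.0,
         1.0, 2.0;                          // eigenvalues 1 and 3

    double mu = 3.0 + 1e-8;                 // known eigenvalue, perturbed so A - mu*I stays invertible
    Eigen::VectorXd x = Eigen::VectorXd::Random(A.rows());

    // Factor once, then reuse the factorization in every iteration.
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(
        A - mu * Eigen::MatrixXd::Identity(A.rows(), A.cols()));

    for (int k = 0; k < 3; ++k)             // one or two iterations usually suffice
    {
        Eigen::VectorXd y = lu.solve(x);    // solve (A - mu*I) y = x
        x = y.normalized();                 // renormalize to avoid overflow
    }

    std::cout << "Eigenvector for the eigenvalue near " << mu << ":\n" << x << std::endl;
    return 0;
}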
 
For example: http://www.mathworks.com/help/techdoc/ref/svd-singular-value-decomposition.html .

There are libraries for all major programming languages and math scripting environments which provide all these methods.

In the QR method, the eigenvectors are obtained as the product of the orthogonal transformations from each iteration, which is what the Olver paper (the example code in post #3) does.
 
