# How do I numerically find eigenvectors for given eigenvalues?

1. Dec 20, 2011

### hkBattousai

My aim was to numerically calculate eigenvalues and eigenvectors for a square A matrix.

I managed to find the eigenvalues by using QR algorithm. Now, I can find all of the eigenvalues for any given square matrix. But, for the next step, how do I find the corresponding eigenvectors? Is there any numerical method which calculates the eigenvectors for given eigenvalues?

2. Dec 20, 2011

### Simon Bridge

You write out the eigenvalue equation and find the vector that satisfies it for each value.

If $Av_i = a_iv_i$ for the ith eigenvector, then solve $(A-Ia_i)v_i = 0$ normally.

Last edited: Dec 20, 2011
3. Dec 21, 2011

### hkBattousai

Your idea was very useful, but I found an alternative solution (page 18).

The algorithm (starting from $A_0 = A$ and $S_0 = Q_0$):
$\text{Factorize: } \, A_n=Q_nR_n$
$A_{n+1}=R_nQ_n$
$S_{n+1}=S_nQ_n$

As the algorithm converges, $A_n$ becomes a diagonal matrix whose diagonal elements are the eigenvalues (for a general matrix it only converges to upper triangular form; it is diagonal in the symmetric case), and the column vectors of $S_n$ give the corresponding eigenvectors.

This is an implementation example:
Code (Text):
template <class T>
void Matrix<T>::GetEigens(std::vector<T> & EigenValues, Matrix<T> & EigenVectors) throw(MatrixNotSquare)
{
    // Initializations
    Matrix<T> A = *this, Q, R;
    if (!A.IsSquare()) throw(MatrixNotSquare(L"The matrix must be square."));
    EigenValues.clear();
    EigenVectors = Matrix<T>(m_unRowSize, m_unColSize);

    // Find eigenvalues and eigenvectors
    A.QRDecomposition(Q, R);        // A_0 = Q_0 R_0
    A = R * Q;                      // A_1 = R_0 Q_0
    EigenVectors = Q;               // S_0 = Q_0
    for (uint64_t i = 0; i < ITERATIONS; i++)
    {
        A.QRDecomposition(Q, R);    // A_n = Q_n R_n
        A = R * Q;                  // A_{n+1} = R_n Q_n
        EigenVectors *= Q;          // S_{n+1} = S_n Q_n
        if (A.IsDiagonal()) break;
    }
    for (uint64_t i = 0; i < A.GetRowSize(); i++)
    {
        EigenValues.push_back(A(i, i));
    }
}
This code is running successfully and giving correct results. A sample output is attached.

#### Attached Files:

• Code Output.png
4. Dec 21, 2011

### Simon Bridge

Yep - the algorithm does what I described.
When you asked for a numerical method, you failed to specify your constraints.

Most programming math libraries have a defined function to find the eigenvalues and eigenvectors of a matrix.

For gnu-octave, there is a built-in function:

Code (Text):
Loadable Function: [V, LAMBDA] = eig (A)
The eigenvalues (and eigenvectors) of a matrix are computed in a
several step process which begins with a Hessenberg decomposition,
followed by a Schur decomposition, from which the eigenvalues are
apparent.  The eigenvectors, when desired, are computed by further
manipulations of the Schur decomposition.

The eigenvalues returned by `eig' are not ordered.

I used to use this to solve the Schrödinger equation in 1D.

5. Dec 21, 2011

### D H

Staff Emeritus
Another approach is to use singular value decomposition. It's a beast to program, but because it is so very handy (so very, very, very handy), someone has inevitably done it for you already. Pick a language / tool and you will almost certainly find an SVD implementation -- even if you use a language such as Visual Basic or COBOL that is hardly ever used for scientific programming.

6. Dec 21, 2011

### AlephZero

In practice that may be a slightly recursive answer, because a popular way to calculate the singular values and vectors actually uses QR (adapted to solve that specific problem, of course).

(But D.H. is right that the easy answer to most "how to" questions in numerical linear algebra starts "First find the SVD.")

If you know an eigenvalue, the simplest way to find the vector is to use inverse iteration (a.k.a. the inverse power method with shifts), which will converge in one iteration because you already know exactly what shift to use.

On the other hand, for finding a few eigenpairs of a large matrix the most popular methods iterate to find the eigenvectors, and the eigenvalues are then found from the Rayleigh quotient $x^T A x / x^T x$. But a "go-to" method like Lanczos iteration is also a beast to program so that it works reliably in practice, even though the math looks deceptively simple.

7. Dec 21, 2011

### Simon Bridge

eg. http://www.mathworks.com/help/techdoc/ref/svd-singular-value-decomposition.html.

There are libs for major programming languages and math scripts which provide all these methods.

In the QR method, the eigenvectors are accumulated as the product of the orthogonal transformations from the iterations, which is what the Olver paper (the basis of the example code in post #3) does.

Last edited by a moderator: May 5, 2017