# What Are Eigenvectors and Eigenvalues?

Two important concepts in Linear Algebra are eigenvectors and eigenvalues for a linear transformation that is represented by a square matrix. Besides being useful in mathematics for solving systems of linear differential equations, diagonalizing matrices, and other applications, eigenvectors and eigenvalues are used in quantum mechanics and molecular physics, chemistry, geology, and many other scientific disciplines.

Some definitions:

An *eigenvector* for an n x n matrix A is a nonzero vector ##\vec{x}## such that ##A\vec{x} = \lambda \vec{x}##, for some scalar ##\lambda##.

An *eigenvalue* for a given eigenvector is a scalar ##\lambda## (usually a real or complex number) for which ##A\vec{x} = \lambda \vec{x}##. The Greek lower-case letter ##\lambda## (“lambda”) is traditionally used to represent the scalar in this definition.

The first definition above is deceptively simple. A point that might be helpful is that an eigenvector ##\vec{x}## for a matrix A represents a favored direction, in the sense that the product ##A\vec{x}## is a scalar multiple of ##\vec{x}##. In other words, when the matrix A multiplies an eigenvector, the result is a vector along the same line: in the same direction if ##\lambda > 0##, in the opposite direction if ##\lambda < 0##, and the zero vector if ##\lambda = 0##. This is not generally true of vectors that aren't eigenvectors.
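
If you'd like to see this numerically, here is a minimal Python sketch (NumPy is my addition here, not something the rest of the article depends on) that multiplies the matrix from the examples below by an eigenvector and by a vector that isn't one:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [-1.0, 5.0]])  # the matrix used in Examples 1 and 2 below

v = np.array([1.0, 1.0])     # an eigenvector of A (found in Example 2)
w = np.array([1.0, 0.0])     # not an eigenvector of A

print(A @ v)  # [4. 4.]  -- a scalar multiple of v: same direction, scaled by 4
print(A @ w)  # [ 1. -1.] -- not a scalar multiple of w: the direction changed
```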

## Finding eigenvalues

Starting from the definition, we have

##A\vec{x} = \lambda \vec{x}##

##\Rightarrow A\vec{x} - \lambda \vec{x} = \vec{0}##

##\Rightarrow (A - \lambda I)\vec{x} = \vec{0}##

In the step above, I can’t subtract a scalar (##\lambda##) from a matrix, so I’m subtracting ##\lambda## times the identity matrix of appropriate size.

In the last equation above, one solution would be ##\vec{x} = \vec{0}##, but we don't allow this possibility, because an eigenvector has to be nonzero. We also can't conclude that ##A - \lambda I## is the zero matrix: because of the way matrix multiplication is defined, a matrix times a vector can result in the zero vector even if neither the matrix nor the vector is zero. Here's what we *can* conclude: if ##A - \lambda I## had an inverse, we could multiply both sides of ##(A - \lambda I)\vec{x} = \vec{0}## by that inverse to get ##\vec{x} = \vec{0}##, contradicting the requirement that ##\vec{x}## be nonzero. So ##A - \lambda I## must not be invertible, and a square matrix fails to be invertible exactly when its determinant is zero. All we can be sure of, then, is that the determinant of ##A - \lambda I## must be zero.

In other words, ##|A - \lambda I| = 0.##

To find the eigenvalues of a square matrix A, find the values of ##\lambda## for which ##|A - \lambda I| = 0.##

**Example 1:** Find the eigenvalues for the matrix ##A = \begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix}.##

In the work that follows, I’m assuming that you know how to evaluate a determinant.

**Solution**: ##|A - \lambda I| = \begin{vmatrix} 1 - \lambda & 3 \\ -1 & 5 - \lambda \end{vmatrix} = 0##

##\Rightarrow (1 - \lambda)(5 - \lambda) - (-3) = 0##

##\Rightarrow 5 - 6\lambda + \lambda^2 + 3 = 0##

##\Rightarrow \lambda^2 - 6\lambda + 8 = 0##

##\Rightarrow (\lambda - 4)(\lambda - 2) = 0##

##\Rightarrow \lambda = 4 \text{ or } \lambda = 2##

**∴ The eigenvalues are 4 and 2.**
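
As a numerical cross-check (my addition; nothing in the hand computation above requires it), NumPy can expand the characteristic polynomial and find its roots:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [-1.0, 5.0]])

# np.poly(A) returns the characteristic polynomial's coefficients,
# highest power first: lambda^2 - 6*lambda + 8, as derived above.
coeffs = np.poly(A)
print(coeffs)                # [ 1. -6.  8.]
print(np.roots(coeffs))      # [4. 2.]
print(np.linalg.eigvals(A))  # the same eigenvalues, computed directly
```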

## Finding eigenvectors

After you have found the eigenvalues, you are now ready to find the eigenvector (or eigenvectors) for each eigenvalue.

To find the eigenvector (or eigenvectors) associated with a given eigenvalue, solve for ##\vec{x}## in the matrix equation ##(A – \lambda I)\vec{x} = \vec{0}##. This action must be performed for each eigenvalue.
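
If you'd like to automate this step, here's a short NumPy sketch (again my addition; the article does this by hand with row reduction) that finds a null-space basis of ##A - \lambda I## using the singular value decomposition:

```python
import numpy as np

def eigenvectors_for(A, lam, tol=1e-10):
    """Return a basis for the null space of (A - lam*I), i.e.,
    the eigenvectors associated with the eigenvalue lam."""
    M = A - lam * np.eye(A.shape[0])
    # Rows of Vh whose singular values are numerically zero
    # span the null space of M.
    _, s, Vh = np.linalg.svd(M)
    return Vh[s < tol].T

A = np.array([[1.0, 3.0],
              [-1.0, 5.0]])
print(eigenvectors_for(A, 4.0))  # eigenvector(s) for lambda = 4; cf. Example 2
print(eigenvectors_for(A, 2.0))  # eigenvector(s) for lambda = 2; cf. Example 2
```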

**Example 2:** Find the eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix}.##

(This is the same matrix as in Example 1.)

#### Work for ##\lambda = 4##

##A - 4I = \begin{bmatrix} 1 - 4 & 3 \\ -1 & 5 - 4\end{bmatrix} = \begin{bmatrix} -3 & 3 \\ -1 & 1\end{bmatrix}##

To find an eigenvector associated with ##\lambda = 4##, we are going to solve the matrix equation ##(A - 4I)\vec{x} = \vec{0}## for ##\vec{x}##. Rather than write the matrix equation out as a system of equations, I'm going to take a shortcut, and use row reduction on the matrix ##A - 4I.## After row reduction, I'll write the system of equations that are represented by the reduced matrix.

In the work shown here, I'm assuming that you are able to solve a system of equations in matrix form, using row operations to get an equivalent matrix in reduced row-echelon form. Using row operations on the last matrix above, we find that it is equivalent to ##\begin{bmatrix} 1 & -1 \\ 0 & 0\end{bmatrix}.##

The last matrix represents this system of equations:

##x_1 = x_2##

##x_2 = x_2##

We can write this as ##\vec{x} = \begin{bmatrix} x_1 \\ x_2\end{bmatrix} = x_2\begin{bmatrix} 1 \\ 1\end{bmatrix}##, where ##x_2## is a parameter.

**An eigenvector for ##\lambda = 4## is ##\begin{bmatrix} 1 \\ 1\end{bmatrix}.##**

This is not the only possible eigenvector for ##\lambda = 4##; any scalar multiple (except the zero multiple) will also be an eigenvector.

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 4\begin{bmatrix} 1 \\ 1 \end{bmatrix}##, thus showing that ##A\vec{x} = \lambda \vec{x}## for our eigenvalue/eigenvector pair.

#### Work for ##\lambda = 2##

##A - 2I = \begin{bmatrix} 1 - 2 & 3 \\ -1 & 5 - 2\end{bmatrix} = \begin{bmatrix} -1 & 3 \\ -1 & 3\end{bmatrix}##

Using row operations to get the last matrix in reduced row-echelon form, we find that the last matrix above is equivalent to ##\begin{bmatrix} 1 & -3 \\ 0 & 0\end{bmatrix}.##

This matrix represents the following system of equations:

##x_1 = 3x_2##

##x_2 = x_2##

We can write this as ##\vec{x} = \begin{bmatrix} x_1 \\ x_2\end{bmatrix} = x_2\begin{bmatrix} 3 \\ 1\end{bmatrix}##, where ##x_2## is a parameter.

**An eigenvector for ##\lambda = 2## is ##\begin{bmatrix} 3 \\ 1\end{bmatrix}.##**

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix} \begin{bmatrix} 3 \\ 1 \end{bmatrix} = 2\begin{bmatrix} 3 \\ 1 \end{bmatrix}##.
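
Here is a quick machine check of both eigenpairs from this example (once more a NumPy sketch of my own, not part of the original work):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [-1.0, 5.0]])

# Verify that A @ v equals lam * v for each eigenvalue/eigenvector pair.
for lam, v in [(4.0, np.array([1.0, 1.0])),
               (2.0, np.array([3.0, 1.0]))]:
    print(lam, np.allclose(A @ v, lam * v))  # 4.0 True, then 2.0 True
```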

For the final example, we’ll look at a 3 x 3 matrix.

**Example 3:** Find the eigenvalues and eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3\end{bmatrix}.##

Because this example deals with a 3 x 3 matrix instead of the 2 x 2 matrices of the previous examples, the work is considerably longer. The solution I provide won't show the level of detail of the previous examples. I leave it to readers of this article to flesh out the details I have omitted.

**Solution**:

(Part A – Finding the eigenvalues)

Set ##|A - \lambda I|## to 0 and solve for ##\lambda##.

##|A - \lambda I| = 0##

##\Rightarrow \begin{vmatrix} 1 - \lambda & 0 & -4 \\ 0 & 5 - \lambda & 4 \\ -4 & 4 & 3 - \lambda \end{vmatrix} = 0##

##\Rightarrow -\lambda^3 + 9\lambda^2 + 9\lambda - 81 = 0##

##\Rightarrow \lambda^3 - 9\lambda^2 - 9\lambda + 81 = 0##

##\Rightarrow \lambda^2(\lambda - 9) - 9(\lambda - 9) = 0##

##\Rightarrow (\lambda - 9)(\lambda^2 - 9) = 0##

**∴ The eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.##**

I've skipped a lot of steps above, so you should convince yourself, by expanding the determinant and factoring the resulting third-degree polynomial, that the values shown are the correct ones.
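
If you'd rather let the computer confirm the factoring (my addition, not part of the original solution), the roots of the cubic agree with the eigenvalues NumPy computes directly:

```python
import numpy as np

# Roots of -lambda^3 + 9*lambda^2 + 9*lambda - 81 = 0,
# with coefficients listed highest power first.
print(np.roots([-1, 9, 9, -81]))  # 9, -3, and 3, in some order

A = np.array([[1.0, 0.0, -4.0],
              [0.0, 5.0,  4.0],
              [-4.0, 4.0,  3.0]])
print(np.linalg.eigvals(A))       # the same three eigenvalues
```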

(Part B – Finding the eigenvectors)

I’ll show an outline of the work for ##\lambda = 9##, but will just show the results for the other two eigenvalues, ##\lambda = 3## and ##\lambda = -3##.

#### Work for ##\lambda = 9##

If ##\lambda = 9##,

##\begin{bmatrix} 1 - \lambda & 0 & -4 \\ 0 & 5 - \lambda & 4 \\ -4 & 4 & 3 - \lambda \end{bmatrix} = \begin{bmatrix} 1 - 9 & 0 & -4 \\ 0 & 5 - 9 & 4 \\ -4 & 4 & 3 - 9\end{bmatrix} = \begin{bmatrix} -8 & 0 & -4 \\ 0 & -4 & 4 \\ -4 & 4 & -6\end{bmatrix}##

Dividing the rows of the last matrix on the right by ##-4##, ##-4##, and ##-2##, respectively, shows that it is equivalent to ##\begin{bmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 2 & -2 & 3\end{bmatrix}.##

Using row operations to put this matrix in reduced row-echelon form, we arrive at this fully reduced matrix:

##\begin{bmatrix} 1 & 0 & \frac 1 2\\ 0 & 1 & -1 \\ 0 & 0 & 0\end{bmatrix}##

This matrix represents the following system of equations:

##x_1 = -\frac 1 2 x_3##

##x_2 = x_3##

##x_3 = x_3##

We can write this system in vector form, as

##\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_3\begin{bmatrix} -\frac 1 2 \\ 1 \\ 1\end{bmatrix}##, where ##x_3## is a parameter.

**An eigenvector for ##\lambda = 9## is ##\begin{bmatrix} -\frac 1 2 \\ 1 \\ 1\end{bmatrix}.##**

Any nonzero multiple of this eigenvector is also an eigenvector, so we could just as well have chosen ##\begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix}## for the eigenvector.

As before, you should always check your work by verifying that ##\begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3\end{bmatrix} \begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix} = 9 \begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix}.##

#### Results for ##\lambda = 3## and ##\lambda = -3##

Using the same procedure as above, I find that an eigenvector for ##\lambda = 3## is ##\begin{bmatrix} -2 \\ -2\\ 1\end{bmatrix}##, and that an eigenvector for ##\lambda = -3## is ##\begin{bmatrix} 1 \\ -\frac 1 2\\ 1\end{bmatrix}.## If you wish to avoid fractions, it’s convenient to choose ##\begin{bmatrix} 2 \\ -1\\ 2\end{bmatrix}## for an eigenvector for ##\lambda = -3.##

#### Summary for Example 3

For the matrix of this example, the eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.## In the same order, a set of eigenvectors for these eigenvalues is ##\left\{\begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix}, \begin{bmatrix} -2 \\ -2\\ 1\end{bmatrix}, \begin{bmatrix} 2 \\ -1\\ 2\end{bmatrix}\right\}.##
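
As a final check (one more NumPy sketch of my own, not in the original article), np.linalg.eig recovers all three eigenpairs at once. It returns unit-length eigenvectors, so each column comes out as a scalar multiple of the corresponding vector above:

```python
import numpy as np

A = np.array([[1.0, 0.0, -4.0],
              [0.0, 5.0,  4.0],
              [-4.0, 4.0,  3.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # 9, 3, and -3, in some order

# Each column of vecs is a unit eigenvector. Rescaling by the entry
# of largest magnitude makes it easier to compare with the vectors above.
for i in range(len(vals)):
    v = vecs[:, i]
    print(vals[i], v / v[np.argmax(np.abs(v))])
```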

