# Finding the eigenvectors of a matrix A

1. Nov 27, 2015

### TheSodesa

1. The problem statement, all variables and given/known data
$$A = \begin{bmatrix} 2 & 1 & 0\\ 0& -2 & 1\\ 0 & 0 & 1 \end{bmatrix}$$

2. Relevant equations

3. The attempt at a solution

The spectrum of A is $\sigma (A) = \{ \lambda _1, \lambda _2, \lambda _3 \} = \{ 2, -2, 1 \}$

I was able to calculate the vectors $v_1$ and $v_3$ correctly, matching the ones on the following page:
http://www.wolframalpha.com/input/?i=eigenvectors+of+{{2,1,0},{0,-2,1},{0,0,1}}

However, $v_2$ is giving me a headache. Using $\lambda _1 = 2$ to form $A - \lambda _1 I_3$ gives me the matrix

$$\begin{bmatrix} \stackrel{a}{0} & \stackrel{b}{1} & \stackrel{c}{0}\\ 0 & -4 & 1\\ 0 & 0 & -1 \end{bmatrix} \stackrel{rref}{=} \begin{bmatrix} \stackrel{a}{0} & \stackrel{b}{1} & \stackrel{c}{0}\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{bmatrix}$$

In my head this would produce a zero eigenvector since

$$\begin{cases} b = 0 \\ c = 0 \\ (a = ?) \end{cases}$$

This is of course nonsense, since an eigenvector can't be the zero vector. I'm probably misinterpreting the row-reduced matrix, but what exactly is it that I'm not understanding? Does it have something to do with the fact that the coefficient of $a$ is zero in every row?
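The row reduction above can be checked symbolically. A minimal sketch using SymPy (an assumed tool, not part of the original post) that row-reduces $A - 2I_3$ and computes its null space:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0], [0, -2, 1], [0, 0, 1]])
M = A - 2 * sp.eye(3)  # A - lambda*I_3 for lambda_1 = 2

# The rref has pivots only in the b and c columns, so a is a free variable.
print(M.rref()[0])

# The null space is spanned by (1, 0, 0): a is free, b = c = 0.
print(M.nullspace())
```

The free variable shows up as the null-space basis vector $(1, 0, 0)$: the column of $a$ contains no pivot, so $a$ is unconstrained rather than zero.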

2. Nov 27, 2015

### TheSodesa

To maybe answer my own question, (someone correct me if I'm wrong):

If each row in the above matrix represents an equation, I could theoretically set $a = s, s \in \mathbb{R}$. Since we're solving for the null space and the coefficient of $a$ is zero in every row, $a$ never actually appears in any of the equations, so it can take any value without affecting whether the system holds. This transforms the system of equations into:

$$\begin{cases} a = s\\ b=0\\ c=0 \end{cases}$$

Is this the solution?

3. Nov 27, 2015

### jk22

a can then be any value; you could choose 1 to normalize.
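As a quick numerical check (a NumPy sketch, with $a = 1$ as the assumed normalization choice), any nonzero multiple of $(1, 0, 0)$ works as an eigenvector for $\lambda = 2$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([1.0, 0.0, 0.0])  # the a = 1 choice for lambda = 2

# Any nonzero scalar multiple s*v still satisfies A(sv) = 2(sv).
for s in (1.0, -3.5, 7.0):
    assert np.allclose(A @ (s * v), 2 * (s * v))
```

Normalizing is purely a convention: the eigenvectors for a given eigenvalue form a subspace (minus the zero vector), so any nonzero representative will do.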

4. Nov 27, 2015

### TheSodesa

Got it. Thank you for confirming my suspicions.

5. Nov 30, 2015

### HallsofIvy

Staff Emeritus
I would prefer to use the basic definitions of "eigenvalues" and "eigenvectors". Saying that "2" is an eigenvalue means that there exists a non-zero vector (in fact an entire subspace of them), (x, y, z), such that
$$\begin{bmatrix} 2 & 1 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}x \\y \\ z \end{bmatrix}= \begin{bmatrix}2x+ y \\ -2y+ z \\ z \end{bmatrix}= 2\begin{bmatrix}x \\ y \\ z \end{bmatrix}$$
which gives the three equations 2x + y = 2x, -2y + z = 2y, z = 2z. The first equation is the same as y = 0 and the last gives z = 0. But there is no condition on x. Any vector of the form (x, 0, 0) = x(1, 0, 0) is an eigenvector corresponding to eigenvalue 2.

Similarly, the eigenvectors corresponding to eigenvalue -2 must satisfy 2x + y = -2x, -2y + z = -2y, z = -2z. We get z = 0 from the last equation; with z = 0, the second equation holds for any y, and the first says y = -4x. So (x, -4x, 0) = x(1, -4, 0) is an eigenvector corresponding to eigenvalue -2.

Finally, eigenvectors corresponding to eigenvalue 1 must satisfy 2x + y = x, -2y + z = y, z = z. The last equation is always true, the second says z = 3y, and the first y = -x, so that z = 3y = -3x. An eigenvector corresponding to eigenvalue 1 is (x, -x, -3x) = x(1, -1, -3).
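The three eigenpairs derived above can be verified numerically by checking $Av = \lambda v$ directly. A minimal NumPy sketch (the normalization x = 1 is a choice, not forced):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, 1.0]])

# Eigenpairs derived above, each with the free variable set to 1.
pairs = [(2.0, np.array([1.0, 0.0, 0.0])),
         (-2.0, np.array([1.0, -4.0, 0.0])),
         (1.0, np.array([1.0, -1.0, -3.0]))]

# Verify A v = lambda v for every pair.
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
```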