Finding the eigenvectors of a matrix A

AI Thread Summary
The discussion focuses on finding the eigenvectors of the matrix A, which has eigenvalues 2, -2, and 1. The user successfully calculated eigenvectors v_1 and v_3 but struggled with v_2, initially misinterpreting the row-reduced form of the matrix. They clarified that the zero row implies that a can take any real value, leading to a non-zero eigenvector. The final eigenvectors corresponding to each eigenvalue are expressed in terms of a parameter x, illustrating the existence of a subspace of eigenvectors for each eigenvalue. The conversation emphasizes the importance of understanding the definitions and properties of eigenvalues and eigenvectors in linear algebra.
TheSodesa

Homework Statement


Find the eigenvectors of the matrix

A = \begin{bmatrix}
2 & 1 & 0\\
0 & -2 & 1\\
0 & 0 & 1
\end{bmatrix}

Homework Equations

The Attempt at a Solution



The spectrum of A is \sigma(A) = \{\lambda_1, \lambda_2, \lambda_3\} = \{2, -2, 1\}.
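Since A is upper triangular, the characteristic polynomial can be read straight off the diagonal:

\det(A - \lambda I_3) =
\begin{vmatrix}
2-\lambda & 1 & 0\\
0 & -2-\lambda & 1\\
0 & 0 & 1-\lambda
\end{vmatrix}
= (2-\lambda)(-2-\lambda)(1-\lambda)

so the eigenvalues are exactly the diagonal entries.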

I was able to calculate the vectors v_1 and v_3 correctly; they match the vectors on the following page:
http://www.wolframalpha.com/input/?i=eigenvectors+of+{{2,1,0},{0,-2,1},{0,0,1}}
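As a quick numerical cross-check (assuming NumPy is available), something like this reproduces that page; note that np.linalg.eig normalizes the eigenvectors to unit length, so they come out as scaled versions of Wolfram's:

Code:
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, 1.0]])

# eig returns the eigenvalues and unit-norm eigenvectors (as columns);
# the ordering is not guaranteed, so match columns to eigenvalues by index
vals, vecs = np.linalg.eig(A)
print(vals)           # 2, -2, 1 (in some order)
print(vecs.round(3))  # columns proportional to (1,0,0), (1,-4,0), (1,-1,-3)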

However, v_2 is giving me a headache. Using \lambda_1 = 2 to solve (A - \lambda_1 I_3)v = 0, I get the coefficient matrix

\begin{bmatrix}
\stackrel{a}{0} & \stackrel{b}{1} & \stackrel{c}{0}\\
0 & -4 & 1\\
0 & 0 & -1
\end{bmatrix}
\stackrel{\text{rref}}{\longrightarrow}
\begin{bmatrix}
\stackrel{a}{0} & \stackrel{b}{1} & \stackrel{c}{0}\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix}

In my head this would produce a zero eigenvector since

\begin{cases} b = 0 \\ c = 0 \\ (a = ?) \end{cases}

This is of course nonsense. I'm probably interpreting the row-reduced matrix wrong, but what exactly am I not understanding? Does it have something to do with the fact that the coefficient of a is zero in every row?
 
To maybe answer my own question (someone correct me if I'm wrong):

If each row in the above matrix represents an equation, I could set a = s, s \in \mathbb{R}, since we're solving for the null space: the coefficient of a is zero in every row, so a never actually appears in any equation, and any value of a keeps the system satisfied. That transforms the system of equations into:

\begin{cases}
a = s\\
b = 0\\
c = 0
\end{cases}

Is this the solution? That would make the eigenvector (a, b, c) = s(1, 0, 0), s \neq 0.
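For what it's worth, SymPy seems to agree that the null space of A - 2I is spanned by (1, 0, 0):

Code:
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [0, -2, 1],
            [0, 0, 1]])

# basis of the null space of A - 2I, i.e. the eigenspace for lambda = 2
print((A - 2 * eye(3)).nullspace())  # [Matrix([[1], [0], [0]])]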
 
a can then be any value; you could choose 1 to normalize.
 
jk22 said:
a can then be any value; you could choose 1 to normalize.

Got it. Thank you for confirming my suspicions.
 
I would prefer to use the basic definitions of "eigenvalues" and "eigenvectors". Saying that "2" is an eigenvalue means that there exists a non-zero vector (in fact an entire subspace of them), (x, y, z), such that
\begin{bmatrix} 2 & 1 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2x + y \\ -2y + z \\ z \end{bmatrix} = 2\begin{bmatrix} x \\ y \\ z \end{bmatrix}
which gives the three equations 2x + y = 2x, -2y + z = 2y, and z = 2z. The first equation is the same as y = 0 and the last gives z = 0; with those, the second is satisfied automatically. There is no condition on x, so any vector of the form (x, 0, 0) = x(1, 0, 0) is an eigenvector corresponding to eigenvalue 2.

Similarly, the eigenvectors corresponding to eigenvalue -2 must satisfy 2x + y = -2x, -2y + z = -2y, and z = -2z. The last equation gives z = 0; with z = 0, the second holds for any y, and the first says y = -4x. So (x, -4x, 0) = x(1, -4, 0) is an eigenvector corresponding to eigenvalue -2.

Finally, eigenvectors corresponding to eigenvalue 1 must satisfy 2x + y = x, -2y + z = y, and z = z. The last equation is always true, the second says z = 3y, and the first y = -x, so that z = 3y = -3x. An eigenvector corresponding to eigenvalue 1 is (x, -x, -3x) = x(1, -1, -3).
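A quick NumPy check (taking x = 1 in each one-parameter family) confirms all three eigenpairs:

Code:
import numpy as np

A = np.array([[2, 1, 0],
              [0, -2, 1],
              [0, 0, 1]])

# the eigenpairs derived above, with x = 1 in each family
pairs = [(2, np.array([1, 0, 0])),
         (-2, np.array([1, -4, 0])),
         (1, np.array([1, -1, -3]))]

for lam, v in pairs:
    # A v should equal lambda * v for a genuine eigenpair
    print(lam, np.allclose(A @ v, lam * v))  # prints True for each pair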
 