Finding a basis of eigenvectors

Catchfire

Homework Statement


\[
A = \left( \begin{array}{ccc}
2 & 0 & -1 \\
4 & 1 & -4 \\
2 & 0 & -1
\end{array} \right)
\]

Find the eigenvalues of A and corresponding eigenvectors that form a basis of R^3.

Homework Equations





The Attempt at a Solution



OK so I've found the characteristic polynomial: -λ(λ - 1)²
so I know my eigenvalues are 0,1,1
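
As a quick sanity check (a SymPy sketch, assuming SymPy is available; not part of the assignment), the computer agrees with this polynomial:

```python
from sympy import Matrix, symbols, factor

lam = symbols('lambda')
A = Matrix([[2, 0, -1],
            [4, 1, -4],
            [2, 0, -1]])

# SymPy's charpoly computes det(lam*I - A), which for a 3x3 matrix
# is -det(A - lam*I), so the sign is flipped relative to my calculation.
print(factor(A.charpoly(lam).as_expr()))  # lambda*(lambda - 1)**2
```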

Then to find the eigenvectors I sub the eigenvalues into the matrix A - λI

\[
A - 0I = \left( \begin{array}{ccc}
2 & 0 & -1 \\
4 & 1 & -4 \\
2 & 0 & -1
\end{array} \right)
\]

then I solve:
2x - z = 0
4x + y - 4z = 0

From the first equation, z = 2x; substituting into the second, y = 4z - 4x = 8x - 4x = 4x.

so, taking x = 1, my eigenvector is (1, 4, 2)
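
(A quick SymPy check, again just a sketch assuming SymPy is installed, confirms this:)

```python
from sympy import Matrix

A = Matrix([[2, 0, -1],
            [4, 1, -4],
            [2, 0, -1]])

# A*v should be the zero vector if v is an eigenvector for lambda = 0
print(A * Matrix([1, 4, 2]))  # Matrix([[0], [0], [0]])
# The lambda = 0 eigenspace is the null space of A; SymPy returns
# a scalar multiple of (1, 4, 2)
print(A.nullspace())          # [Matrix([[1/2], [2], [1]])]
```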

\[
A - 1I = \left( \begin{array}{ccc}
1 & 0 & -1 \\
4 & 0 & -4 \\
2 & 0 & -2
\end{array} \right)
\]

x = z

so the eigenvector is (1,0,1)

Now I'm out of new eigenvalues to substitute. How do I find the last eigenvector?
 
The rank of the second matrix is 1, so there are two independent eigenvectors to get from it.
 
So are you saying dim(A) = rank(A) + nullity(A)?
And since dim(A - I) = 3 and rank(A - I) = 1, nullity(A - I) = 2, implying I need two vectors from A - I.

Ahh, I see where I messed up: since y doesn't appear in the equations, I can write y = a for any a in R.
So really my eigenvectors are (1, a, 1) and (0, b, 0).

Is all of this correct?
 
Catchfire said:
So are you saying dim(A) = rank(A) + nullity(A)?
And since dim(A - I) = 3 and rank(A - I) = 1, nullity(A - I) = 2, implying I need two vectors from A - I.

Ahh, I see where I messed up: since y doesn't appear in the equations, I can write y = a for any a in R.
So really my eigenvectors are (1, a, 1) and (0, b, 0).

Is all of this correct?

What is preventing you from substituting these vectors into the equation to check if they work?
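
For instance, a couple of lines of SymPy do the substitution exactly (a sketch, treating a and b as symbols):

```python
from sympy import Matrix, symbols, eye

a, b = symbols('a b')
A = Matrix([[2, 0, -1],
            [4, 1, -4],
            [2, 0, -1]])

# (A - I)v = 0 must hold for an eigenvector v with eigenvalue 1
print((A - eye(3)) * Matrix([1, a, 1]))  # Matrix([[0], [0], [0]]) for any a
print((A - eye(3)) * Matrix([0, b, 0]))  # Matrix([[0], [0], [0]]) for any b
```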

RGV
 
Nothing, but I already know the answers to the problem. I just wanted to make sure my reasoning was correct and that I understood what theorem voko was citing.

Can you lend a hand with any of that? It would be much appreciated if you could.
 
I do not remember whether that theorem has any particular name, but it states that the dimension of the solution space of a homogeneous linear system equals the number of unknowns minus the rank of the coefficient matrix. And you got that correctly.
 
Catchfire said:
Nothing, but I already know the answers to the problem. I just wanted to make sure my reasoning was correct and that I understood what theorem voko was citing.

Can you lend a hand with any of that? It would be much appreciated if you could.

But YOU yourself already cited the result in your post!

In the present case the 3x3 matrix is so simple that one can spot immediately that it has rank 1. In more complex cases (say a 5x5 or a 10x10 matrix), finding the rank involves essentially the same algorithm that one uses to solve the system Ax=0 (Gaussian elimination/row-reduction), so you determine the dimension of the null space (and find a basis for it) at the same time that you find the rank.
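
For the matrix in this thread, for example, one SymPy row-reduction gives both at once (a sketch, assuming SymPy):

```python
from sympy import Matrix, eye

A = Matrix([[2, 0, -1],
            [4, 1, -4],
            [2, 0, -1]])
M = A - eye(3)  # the lambda = 1 case

# rref gives the row-reduced form and the pivot columns;
# the number of pivots is the rank
R, pivots = M.rref()
print(len(pivots))    # 1, so rank(M) = 1
# nullspace() reads the free variables off the same reduction:
# 3 unknowns - rank 1 = 2 basis vectors
print(M.nullspace())  # [Matrix([[0], [1], [0]]), Matrix([[1], [0], [1]])]
```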

RGV
 
Ray Vickson said:
In more complex cases (say a 5x5 or a 10x10 matrix), finding the rank involves essentially the same algorithm that one uses to solve the system Ax=0 (Gaussian elimination/row-reduction), so you determine the dimension of the null space (and find a basis for it) at the same time that you find the rank.

Well, in the case of eigenvectors one knows the rank even before that.
 
voko said:
Well, in the case of eigenvectors one knows the rank even before that.

No. The algebraic and geometric multiplicities may differ, in which case the dimension of the null space can be less than the algebraic multiplicity. That is why the Jordan form is sometimes non-diagonal.
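
The standard small example is the 2x2 Jordan block (a SymPy sketch, not from this problem):

```python
from sympy import Matrix

# Eigenvalue 1 has algebraic multiplicity 2, but the eigenspace
# (the null space of J - I) is only one-dimensional
J = Matrix([[1, 1],
            [0, 1]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis)
print(J.eigenvects())  # [(1, 2, [Matrix([[1], [0]])])] -- just one eigenvector
```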

RGV
 
Thanks for the responses, I appreciate the help.
 
