Repeated Eigenvalues and Their Eigenvectors

blondii
I have a matrix and can't seem to get my head around finding all of its eigenvectors.

The matrix is A:

(1 0 0 0
1 0 0 1
0 1 0 0
0 0 1 0)

I got the eigenvalues as:
λ1 = 1, λ2 = λ3 = λ4 = 0

For λ1:

The eigenvector V1 is (0, 1, 1, 1).

For λ2 -> λ4:

The only eigenvector I could make out is V2 = (0, 0, 0, 0).

To calculate the remaining eigenvectors I solved for P using the formula (A - λI)P = K,
where K is an eigenvector of the matrix A associated with the eigenvalue (in this case V2). But substituting λ2 and V2 into the equation only leads back to the same equation, which I don't think is correct. Is there a better method I can follow, or is there something I am not doing correctly?

Thanks
 
One of your eigenvalues is 3-fold degenerate.
This means that the system of equations is not independent.
To get three distinct eigenvectors you need to add an additional constraint that they be orthogonal.

http://mathworld.wolfram.com/Eigenvalue.html
... scroll down to "degenerate".

Hmmm ... I'm getting:
Code:
octave:2> A
A =

   1   0   0   0
   1   0   0   1
   0   1   0   0
   0   0   1   0

octave:3> l=eig(A)
l =

  -0.50000 + 0.86603i
  -0.50000 - 0.86603i
   1.00000 + 0.00000i
   1.00000 + 0.00000i
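
The Octave printout above can be cross-checked without Octave. Here is a pure-Python sketch (the hand-rolled determinant avoids external libraries, and the 5-digit eigenvalues are copied straight from the output above): each reported λ should make det(A - λI) vanish, up to the rounding in the printout.

```python
# Cross-check of the Octave eigenvalues: each lambda reported above
# should make det(A - lambda*I) ~ 0 (up to 5-digit rounding).

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

A = [[1, 0, 0, 0],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

# Values as printed by Octave above (rounded to 5 digits)
octave_eigs = [complex(-0.5, 0.86603), complex(-0.5, -0.86603), 1.0, 1.0]

for lam in octave_eigs:
    AmlI = [[A[i][j] - (lam if i == j else 0) for j in range(4)] for i in range(4)]
    assert abs(det(AmlI)) < 1e-4   # ~0 within the printed precision
```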
 
Thanks Simon, I will have a look into that.
 
blondii said:
I have a matrix and can't seem to get my head around finding all of its eigenvectors.

The matrix is A:

(1 0 0 0
1 0 0 1
0 1 0 0
0 0 1 0)

I got the eigenvalues as:
λ1 = 1, λ2 = λ3 = λ4 = 0

For λ1:

The eigenvector V1 is (0, 1, 1, 1).

For λ2 -> λ4:

The only eigenvector I could make out is V2 = (0, 0, 0, 0).
No. The very definition of "eigenvalue" is: there exists a non-zero vector v such that Av = λv, so there MUST be a non-zero eigenvector. (Many people will not count the zero vector as an eigenvector. I prefer to, so that "the set of all eigenvectors corresponding to a given eigenvalue" will be a subspace. In any case, there must exist non-zero eigenvectors.)
Any eigenvector corresponding to eigenvalue 0 must satisfy
(1 0 0 0) (x)   (  x  )     (x)
(1 0 0 1) (y) = (x + u) = 0 (y)
(0 1 0 0) (z)   (  y  )     (z)
(0 0 1 0) (u)   (  z  )     (u)

which gives the four equations x = 0, x + u = 0, y = 0, z = 0. Yes, that has x = y = z = u = 0 as the only solution. And what that means is that there does NOT exist such a non-zero vector, so 0 is NOT an eigenvalue!

When I calculate the characteristic equation, I get (1 - λ)(1 - λ³) = 0, which does NOT have 0 as a root.
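
The polynomial above can be checked directly at the candidate values; a minimal Python sketch (p is just the characteristic polynomial quoted in the post):

```python
# The characteristic polynomial derived above: p(lambda) = (1 - lambda)(1 - lambda^3)
def p(lam):
    return (1 - lam) * (1 - lam ** 3)

assert p(1) == 0   # 1 IS a root (a double root, since 1 - lam^3 also vanishes there)
assert p(0) == 1   # p(0) = 1 != 0, so 0 is NOT an eigenvalue
assert abs(p((-1 + 3 ** 0.5 * 1j) / 2)) < 1e-9   # a complex cube root of 1 is also a root
```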

To calculate the remaining eigenvectors I solved for P using the formula (A - λI)P = K,
Those are NOT, strictly speaking, "eigenvectors". They are "generalized eigenvectors".

where K is an eigenvector of the matrix A associated with the eigenvalue (in this case V2). But substituting λ2 and V2 into the equation only leads back to the same equation, which I don't think is correct. Is there a better method I can follow, or is there something I am not doing correctly?

Thanks
0 is NOT an eigenvalue.
 
Blondii may not have realized that you can have complex roots ;)
 
HallsofIvy said:
When I calculate the characteristic equation, I get (1 - λ)(1 - λ³) = 0, which does NOT have 0 as a root.

Thanks HallsofIvy, I realize now where my mistake was. I was not calculating the determinant for the characteristic equation properly and always kept stopping after the first phase (I guess that's what you get for staying awake too long).

Thanks guys, your input was much appreciated.
 
So after calculating, would my eigenvalues be:

λ1 = 1
λ2 = 1
λ3 = (1/2)(-1 + √3 i)
λ4 = (1/2)(-1 - √3 i)

Is this correct now?
 
blondii said:
So after calculating, would my eigenvalues be:

λ1 = 1
λ2 = 1
λ3 = (1/2)(-1 + √3 i)
λ4 = (1/2)(-1 - √3 i)

Is this correct now?

With these new corrections there will be a repeated root of 1, which will also generate linearly independent eigenvectors, one of which is the zero vector [0, 0, 0, 0].
 
blondii said:
With these new corrections there will be a repeated root of 1, which will also generate linearly independent eigenvectors, one of which is the zero vector [0, 0, 0, 0].
No, the zero vector is NOT "independent" of any vectors! You should NEVER find the 0 vector as an eigenvector. Stop that!

With eigenvalue 1, doing what I did before gives the four equations:
x= x
x+ u= y
z= y
u= z

The last two equations tell us y = z = u. Replacing u by y in the second equation, x + y = y, we get x = 0. That is, <x, y, z, u> = <0, y, y, y> = y<0, 1, 1, 1>. So <0, 1, 1, 1> is an eigenvector and, because this spans only a 1-dimensional space, you need to find a "generalized eigenvector" by solving (A - I)v = <0, 1, 1, 1>.
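
The back-substitution above can be verified mechanically. A small Python sketch: it confirms that k = <0, 1, 1, 1> is an eigenvector of A for λ = 1, and checks one particular solution of (A - I)v = k (v = <3, 2, 1, 0> comes from back-substituting the equations with the free parameter set to 0; it is shown here only to illustrate the check, and any v + t<0, 1, 1, 1> works as well).

```python
# Verify the eigenvector for lambda = 1, then one generalized eigenvector.

A = [[1, 0, 0, 0],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

def matvec(M, x):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

k = [0, 1, 1, 1]
assert matvec(A, k) == k          # A k = 1 * k, so k is an eigenvector for lambda = 1

# (A - I) v = k with v = <3, 2, 1, 0> (free parameter chosen as 0)
AmI = [[A[i][j] - (1 if i == j else 0) for j in range(4)] for i in range(4)]
v = [3, 2, 1, 0]
assert matvec(AmI, v) == k        # v is a generalized eigenvector
```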
 
HallsofIvy said:
you need to find a "generalized eigenvector" by solving (A - I)v = <0, 1, 1, 1>.

I understand the zero vector can't be used, but after solving the equation earlier, that seems to be the only possible solution, because all the other vectors that come out of solving that equation still won't be linearly independent of <0, 1, 1, 1>.
I tried inputting the matrix into Mathematica to calculate its eigenvalues and eigenvectors, and it still returned the zero eigenvector along with the two complex and the real eigenvectors.

I'm not really sure now how to go about this. If the zero eigenvector can't be used, and even trying to calculate the generalized eigenvector won't return a favorable outcome, is there some higher math that needs to be incorporated?
 
blondii said:
The only eigenvector I could make out is V2 = (0, 0, 0, 0).

Ok, after considering the equation again I came up with the eigenvector <2, 1, 1, 1>.
Would this be acceptable, since it seems to be linearly independent and still satisfies the equation (A - I)v = <0, 1, 1, 1>?
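
Whether a candidate satisfies the equation can be tested by direct multiplication; a short Python sketch (the matrix and the candidate vector are taken from the thread above, and the code simply computes (A - I) times the candidate so it can be compared with <0, 1, 1, 1>):

```python
# Test a candidate generalized eigenvector by computing (A - I) * candidate.

A = [[1, 0, 0, 0],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]
AmI = [[A[i][j] - (1 if i == j else 0) for j in range(4)] for i in range(4)]

def matvec(M, x):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

candidate = [2, 1, 1, 1]
result = matvec(AmI, candidate)
print(result)   # -> [0, 2, 0, 0], which is not <0, 1, 1, 1>
```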
 