What Is the Origin of the Unexpected Vector in the Eigenbasis Calculation?

  • Thread starter: MikeDietrich
  • Tags: Matrix
Summary
The discussion centers on finding the eigenvalues and an eigenbasis of the matrix A = [1 0 0; -5 0 2; 0 0 1]. After computing the characteristic equation and identifying the eigenvalues, the user initially concludes that there is no eigenbasis because there are fewer distinct eigenvalues than the size of the matrix. Further analysis shows that the eigenspace for the eigenvalue 1 is two-dimensional, yielding two independent eigenvectors. The confusion concerns the origin of the eigenvector [0, 2, 1], which is confirmed to belong to the eigenvalue 1, highlighting the distinction between algebraic and geometric multiplicity. The thread closes with a discussion of when a matrix fails to be diagonalizable and must instead be put into Jordan normal form.
MikeDietrich

Homework Statement


Find all real eigenvalues, the basis for each eigenspace, and an eigenbasis.

A = [ 1 0 0 ## -5 0 2 ## 0 0 1] Note: ## starts a new row



Homework Equations



det(A - λI) = 0
E_λ = N(λI - A), where λ = eigenvector


The Attempt at a Solution



So, after calculating det(A - λI) = 0 I determined λ_1 = 1, λ_2 = 1, and λ_3 = 0, where λ = eigenvector.

I then calculated E_1 from [1 1/5 -2/5 ## 0 0 0 ## 0 0 0][v_1 ## v_2 ## v_3] = [0 ## 0 ## 0]; therefore E_1 = span [1 ## -5 ## 0]

E_0 = [1 0 0 ## 0 0 1 ## 0 0 0][ v_1 ## v_2 ## v_3] = [0 ## 0 ## 0] therefore, E_0 = span [0 ## 1 ## 0]

At this point I assumed there is no eigenbasis since there are fewer distinct eigenvalues than 3 (3 x 3 matrix). However, when I plug the matrix into an online eigenvalue calculator I get:
Eigenbasis: [0 ## 1 ## 0], [1 ## -5 ## 0], [0 ## 2 ## 1].

Where did the [0 ## 2 ## 1] come from?

Thanks!
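For anyone who wants to reproduce the calculator's answer, here is a small sympy sketch (assuming sympy is installed); eigenvects() lists each eigenvalue with its algebraic multiplicity and a basis of its eigenspace. Sympy may pick a differently scaled basis for the eigenvalue-1 eigenspace than the online calculator does, but both bases span the same plane, and [0, 2, 1] lies in it.

from sympy import Matrix

A = Matrix([[ 1, 0, 0],
            [-5, 0, 2],
            [ 0, 0, 1]])

# eigenvects() returns triples (eigenvalue, algebraic multiplicity, eigenspace basis)
for eigenvalue, alg_mult, basis in A.eigenvects():
    print(eigenvalue, alg_mult, [list(v) for v in basis])

# Eigenvalue 0 has a one-dimensional eigenspace (spanned by [0, 1, 0]);
# eigenvalue 1 has a two-dimensional eigenspace, which is why a third
# basis vector such as [0, 2, 1] shows up in an eigenbasis.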
 
MikeDietrich said:
So, after calculating det(A - λI) = 0 I determined λ_1 = 1, λ_2 = 1, and λ_3 = 0, where λ = eigenvector.
No, these are eigenvalues.
MikeDietrich said:
At this point I assumed there is no eigenbasis since there are fewer distinct eigenvalues than 3 (3 x 3 matrix). However, when I plug the matrix into an online eigenvalue calculator I get:
Eigenbasis: [0 ## 1 ## 0], [1 ## -5 ## 0], [0 ## 2 ## 1].

Where did the [0 ## 2 ## 1] come from?
It must be one of the eigenvectors associated with the eigenvalue 1. The eigenvector <0, 1, 0> is associated with the eigenvalue 0.

The eigenspace for the eigenvalue 1 is two-dimensional, so a basis for it consists of two eigenvectors. The ones I get are <-1, 5, 0> and <2, 0, 5>. Other pairs are possible.
 
Thanks Mark... you are correct... I did mean eigenvalue (not eigenvector). I can now see how to get <2, 0, 5>, but how did you know the eigenspace was two-dimensional for the eigenvalue 1? For example, the matrix A = [1 1 0 ## 0 1 1 ## 0 0 1] has an eigenvalue equal to 1, and there is only one independent eigenvector (at least the only one I have is <1, 0, 0>).
 
Wait. I think I have it. It is 2D because that is the multiplicity of the eigenvalue, correct?
 
Look at the matrix A - 1I, which is
[0 0 0]
[-5 -1 2]
[0 0 0]

(We're trying to solve (A - 1I)x = 0, for x.)

If you swap the 1st and 2nd rows and row reduce, you get
[1 1/5 -2/5]
[0 0 0]
[0 0 0]

The first row says
x1 + (1/5)x2 - (2/5)x3 = 0
or equivalently,
x1 = (-1/5)x2 + (2/5)x3
x2 and x3 are arbitrary, so I can write the system as

x1 = (-1/5)x2 + (2/5)x3
x2 =     (1)x2 +    (0)x3
x3 =     (0)x2 +    (1)x3

Since there are two free variables, x2 and x3, the dimension of the eigenspace for \lambda = 1 is 2. The vectors <-1/5, 1, 0> and <2/5, 0, 1> form a basis for this space, and you might notice that I picked these coordinates off the equations just above.

Any multiples of these vectors also work, so the basis could also be the vectors <-1, 5, 0> and <2, 0, 5>.

You can check that these vectors work by showing that A*u1 = 1*u1 and A*u2 = 1*u2, where u1 and u2 are the vectors above (either pair).
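A quick numerical version of that check, sketched with numpy (assuming numpy is available):

import numpy as np

A = np.array([[ 1, 0, 0],
              [-5, 0, 2],
              [ 0, 0, 1]])

u1 = np.array([-1, 5, 0])   # scalar multiple of <-1/5, 1, 0>
u2 = np.array([ 2, 0, 5])   # scalar multiple of <2/5, 0, 1>

# For eigenvalue 1, A*u should equal 1*u, i.e. u itself.
print(np.array_equal(A @ u1, u1))   # True
print(np.array_equal(A @ u2, u2))   # True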
 
There is a difference between "algebraic multiplicity" and "geometric multiplicity". The first, "algebraic multiplicity", is the multiplicity of the eigenvalue as a root of the characteristic equation. The second, "geometric multiplicity", is the dimension of the subspace of eigenvectors corresponding to a specific eigenvalue. They are NOT necessarily the same: if an eigenvalue has algebraic multiplicity n, then it may have geometric multiplicity any integer from 1 to n.
For example, the matrices
\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}
\begin{bmatrix}1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}
and
\begin{bmatrix}1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{bmatrix}
all have (\lambda - 1)^3 = 0 as their characteristic equation, so each has eigenvalue 1 with algebraic multiplicity 3. The first has all of \mathbb{R}^3 as its eigenspace, the second has a two-dimensional eigenspace, and the third a one-dimensional eigenspace.
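To make the distinction concrete, here is a short sympy sketch (assuming sympy): all three matrices share the characteristic polynomial (\lambda - 1)^3, while the eigenspace for the eigenvalue 1, i.e. the nullspace of M - I, has dimension 3, 2, and 1 respectively.

from sympy import Matrix, symbols

lam = symbols('lambda')

matrices = [
    Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),   # expect geometric multiplicity 3
    Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]),   # expect geometric multiplicity 2
    Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]]),   # expect geometric multiplicity 1
]

for M in matrices:
    char_poly = M.charpoly(lam).as_expr().factor()     # (lambda - 1)**3 in each case
    geom_mult = len((M - Matrix.eye(3)).nullspace())   # dimension of the eigenspace for 1
    print(char_poly, geom_mult)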

Personally, I prefer to find eigenvectors directly from the definition: Av = \lambda v.

With
A = \begin{bmatrix} 1 & 0 & 0 \\ -5 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix}
we must have
Av = \begin{bmatrix} 1 & 0 & 0 \\ -5 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix} = \begin{bmatrix}x \\ -5x + 2z \\ z\end{bmatrix} = \begin{bmatrix}x \\ y \\ z\end{bmatrix}

That is equivalent to the three equations x = x, -5x + 2z = y, and z = z. The first and last tell us nothing, of course, so it reduces to -5x + 2z = y. Then <x, y, z> = <x, -5x + 2z, z> = <x, -5x, 0> + <0, 2z, z> = x<1, -5, 0> + z<0, 2, 1>.
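That conclusion can also be checked symbolically; here is a sympy sketch (assuming sympy) confirming that every vector of the form x<1, -5, 0> + z<0, 2, 1> satisfies Av = v:

from sympy import Matrix, symbols, simplify

x, z = symbols('x z')

A = Matrix([[ 1, 0, 0],
            [-5, 0, 2],
            [ 0, 0, 1]])

# A general element of the claimed eigenspace for eigenvalue 1
v = x * Matrix([1, -5, 0]) + z * Matrix([0, 2, 1])

# A*v - v should be the zero vector for every choice of x and z
print((A * v - v).applyfunc(simplify))   # Matrix([[0], [0], [0]])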

Yes, the eigenspace corresponding to eigenvalue 1 is spanned by <1, -5, 0> and <0, 2, 1> and so has dimension 2. Either there is another eigenvalue or A cannot be diagonalized. (Here, 0 is in fact a second eigenvalue, with eigenvector <0, 1, 0>, so this particular A is diagonalizable.) If 1 were the only eigenvalue, A could instead only be put into the "Jordan normal form"
\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{bmatrix}

To do that you need to use a matrix, P, whose first two columns are the eigenvectors found and whose third column is a "generalized eigenvector", a vector v, such that (A- I)v is an eigenvector.
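For completeness, a sympy sketch of both situations (assuming sympy): for the matrix A in this thread the Jordan form comes out diagonal, because the eigenvalue 0 contributes the third independent eigenvector, while a deficient matrix like the [1 1 0 ## 0 1 1 ## 0 0 1] example above genuinely needs a Jordan block and a generalized eigenvector in P.

from sympy import Matrix

A = Matrix([[ 1, 0, 0],
            [-5, 0, 2],
            [ 0, 0, 1]])

P, J = A.jordan_form()   # A == P * J * P**-1
print(J)                 # diagonal, with eigenvalues 0, 1, 1 on the diagonal

# A matrix whose eigenvalue-1 eigenspace is only one-dimensional:
B = Matrix([[1, 1, 0],
            [0, 1, 1],
            [0, 0, 1]])

P2, J2 = B.jordan_form()
print(J2)                # one 3x3 Jordan block: 1s on the diagonal and the superdiagonal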
 