MHB Help to understand this eigenvector case

  • Thread starter: ognik
  • Tags: Eigenvector
Did a practice problem finding eigenvalues and eigenvectors, and ended with this row-reduced matrix: $ \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$. Solving for the eigenvectors, I could only get $x_1 = x_2 = x_3 = 0$, in which case the eigenspace would contain only the zero vector.

Instead, the answer is $ \begin{bmatrix}0\\0\\1\end{bmatrix}$. I think it's because I need to choose 1 independent variable (right word?), and a method is to set the last variable to some real $t$ - $x_3 = t$, then $x_1 = x_2 = 0 \cdot t$ ...but above I found $x_3 = 0$? So I know to set $x_3$ to $t$ - but it doesn't make sense to me in this particular case, where there are no dependent variables?

(I can see that it is an identity matrix, which is also a Markov matrix, but I don't think those are relevant?)
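(The matrix from the problem isn't shown above, but the situation can be reproduced with a made-up matrix and a $\lambda$ that is *not* one of its eigenvalues: then $\lambda I - A$ row-reduces to the identity, and the only solution of $(\lambda I - A)v = 0$ is $v = 0$. A quick sympy check, using a hypothetical $A$:)

```python
import sympy as sp

# Hypothetical example matrix -- NOT the one from this thread, which wasn't given.
# Its eigenvalues are 1 and 3 (it is upper triangular).
A = sp.Matrix([[1, 2],
               [0, 3]])

# lam = 2 is NOT an eigenvalue of A
lam = 2
M = lam * sp.eye(2) - A

print(M.rref()[0])    # row-reduces to the identity matrix
print(M.nullspace())  # [] -- only the zero vector solves M v = 0
```

So if row reduction of $\lambda I - A$ really yields the identity, the value $\lambda$ used was not an eigenvalue in the first place.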
 
It's not clear "what" matrix you row-reduced, nor why, so it's impossible to verify you did it correctly.

That said, finding eigenvectors usually is a two-step process:

Step 1 - given the matrix $A$, calculate the determinant of $xI - A$ (this will be a polynomial in $x$).

Step 2 - for any root of that polynomial, say $\lambda$, solve the system of equations:

$(\lambda I - A)v = 0$.

This system will typically be "under-determined", since we know that $\lambda I - A$ is singular (why?).

Typically, one will get a solution of the form (in $3$ dimensions, for example): $(at,bt,ct) = t(a,b,c)$. Any non-zero value of $t$ gives an eigenvector. Often, one of $a,b,c$ is chosen to be $1$, and it is often possible to have all $3$ coordinates be chosen to be integers.
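The two steps above can be sketched with sympy. The matrix here is a made-up example (the thread never gives the original one); any square matrix works the same way:

```python
import sympy as sp

# Hypothetical example matrix (not the one from the thread)
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
x = sp.symbols('x')
I = sp.eye(3)

# Step 1: the characteristic polynomial det(xI - A) and its roots
char_poly = (x * I - A).det()
eigenvalues = sp.roots(sp.Poly(char_poly, x))  # {2: 2, 3: 1} (root: multiplicity)

# Step 2: for each root lambda, solve (lambda*I - A) v = 0;
# nullspace() returns a basis of the solution space (the eigenspace)
for lam in eigenvalues:
    basis = (lam * I - A).nullspace()
    print(lam, [list(v) for v in basis])
```

Because $\lambda I - A$ is singular for each root $\lambda$, `nullspace()` always returns at least one nonzero basis vector here; it would return an empty list only if $\lambda$ were not actually an eigenvalue.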
 
Hi - I'm sure it's correct - and it's not the first one I've encountered where solving $(\lambda I - A)v = 0$ could have all the variables $= 0$. (It is singular because we found eigenvalues for which the characteristic polynomial $\det(\lambda I - A) = 0$.)

I know that the method is to choose one variable (in this case $v_3$) to be $1$, so in this case we get $t(0,0,1)$. But I'd like to understand why $t(0,0,0)$ is not valid (because solving $(\lambda I - A)v = 0$ gives $v_1 = v_2 = v_3 = 0$)?
 
Eigenvectors are always non-zero.
 
Deveno said:
Eigenvectors are always non-zero.

Hi, just came across a lemma that says an eigenspace is a subspace - which I knew. But it then goes on to say

"Proof. An eigenspace must be nonempty — for one thing it contains the zero vector — and so we need only check closure"

Doesn't this suggest that the zero vector CAN be an eigenvector?
 
An eigenspace is composed of all eigenvectors belonging to a certain eigenvalue, AND the zero vector (which is NOT an eigenvector).
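As a concrete illustration with numpy (the matrix is an arbitrary example, not from the thread): every eigenvector a library returns is nonzero, while the zero vector satisfies $Av = \lambda v$ for *any* scalar $\lambda$, which is precisely why the definition excludes it:

```python
import numpy as np

# Arbitrary example matrix (not from the thread); eigenvalues are 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)

for i in range(len(vals)):
    v = vecs[:, i]
    assert np.linalg.norm(v) > 0           # returned eigenvectors are nonzero
    assert np.allclose(A @ v, vals[i] * v)  # and satisfy A v = lambda v

# The zero vector also satisfies A v = lambda v -- for ANY lambda, e.g. 5 --
# so admitting it as an eigenvector would make every scalar an eigenvalue.
zero = np.zeros(2)
assert np.allclose(A @ zero, 5.0 * zero)
```

So the eigenspace of $\lambda$ is the full null space of $\lambda I - A$ (zero vector included), while the *eigenvectors* are its nonzero elements.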
 
