Am I Making an Obvious Mistake with These Eigenvectors?

hnicholls
I am evaluating the following 2 x 2 matrix:

|2 0 |
|0 3 |

with eigenvalues 2 and 3.

If I use 2 and calculate the eigenvector:

R - λI =

|2-λ 0 |
|0 3-λ |

R - 2I =

|0 0 |
|0 1 |

|0 0 ||a|   |0|
|0 1 ||b| = |0|


a = 0 and b = 1

So eigenvector of eigenvalue 2 is

|0|
|1|


but it seems that it ought to be

|1|
|0|

Am I making an obvious mistake?
 
hnicholls said:
|0 0 ||a|   |0|
|0 1 ||b| = |0|


a = 0 and b = 1
This conclusion is what you need to rethink.
 
From
\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= \begin{bmatrix}0 \\ 0 \end{bmatrix}
we get
\begin{bmatrix}0(a) \\ 1(b)\end{bmatrix}= \begin{bmatrix}0 \\ b\end{bmatrix}= \begin{bmatrix}0 \\ 0 \end{bmatrix}

That does NOT say "a=0 and b= 1".
 
The matrix you are considering is already diagonal. This means that the standard basis vectors

\begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix}

are already eigenvectors of the matrix, corresponding to the eigenvalues in the upper left and lower right corners, respectively.

Also:

hnicholls said:
|0 0 ||a|   |0|
|0 1 ||b| = |0|

a = 0 and b = 1

is wrong. If you write down the equations corresponding to that matrix equality you will have:

\left\{\begin{array}{lcl} 0 & = & 0 \\ b & = & 0 \end{array}\right.

As you can see, this puts a constraint on b and not a.
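
If it helps to cross-check numerically, here is a minimal sketch using numpy (nothing in the thread depends on it; numpy's eig is just one way to verify the constraint falls on b):

import numpy as np

# The diagonal matrix from the original post.
R = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(R)

print(eigenvalues)         # [2. 3.]
print(eigenvectors[:, 0])  # [1. 0.]  eigenvector for eigenvalue 2, not (0, 1)
print(eigenvectors[:, 1])  # [0. 1.]  eigenvector for eigenvalue 3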
 
Thank you all. These responses were very helpful.

I now have this question: if there are no constraints on the value of a, how can we determine that the value of a is 1 for this diagonal matrix?
 
You can't. If \mathbf{x} is an eigenvector of a matrix, then so is c \mathbf{x} for any c \neq 0. It is sometimes conventional to impose the normalization condition:

\mathbf{x}^{\dagger} \cdot \mathbf{x} = 1

This fixes the magnitude of a, but not its phase (if we consider it a complex number) or sign (if we consider it a real one). It is conventional to have "most" of the components carry a positive sign; that is why we choose the phase to be zero.
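
As a sketch of that convention in code (assuming numpy and real vectors; the sign rule here is just one common choice, not the only one):

import numpy as np

def normalize_eigenvector(x):
    # Impose the normalization condition x^dagger . x = 1 ...
    x = x / np.linalg.norm(x)
    # ... then fix the leftover sign freedom by making the first
    # nonzero component positive (one common convention).
    if x[np.nonzero(x)[0][0]] < 0:
        x = -x
    return x

# Any nonzero multiple of (1, 0) ends up as the same normalized vector.
print(normalize_eigenvector(np.array([-7.0, 0.0])))  # [1. 0.]
print(normalize_eigenvector(np.array([0.5, 0.0])))   # [1. 0.]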
 
I understand that any scalar multiple of the basis eigenvector will also be an eigenvector, but what I am still confused by is that for matrices that are not diagonal we use the characteristic equation to compute the eigenvalues and then find the eigenvectors by calculating a matrix equation which represents a system of two linear equations, which in turn reduces to a single linear equation which is solved (in my example) for a and b.

Does this procedure not work if we have a diagonal matrix?

In other words, do diagonal matrices, where the eigenvectors are basis vectors (of the standard basis) not allow for a computation of their eigenvectors using the technique we would use for a matrix which is not diagonalized?
 
hnicholls said:
I understand that any scalar multiple of the basis eigenvector will also be an eigenvector, but what I am still confused by is that for matrices that are not diagonal we use the characteristic equation to compute the eigenvalues and then find the eigenvectors by calculating a matrix equation which represents a system of two linear equations, which in turn reduces to a single linear equation which is solved (in my example) for a and b.

Does this procedure not work if we have a diagonal matrix?

In other words, do diagonal matrices, where the eigenvectors are basis vectors (of the standard basis) not allow for a computation of their eigenvectors using the technique we would use for a matrix which is not diagonalized?

The eigenbasis of a matrix is not uniquely determined. Do you know what the term rank of a linear system means?
 
Yes. The maximum number of linearly independent rows or columns in a given matrix.
 
Well, when you substitute a particular eigenvalue into R - λI, the rank of the system drops below the dimension of the matrix, and the difference of these two numbers equals the dimension of the subspace spanned by the corresponding eigenvectors. Even when that subspace is one-dimensional, i.e. the eigenvalue is non-degenerate, the norm is still not determined.
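
A sketch of that rank count for the diagonal example (again assuming numpy):

import numpy as np

R = np.array([[2.0, 0.0],
              [0.0, 3.0]])
n = R.shape[0]

for lam in (2.0, 3.0):
    rank = np.linalg.matrix_rank(R - lam * np.eye(n))
    # Eigenspace dimension = n - rank(R - lambda*I).
    print(lam, n - rank)   # both print 1: each eigenvalue is non-degenerate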
 
I understand your point. But this would be true of any operator matrix, not only true of diagonal operator matrices, right?
 
For diagonal matrices, the standard basis is already the eigenbasis.
 
hnicholls said:
I understand that any scalar multiple of the basis eigenvector will also be an eigenvector, but what I am still confused by is that for matrices that are not diagonal we use the characteristic equation to compute the eigenvalues and then find the eigenvectors by calculating a matrix equation which represents a system of two linear equations, which in turn reduces to a single linear equation which is solved (in my example) for a and b.

Does this procedure not work if we have a diagonal matrix?

In other words, do diagonal matrices, where the eigenvectors are basis vectors (of the standard basis) not allow for a computation of their eigenvectors using the technique we would use for a matrix which is not diagonalized?
You were already shown, in the first few responses, how that method does work when the matrix is diagonal! You just used it incorrectly. If you had done the same thing with a non-diagonal matrix you would have gotten the wrong eigenvectors for it.

The non-diagonal matrix
\begin{bmatrix}-1 & -2 \\ 6 & 6 \end{bmatrix}

also has eigenvalues 2 and 3. To find an eigenvector corresponding to eigenvalue 2, row reduce
\begin{bmatrix}-1-2 & -2 \\ 6 & 6-2\end{bmatrix}= \begin{bmatrix} -3 & -2 \\ 6 & 4\end{bmatrix}

Row reduction of that immediately leads to
\begin{bmatrix}1 & \frac{2}{3}\\ 0 & 0 \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= \begin{bmatrix}0 \\ 0\end{bmatrix}

If you did the same thing as before, you would now conclude that b = 0, which is incorrect. What is true is that 0b = 0, so b can be anything; the first row then gives a = -(2/3)b, so the eigenvectors for eigenvalue 2 are the nonzero multiples of (2, -3). You seem to be under the impression that there is one "correct" eigenvector corresponding to a given eigenvalue. As you have been told here, there are always infinitely many eigenvectors corresponding to any eigenvalue.
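
A numerical check of this non-diagonal example (a minimal sketch, assuming numpy):

import numpy as np

A = np.array([[-1.0, -2.0],
              [ 6.0,  6.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [2. 3.] (order may vary)

# The column for eigenvalue 2 should be a multiple of (2, -3),
# matching a = -(2/3)b from the row-reduced system.
v = eigenvectors[:, np.argmin(np.abs(eigenvalues - 2.0))]
print(v / v[0])     # [ 1.  -1.5]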
 
Thank you. I think I am finally understanding this.

One last question. For a non-diagonal matrix:


|4 2 |
|3 -1 |

with eigenvalues 5 and -2.

If I use 5 and calculate the eigenvector:

R - λI =

|4-λ 2 |
|3 -1-λ |

R - 5I =

|-1 2 |
| 3 -6 |

|-1  2 ||a|   |0|
| 3 -6 ||b| = |0|

a = 2b
3a = 6b


a = 2 and b = 1

So eigenvector of eigenvalue 5 is

|2|
|1|

As there are infinitely many eigenvectors corresponding to any eigenvalue, what would be another eigenvector corresponding to this one? What conditions do these infinitely many eigenvectors satisfy?
 
These, for any \lambda \neq 0:

\begin{bmatrix}2\lambda \\ \lambda\end{bmatrix}
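
Multiplying it out confirms the eigenvalue equation for any such \lambda:

\begin{bmatrix}4 & 2 \\ 3 & -1\end{bmatrix}\begin{bmatrix}2\lambda \\ \lambda\end{bmatrix}= \begin{bmatrix}8\lambda + 2\lambda \\ 6\lambda - \lambda\end{bmatrix}= \begin{bmatrix}10\lambda \\ 5\lambda\end{bmatrix}= 5\begin{bmatrix}2\lambda \\ \lambda\end{bmatrix}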
 
Great. So using any scalar multiple of the basis eigenvector, we could have for this eigenvalue 5:

|4|    |8|
|2| or |4|, etc.

but not

|4|
|1|

right?
 
Right.
 
hnicholls said:
Great. So using any scalar multiple of the basis eigenvector, we could have for this eigenvalue 5:

|4|    |8|
|2| or |4|, etc.

but not

|4|
|1|

right?

Yes, with the exception of scalar multiplication by 0, as that would result in the zero vector. Also, it's your choice which pair of eigenvectors to use as a basis.
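
For completeness, repeating the procedure with the other eigenvalue, λ = -2 (a quick sketch; this case isn't worked above), gives the second family:

R - (-2)I = \begin{bmatrix}4+2 & 2 \\ 3 & -1+2\end{bmatrix}= \begin{bmatrix}6 & 2 \\ 3 & 1\end{bmatrix}

Both rows give 3a + b = 0, so b = -3a and the eigenvectors for -2 are the nonzero multiples of

\begin{bmatrix}1 \\ -3\end{bmatrix}

Picking one vector from this family and one from the multiples of (2, 1) for eigenvalue 5 gives an eigenbasis.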
 
In this particular example, yes.
 