Linear operators, eigenvalues, diagonal matrices

Summary:
The discussion revolves around the representation of a linear operator T in terms of its eigenvalues and corresponding matrices. It is established that T can be represented by a diagonal matrix if it has distinct eigenvalues, which are found by solving the equations derived from T's action on basis vectors. A confusion arises regarding the eigenvalues obtained from a specific basis arrangement, leading to a realization that the order of basis vectors affects the resulting matrix representation. The conversation clarifies that while the diagonal entries of a matrix can indicate eigenvalues, they do not guarantee distinct eigenvalues or the presence of a complete set of independent eigenvectors. Ultimately, it is emphasized that all n by n matrices have n eigenvalues when considering multiplicities and complex numbers, but not necessarily n independent eigenvectors.
bjgawp
So I have a couple of questions in regards to linear operators and their eigenvalues and how it relates to their matrices with respect to some basis.

For example, I want to show that given a linear operator T such that T(x_1,x_2,x_3) = (3x_3, 2x_2, x_1) then T can be represented by a diagonal matrix with respect to some basis of V = \mathbb{R}^3.

So one approach is to use a theorem that says: If T has dim(V) distinct eigenvalues, then T has a diagonal matrix with respect to some basis V.

So we simply look for nonzero solutions of T(x_1, x_2, x_3) = \lambda (x_1, x_2, x_3), i.e.: \lambda x_1 = 3 x_3, \lambda x_2 = 2 x_2, \lambda x_3 = x_1

Combining the first and third equations gives \lambda^2 x_3 = 3 x_3, so we find \lambda = 2, \pm \sqrt 3 with some corresponding eigenvectors. These are dim(V) = 3 distinct eigenvalues, so we use the theorem and finish off the question.
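(As a quick sanity check, not part of the original argument, the same eigenvalues come out numerically from the standard-basis matrix of T; a minimal numpy sketch:)

```python
import numpy as np

# Matrix of T(x1, x2, x3) = (3*x3, 2*x2, x1) in the standard basis:
# the columns are T(e1) = (0,0,1), T(e2) = (0,2,0), T(e3) = (3,0,0).
A = np.array([[0, 0, 3],
              [0, 2, 0],
              [1, 0, 0]], dtype=float)

eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)  # approximately [-1.732, 1.732, 2.0], i.e. -sqrt(3), sqrt(3), 2
```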

So finally, my problem with all this. The matrix of T with respect to the standard basis {(0,0,1), (0,1,0),(1,0,0)} (purposely out of order) would be:

\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}

which is diagonal, right? But I have a theorem that says that when T has an upper-triangular matrix with respect to some basis, the eigenvalues of T "consist precisely of the entries on the diagonal of that upper-triangular matrix".

But I didn't find 3 and 1 as eigenvalues. Is it because I switched the standard basis elements out of order? Why would that matter? And doesn't this last theorem imply that every linear operator has precisely dim V eigenvalues? :S

It was a long one but any comments are appreciated!
 
bjgawp said:
So finally, my problem with all this. The matrix of T with respect to the standard basis {(0,0,1), (0,1,0),(1,0,0)} (purposely out of order) would be:

\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}
This is wrong. If we call the basis vectors v_1,v_2,v_3 (in the order you've written them), we have for example

T_{31}=(Tv_1)_3=(3v_3)_3=3(v_3)_3=3\cdot 1=3

The third component of v_3 in the basis you have defined is 1, not 0, because v_3=0\cdot v_1+0\cdot v_2+1\cdot v_3.
 
bjgawp said:
So finally, my problem with all this. The matrix of T with respect to the standard basis {(0,0,1), (0,1,0),(1,0,0)} (purposely out of order) would be:

\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}
No, if you use that order in the "domain", you must also use it in the "range".
T(0, 0, 1)= (3, 0, 0)= 0(0, 0, 1)+ 0(0, 1, 0)+ 3(1, 0, 0) so the first column will be
\begin{bmatrix}0 \\ 0 \\ 3\end{bmatrix}
T(1, 0, 0)= (0, 0, 1)= 1(0, 0, 1)+ 0(0, 1, 0)+ 0(1, 0, 0) so the third column will be
\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix}

In this basis, the matrix corresponding to T is
\begin{bmatrix}0 & 0 & 1 \\0 & 2 & 0\\ 3 & 0 & 0\end{bmatrix}
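(This change of basis can be double-checked numerically by conjugating the standard-basis matrix of T by the matrix P whose columns are the reordered basis vectors; a minimal numpy sketch, where P happens to be a permutation matrix:)

```python
import numpy as np

# Standard-basis matrix of T: columns are T(e1), T(e2), T(e3).
A = np.array([[0, 0, 3],
              [0, 2, 0],
              [1, 0, 0]], dtype=float)

# Reordered basis v1=(0,0,1), v2=(0,1,0), v3=(1,0,0) as the columns of P.
P = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, 0]], dtype=float)

# Matrix of T in the reordered basis: P^{-1} A P.
B = np.linalg.inv(P) @ A @ P
print(B)
# [[0. 0. 1.]
#  [0. 2. 0.]
#  [3. 0. 0.]]
```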

 
Ah, what a silly mistake: I forgot to write each image vector as a linear combination of the basis elements.

But then I'm still confused about the whole eigenvalue issue. I know I must be misinterpreting the theorem. The elements along the diagonal are "precisely" the eigenvalues of T. Doesn't this mean that all linear operators have exactly dim V eigenvalues? (Obviously not, but I don't see why.)

Thanks a lot!
 
The elements along the main diagonal of a diagonal matrix, or of a Jordan normal form for a non-diagonalizable matrix, are the eigenvalues of the matrix. In general, though, finding the eigenvalues of a matrix is much harder than just reading them off the main diagonal!

It is, however, true that, allowing for multiplicities and allowing complex numbers, any n by n matrix has n eigenvalues, because its characteristic polynomial is an nth degree polynomial, which can be factored into n linear factors over the complex numbers.

An n by n matrix may not have n independent eigenvectors.
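(A concrete illustration of both points, as a minimal numpy sketch not from the original discussion: the 2 by 2 Jordan block with 2's on the diagonal has eigenvalue 2 counted twice, but only a one-dimensional eigenspace.)

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 2 with algebraic multiplicity 2.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# Both eigenvalues equal 2 (counted with multiplicity).
eigvals = np.sort(np.linalg.eigvals(J).real)
print(eigvals)  # [2. 2.]

# Dimension of the eigenspace for lambda = 2 is n - rank(J - 2I).
eigenspace_dim = 2 - np.linalg.matrix_rank(J - 2.0 * np.eye(2))
print(eigenspace_dim)  # 1 -- only one independent eigenvector
```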
 
Ah right. The entries along the diagonal of an upper-triangular matrix need not be distinct.

So then for example: T(x_1, x_2, x_3) = (2x_1, 2x_2, 3x_3)

We only have two distinct eigenvalues here, but with respect to the standard basis we would still have a diagonal matrix, just with a repeated entry along the diagonal. Right?
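(Checking this numerically seems to bear it out; a minimal numpy sketch:)

```python
import numpy as np

# T(x1, x2, x3) = (2*x1, 2*x2, 3*x3) in the standard basis.
D = np.diag([2.0, 2.0, 3.0])

# Eigenvalues with multiplicity: 2, 2, 3 -- only two distinct values.
eigvals = np.sort(np.linalg.eigvals(D).real)
print(eigvals)  # [2. 2. 3.]

# Eigenspace dimensions: 2 for lambda = 2, 1 for lambda = 3,
# so there are still 3 independent eigenvectors in total.
dim2 = 3 - np.linalg.matrix_rank(D - 2.0 * np.eye(3))
dim3 = 3 - np.linalg.matrix_rank(D - 3.0 * np.eye(3))
print(dim2, dim3)  # 2 1
```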

That clears things up a lot more. Thanks!
 