Diagonalisation of a linear map

In summary: if v1,...,vr are eigenvectors of a linear map T from a vector space V to V, corresponding to distinct eigenvalues λ1,...,λr, then they are linearly independent. However, the matrix A representing T need not be diagonal. If T is diagonalizable, then A is similar to a diagonal matrix whose diagonal entries are the eigenvalues.
  • #1
jam12
For the theorem: " If v1,...,vr are eigenvectors of a linear map T going from vector space V to V, with respect to distinct eigenvalues λ1,...,λr, then they are linearly independent eigenvectors".
Are the λ-eigenspaces all of dimension 1, for each of λ1,...,λr?
Is the dimension of V equal to r, i.e. dim(V) = r, i.e. a basis for V contains r elements?

I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ([itex]P^{-1}AP = D[/itex], where D contains the eigenvalues of T)? Not just in this case, but always? This one's bugging me.
 
  • #2
If every eigenvector corresponding to the eigenvalue [itex]\lambda_r[/itex] is a multiple of [itex]v_r[/itex], then, yes, its eigenspace has [itex]\{v_r\}[/itex] as a basis and so is one-dimensional. But it is not necessary that the eigenspace corresponding to a given eigenvalue be one-dimensional. For example, the matrix
[tex]\begin{bmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}[/tex]
has 2 and 3 as its eigenvalues. The eigenvalue 3 has <0, 0, 1> and its multiples as eigenvectors, so its eigenspace is one-dimensional. The eigenvalue 2, however, has any linear combination of <1, 0, 0> and <0, 1, 0> as an eigenvector, so its eigenspace has dimension 2. Of course, that is a "diagonal" matrix, and the sum of the dimensions of the eigenspaces is equal to the dimension of the overall space.
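Here is a quick numerical check of those dimensions (just a NumPy sketch I'm adding for illustration, not part of the argument): the dimension of the λ-eigenspace is the dimension of the kernel of A - λI.
[code]
import numpy as np

# The diagonal matrix from the example above
A = np.diag([2.0, 2.0, 3.0])

# dim of the λ-eigenspace = dim ker(A - λI) = 3 - rank(A - λI)
print(3 - np.linalg.matrix_rank(A - 2.0 * np.eye(3)))  # 2
print(3 - np.linalg.matrix_rank(A - 3.0 * np.eye(3)))  # 1
[/code]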

The matrix
[tex]\begin{bmatrix}2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}[/tex]
also has 2 as a "double" eigenvalue, but the only eigenvectors corresponding to eigenvalue 2 are the multiples of <1, 0, 0>. Of course, 3 is still an eigenvalue with eigenvector <0, 0, 1>.
Since there do NOT exist three independent eigenvectors, there does NOT exist a basis for the space consisting of eigenvectors, and the matrix CANNOT be diagonalized.

A matrix can be diagonalized if and only if there exists a basis for the vector space consisting of eigenvectors of the matrix.
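The same kind of check (again just a NumPy sketch for illustration) shows why the second matrix fails: its eigenspace dimensions add up to 2, not 3.
[code]
import numpy as np

# The non-diagonalizable matrix from the example above
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Eigenspace dimensions: dim ker(B - λI) = 3 - rank(B - λI)
dim2 = 3 - np.linalg.matrix_rank(B - 2.0 * np.eye(3))  # 1, even though 2 is a double root
dim3 = 3 - np.linalg.matrix_rank(B - 3.0 * np.eye(3))  # 1
print(dim2 + dim3)  # 2 < 3, so there is no basis of eigenvectors
[/code]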
 
  • #3
"The eigvalue 3 has <0, 0, 1> and its multiples as eigenvectors and so its eigenspace is two dimensional." You mean dimension 1 here?

Thanks for that, but what about the second question? I will repeat:
"I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ([itex]P^{-1}AP = D[/itex], where D contains the eigenvalues of T)?
This one's bugging me."
 
  • #4
If your transformation is in fact diagonalizable, then yes, there exists a basis such that the matrix A with respect to this basis is a diagonal matrix with the eigenvalues on its diagonal. The matrix P that you use to conjugate A is the change-of-basis matrix whose columns are the eigenvectors of T. Something similar holds for triangularizable matrices.
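As an illustration (a NumPy sketch; the particular matrix is just something made up for the example), you can build P from the eigenvectors and verify [itex]P^{-1}AP = D[/itex]:
[code]
import numpy as np

# A made-up diagonalizable matrix; its eigenvalues are 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P   # change of basis to the eigenvector basis

print(np.allclose(D, np.diag(vals)))  # True: D is diagonal with the eigenvalues
[/code]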
 
  • #5
jam12 said:
"The eigvalue 3 has <0, 0, 1> and its multiples as eigenvectors and so its eigenspace is two dimensional." You mean dimension 1 here?
Yes, thanks. I will edit my post to fix that.

Thanks for that, but what about the second question? I will repeat:
"I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ([itex]P^{-1}AP = D[/itex], where D contains the eigenvalues of T)?
This one's bugging me."
I'm not sure what you mean by that: "is A just the diagonal matrix"? No, A is not necessarily diagonal. IF there exists a basis for the vector space consisting of eigenvectors of T (if there is a "complete set of eigenvectors"), then T written as a matrix using that basis is diagonal. If A is "diagonalizable", then, yes, [itex]P^{-1}AP= D[/itex] where D is a diagonal matrix with the eigenvalues on its diagonal and P is the matrix with the corresponding eigenvectors as columns.

But, as I said, not every matrix is diagonalizable.
 
  • #6
HallsofIvy said:
Yes, thanks. I will edit my post to fix that. I'm not sure what you mean by that: "is A just the diagonal matrix"? No, A is not necessarily diagonal. IF there exists a basis for the vector space consisting of eigenvectors of T (if there is a "complete set of eigenvectors"), then T written as a matrix using that basis is diagonal. If A is "diagonalizable", then, yes, [itex]P^{-1}AP= D[/itex] where D is a diagonal matrix with the eigenvalues on its diagonal and P is the matrix with the corresponding eigenvectors as columns.

But, as I said, not every matrix is diagonalizable.

OK, so when T is diagonalisable, D equals A only when we use the basis consisting of the eigenvectors of T to get A, so pre-multiplying A by [itex]P^{-1}[/itex] and post-multiplying by P has no effect on A; it remains the same?
What would happen if we were in the vector space R^n and used the standard basis of R^n to represent T, where T : R^n → R^n? The matrix representing T won't necessarily be diagonal, right? So it's only when we use an eigenvector basis that we get a diagonal matrix for T?

This sums up what I think you are saying; it's from Wikipedia:
"A linear map T : V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T."
 
  • #7
It's very confusing when you say "D is A" or, in your previous post, "matrix A representing the linear transformation T just the diagonal matrix". A is NOT the same as D and, in this situation, is not a diagonal matrix. A is similar to a diagonal matrix, which simply means [itex]P^{-1}AP= D[/itex] for some invertible matrix P. Or, from the point of view of linear transformations, A and D are matrices representing the same linear transformation in different bases.
 

1. What is diagonalisation of a linear map?

Diagonalisation of a linear map is the process in linear algebra of representing the map, where possible, by a diagonal matrix with respect to a suitably chosen basis. This simpler form allows for easier analysis and computation with the linear map.

2. Why is diagonalisation of a linear map important?

Diagonalisation of a linear map is important because it allows for easier computation and analysis of the linear map. It also reveals important information about the properties of the map, such as its eigenvalues and eigenvectors.

3. What is the process of diagonalisation of a linear map?

The process of diagonalisation involves finding a basis of eigenvectors for the map, constructing a diagonal matrix D from the eigenvalues, and forming the change-of-basis matrix P whose columns are those eigenvectors, so that the map's original matrix A satisfies [itex]P^{-1}AP = D[/itex].
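A minimal end-to-end sketch of those three steps in NumPy (the specific matrix is made up for illustration):
[code]
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Step 1: find the eigenvalues and a basis of eigenvectors (columns of P)
vals, P = np.linalg.eig(A)

# Step 2: build the diagonal matrix from the eigenvalues
D = np.diag(vals)

# Step 3: the change of basis P^{-1} A P reproduces D
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True
[/code]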

4. When can a linear map be diagonalised?

A linear map can be diagonalised if and only if it has a full set of linearly independent eigenvectors, that is, a basis consisting of eigenvectors. Not every map has this property, but certain classes of maps are guaranteed to, such as real symmetric or, more generally, normal maps.

5. What are the benefits of diagonalisation of a linear map?

The benefits of diagonalisation of a linear map include simplification of the map, easier computation and analysis, and the ability to easily find the eigenvalues and eigenvectors of the map. It also allows for the identification of important properties of the map, such as symmetry and orthogonality.
