# Diagonalisation of a linear map

For the theorem: "If v1, ..., vr are eigenvectors of a linear map T from a vector space V to V, with respect to distinct eigenvalues λ1, ..., λr, then they are linearly independent."
Are the λ-eigenspaces all of dimension 1, for each of λ1, ..., λr?
Is the dimension of V equal to r, i.e. dim(V) = r, i.e. there are r elements in a basis for V?
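As a quick numerical sanity check of the theorem (a hypothetical 3×3 matrix, not one from the question): with three distinct eigenvalues, the three eigenvectors should be linearly independent, i.e. the matrix with them as columns should have full rank.

```python
import numpy as np

# Hypothetical example with distinct eigenvalues 1, 2, 5 (upper triangular,
# so the eigenvalues are the diagonal entries).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, V = np.linalg.eig(A)   # columns of V are eigenvectors
rank = np.linalg.matrix_rank(V)
print(rank)  # 3: the r = 3 eigenvectors are linearly independent
```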

I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ($P^{-1}AP= D$, where D contains the eigenvalues of T)? Not just in this case, but always? This one's bugging me.

HallsofIvy
Homework Helper
If every eigenvector corresponding to the eigenvalue $\lambda_r$ is a multiple of $v_r$, then, yes, its eigenspace has $\{v_r\}$ as a basis and so is one-dimensional. But it is not necessary that the eigenspace corresponding to a given eigenvalue be one-dimensional. For example, the matrix
$$\begin{bmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}$$
has 2 and 3 as its eigenvalues. The eigenvalue 3 has <0, 0, 1> and its multiples as eigenvectors, so its eigenspace is one-dimensional. The eigenvalue 2, however, has every linear combination of <1, 0, 0> and <0, 1, 0> as an eigenvector, so its eigenspace has dimension 2. Of course, that is a diagonal matrix, and the sum of the dimensions of the eigenspaces is equal to the dimension of the overall space.
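You can check those eigenspace dimensions numerically: the dimension of the λ-eigenspace is the nullity of A − λI, which by rank–nullity is n minus the rank.

```python
import numpy as np

# The diagonal matrix from the example above.
A = np.diag([2.0, 2.0, 3.0])
n = 3

# Eigenspace of 3: null space of (A - 3I). Here A - 3I = diag(-1, -1, 0).
dim_eigenspace_3 = n - np.linalg.matrix_rank(A - 3 * np.eye(n))

# Eigenspace of 2: null space of (A - 2I). Here A - 2I = diag(0, 0, 1).
dim_eigenspace_2 = n - np.linalg.matrix_rank(A - 2 * np.eye(n))

print(dim_eigenspace_3)  # 1
print(dim_eigenspace_2)  # 2
```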

The matrix
$$\begin{bmatrix}2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}$$
also has 2 as a "double" eigenvalue, but the only eigenvectors corresponding to eigenvalue 2 are the multiples of <1, 0, 0>. Of course, 3 is still an eigenvalue with eigenvector <0, 0, 1>.
Since there do NOT exist three independent eigenvectors, there does NOT exist a basis for the space consisting of eigenvectors, and the matrix CANNOT be diagonalized.
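The same rank–nullity check shows the defect in this second matrix: the eigenvalue 2 appears twice on the diagonal (algebraic multiplicity 2), but its eigenspace is only one-dimensional.

```python
import numpy as np

# The non-diagonalizable matrix above.
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Geometric multiplicity of eigenvalue 2 = nullity of (B - 2I).
# B - 2I = [[0,1,0],[0,0,0],[0,0,1]], which has rank 2.
geo_mult_2 = 3 - np.linalg.matrix_rank(B - 2 * np.eye(3))
print(geo_mult_2)  # 1, even though the algebraic multiplicity is 2
```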

A matrix can be diagonalized if and only if there exists a basis for the vector space consisting of eigenvectors of the matrix.

"The eigvalue 3 has <0, 0, 1> and its multiples as eigenvectors and so its eigenspace is two dimensional." You mean dimension 1 here?

Thanks for that, but what about the second question? I will repeat:
"I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ($P^{-1}AP= D$, where D contains the eigenvalues of T)? This one's bugging me."

If your transformation is in fact diagonalizable, then yes, there exists a basis such that the matrix of T with respect to this basis is a diagonal matrix with the eigenvalues on its diagonal. The matrix P that you use to conjugate A is the change-of-basis matrix, with the eigenvectors of T as its columns. Something similar holds for triangularizable matrices.
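Here is a small numerical illustration of that conjugation (a hypothetical 2×2 matrix with distinct eigenvalues, so it is guaranteed diagonalizable): build P from the eigenvectors and verify that $P^{-1}AP$ comes out diagonal.

```python
import numpy as np

# Hypothetical diagonalizable matrix; its eigenvalues are 1 and 4.
A = np.array([[2.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P        # conjugate by the change-of-basis matrix

# D is (up to floating-point noise) diagonal, with the eigenvalues
# on its diagonal in the same order numpy returned them.
print(np.round(D, 10))
```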

HallsofIvy
Homework Helper
"The eigvalue 3 has <0, 0, 1> and its multiples as eigenvectors and so its eigenspace is two dimensional." You mean dimension 1 here?
Yes, thanks. I will edit my post to fix that.

thanks for that but what about the second question? l will repeat:
"I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix ($P^{-1}AP= D$, where D contains the eigenvalues of T)? This one's bugging me."
I'm not sure what you mean by that. "Is A just the diagonal matrix"? No, A is not necessarily diagonal. IF there exists a basis for the vector space consisting of eigenvectors of T (if there is a "complete set of eigenvectors"), then the matrix of T with respect to that basis is diagonal. If A is "diagonalizable" then, yes, $P^{-1}AP= D$ where D is a diagonal matrix with the eigenvalues on its diagonal and P is the matrix with the corresponding eigenvectors as columns.

But, as I said, not every matrix is diagonalizable.


OK, so when T is diagonalisable, D equals A only when we use the basis consisting of the eigenvectors of T to get A, so pre-multiplying A by $P^{-1}$ and post-multiplying A by P has no effect: A remains the same?
What would happen if we were in the vector space R^n and used the standard basis for R^n to represent T, where T : R^n → R^n? The matrix representing T won't necessarily be diagonal, right? So it's only when we use an eigenvector basis that we get a diagonal matrix for T?

This sums up what I think you are saying; it's from wiki:
"A linear map T : V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T."
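That wiki criterion can be tested directly: sum the dimensions of the eigenspaces (the nullities of A − λI over the distinct eigenvalues λ) and compare with dim(V). Below is a rough numerical sketch of the check (the function name and tolerances are mine, and floating-point eigenvalue grouping is only approximate; exact multiplicities really need symbolic computation).

```python
import numpy as np

def is_diagonalizable(A):
    """Numerical sketch of the criterion: A is diagonalizable iff the
    sum of the dimensions of its eigenspaces equals dim(V)."""
    n = A.shape[0]
    # Group nearly-equal computed eigenvalues into distinct ones.
    distinct = []
    for lam in np.linalg.eigvals(A):
        if not any(abs(lam - mu) < 1e-8 for mu in distinct):
            distinct.append(lam)
    # Eigenspace dimension = nullity of (A - lam I) = n - rank.
    total = sum(n - np.linalg.matrix_rank(A - lam * np.eye(n))
                for lam in distinct)
    return bool(total == n)

diag_example = np.diag([2.0, 2.0, 3.0])                  # from the thread
defective    = np.array([[2.0, 1.0, 0.0],
                         [0.0, 2.0, 0.0],
                         [0.0, 0.0, 3.0]])               # from the thread
print(is_diagonalizable(diag_example))  # True
print(is_diagonalizable(defective))     # False
```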

HallsofIvy
It's very confusing when you say "D is A" or, in your previous post, "matrix A representing the linear transformation T just the diagonal matrix". A is NOT the same as D and, in this situation, is not a diagonal matrix. A is similar to a diagonal matrix, which simply means $P^{-1}AP= D$ for some invertible matrix P. Or, from the point of view of linear transformations, A and D are matrices representing the same linear transformation in different bases.