
Diagonalisation of a linear map

  1. Apr 18, 2010 #1
    For the theorem: "If v1, ..., vr are eigenvectors of a linear map T from a vector space V to V, with respect to distinct eigenvalues λ1, ..., λr, then they are linearly independent."
    Are the λ-eigenspaces all of dimension 1, for each of λ1, ..., λr?
    Is the dimension of V equal to r, i.e. dim(V) = r, i.e. there are r elements in a basis for V?

    I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix (P^{-1}AP = D, where D contains the eigenvalues of T)? Not just in this case, but always? This one's bugging me.
     
  3. Apr 18, 2010 #2

    HallsofIvy

    Science Advisor

    If all the eigenvectors corresponding to each eigenvalue [itex]\lambda_r[/itex] are multiples of [itex]v_r[/itex], then, yes, its eigenspace has [itex]\{v_r\}[/itex] as a basis and so is one-dimensional. But it is not necessary that the eigenspace corresponding to a given eigenvalue be one-dimensional. For example, the matrix
    [tex]\begin{bmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}[/tex]
    has 2 and 3 as its eigenvalues. The eigenvalue 3 has <0, 0, 1> and its multiples as eigenvectors, so its eigenspace is one-dimensional. The eigenvalue 2, however, has any (nonzero) linear combination of <1, 0, 0> and <0, 1, 0> as an eigenvector, so its eigenspace has dimension 2. Of course, that is a "diagonal" matrix, and the sum of the dimensions of the eigenspaces is equal to the dimension of the overall space.
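    If it helps to see this concretely, here is a short sketch (my own illustration, not part of the original post) using Python's sympy to list the eigenvalues of that diagonal matrix together with a basis of each eigenspace; the geometric multiplicity is the number of basis vectors returned.

[code]
from sympy import Matrix

A = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 3]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis) triples
for eigenvalue, alg_mult, basis in A.eigenvects():
    print(eigenvalue, alg_mult, len(basis))
# 2 2 2   -> eigenvalue 2 has a 2-dimensional eigenspace
# 3 1 1   -> eigenvalue 3 has a 1-dimensional eigenspace
[/code]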

    The matrix
    [tex]\begin{bmatrix}2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}[/tex]
    also has 2 as a "double" eigenvalue, but the only eigenvectors corresponding to eigenvalue 2 are the multiples of <1, 0, 0>. Of course, 3 is still an eigenvalue with eigenvector <0, 0, 1>.
    Since there do NOT exist three independent eigenvectors, there does NOT exist a basis for the space consisting of eigenvectors, and the matrix CANNOT be diagonalized.
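    The same quick check for this second matrix (again just an illustrative sympy sketch, not from the original post): eigenvalue 2 has algebraic multiplicity 2 but only a 1-dimensional eigenspace, and is_diagonalizable() confirms the matrix cannot be diagonalized.

[code]
from sympy import Matrix

B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

for eigenvalue, alg_mult, basis in B.eigenvects():
    print(eigenvalue, alg_mult, len(basis))
# 2 2 1   -> only one independent eigenvector for the "double" eigenvalue 2
# 3 1 1

print(B.is_diagonalizable())   # False
[/code]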

    A matrix can be diagonalized if and only if there exists a basis for the vector space consisting of eigenvectors of the matrix.
     
    Last edited by a moderator: Apr 20, 2010
  4. Apr 18, 2010 #3
    "The eigvalue 3 has <0, 0, 1> and its multiples as eigenvectors and so its eigenspace is two dimensional." You mean dimension 1 here?

    Thanks for that, but what about the second question? I will repeat:
    "I have another important question: is the matrix A representing the linear transformation T just the diagonal matrix (P^{-1}AP = D, where D contains the eigenvalues of T)?
    This one's bugging me."
     
  5. Apr 18, 2010 #4
    If your transformation is in fact diagonalizable, then yes, there exists a basis such that the matrix of T with respect to this basis is a diagonal matrix with the eigenvalues on its diagonal. The matrix P that you use to conjugate A is the change-of-basis matrix; its columns are the eigenvectors of T. Something similar holds for triangularizable matrices.
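    As a concrete sketch of that change of basis (my own example matrix, not from the thread), sympy's diagonalize() returns the matrix P whose columns are eigenvectors together with the diagonal matrix D, and one can verify that conjugating by P really produces D:

[code]
from sympy import Matrix

A = Matrix([[4, 1],
            [2, 3]])            # a diagonalizable matrix with eigenvalues 2 and 5

P, D = A.diagonalize()          # columns of P are eigenvectors of A, D is diagonal
assert P.inv() * A * P == D     # the change of basis turns A into D
print(D)                        # Matrix([[2, 0], [0, 5]])
[/code]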
     
  6. Apr 20, 2010 #5

    HallsofIvy

    User Avatar
    Science Advisor

    Yes, thanks. I will edit my post to fix that.

    I'm not sure what you mean by that. "Is A just the diagonal matrix"? No, A is not necessarily diagonal. IF there exists a basis for the vector space consisting of eigenvectors of T (if there is a "complete set of eigenvectors"), then T written as a matrix using that basis is diagonal. If A is "diagonalizable" then, yes, [itex]P^{-1}AP= D[/itex] where D is a diagonal matrix with the eigenvalues on its diagonal and P is the matrix with the corresponding eigenvectors as columns.

    But, as I said, not every matrix is diagonalizable.
     
  7. Apr 20, 2010 #6
    OK, so when T is diagonalisable, D is A only when we use the basis consisting of the eigenvectors of T to get A, so pre-multiplying A by P^{-1} and post-multiplying A by P has no effect on A; it remains the same?
    What would happen if we were in the vector space R^n and used the standard basis for R^n to represent T, where T : R^n → R^n? The matrix representing T won't necessarily be diagonal, right? So it's only when we use a basis of eigenvectors that we get a diagonal matrix for T?

    This sums up what I think you are saying; it's from Wikipedia:
    "A linear map T : V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T."
     
    Last edited: Apr 20, 2010
  8. Apr 21, 2010 #7

    HallsofIvy

    Science Advisor

    It's very confusing when you say "D is A" or, in your previous post, "matrix A representing the linear transformation T just the diagonal matrix". A is NOT the same as D and, in this situation, is not a diagonal matrix. A is similar to a diagonal matrix, which simply means [itex]P^{-1}AP= D[/itex] for some invertible matrix P. Or, from the point of view of linear transformations, A and D are matrices corresponding to the same linear transformation in different bases.
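    To see "same transformation, different bases" concretely (again my own small sympy sketch, not part of the post): apply T to a vector via A in standard coordinates, and via D after converting to eigenvector coordinates with P; the results agree.

[code]
from sympy import Matrix

A = Matrix([[4, 1],
            [2, 3]])          # matrix of T in the standard basis (not diagonal)
P, D = A.diagonalize()        # D is the matrix of the same T in the eigenvector basis

x = Matrix([3, -1])           # a vector in standard coordinates
c = P.inv() * x               # the same vector in eigenvector coordinates
assert P * (D * c) == A * x   # applying T gives the same result in either basis
[/code]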
     