
Eigenvector change of basis

  1. Sep 12, 2011 #1
    1. The problem statement, all variables and given/known data

    Let's say I have a matrix A which is symmetric.

    I diagonalize it, so that P^-1 A P = D.

    Question 1)
    Am I right to say that the principal axes of D are no longer the Cartesian axes (as they were for matrix A), but are instead given by the basis made up of the eigenvectors of A, which are the columns of P?

    So if my diagonal matrix D takes the form of, say,

    (1,0,0)
    (0,2,0)
    (0,0,3)

    Question 2)
    are the 1, 2, 3 actually 1, 2, 3 units along the new basis formed by my respective eigenvectors?

    i.e.,

    (1,0,0)(eigenvector of A corresponding to eigenvalue 1)
    (0,2,0)(eigenvector of A corresponding to eigenvalue 2)
    (0,0,3)(eigenvector of A corresponding to eigenvalue 3)

    so that it is a (3x3) x (3x1) = (3x1) matrix

    in the same sense as in a Cartesian system

    (1,0,0)(x)
    (0,2,0)(y)
    (0,0,3)(z)

    so that I get 1i + 2j + 3k, right?
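    A minimal numpy sketch of this picture (the symmetric matrix below is a made-up example chosen to have eigenvalues 1, 2, 3; numpy is assumed only to check the claims numerically):

[code]
import numpy as np

# Made-up symmetric matrix built to have eigenvalues 1, 2, 3
# (Q is just some orthogonal matrix from a QR factorization).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T

# Diagonalize: the columns of P are orthonormal eigenvectors of A.
w, P = np.linalg.eigh(A)          # w is approximately [1, 2, 3]
D = np.diag(w)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True: P^-1 A P = D

# In the eigenvector basis, A acts by stretching each coordinate
# by the corresponding eigenvalue along the principal axes.
v = rng.standard_normal(3)
coords = P.T @ v                  # coordinates of v in the eigenbasis
print(np.allclose(P.T @ (A @ v), w * coords))     # True
[/code]

    The last check says exactly this: in the eigenvector basis, A just stretches the three coordinates by 1, 2 and 3 along the principal axes.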

    Question 3)
    If now my eigenvector of A corresponding to eigenvalue 1 is given by (x, y, z),

    so that
    (1,0,0)(x1,y1,z1)
    (0,2,0)(x2,y2,z2)
    (0,0,3)(x3,y3,z3)

    so do I get a 3x3 matrix?

    where the resulting 3 rows are my 3 eigenvectors making up the new basis, except each has changed its magnitude due to the diagonal matrix. Is that all right?

    So isn't this actually DP^T? And isn't it equal to P^T A, as per the very first point above? So aren't the rows of P^T my 3 principal axes? Does this step have any significance? I think there is one, but I can't seem to see it. What does DP^T = P^T A tell me?


    Question 4)

    So if now I have
    (1,2,3)(x1,y1,z1)
    (4,5,6)(x2,y2,z2)
    (7,8,9)(x3,y3,z3)

    is this telling me that I have 3 vectors whose components are given by the matrix product above? Are the 3 vectors the rows of the resulting matrix?
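    Question 4 is really just about how matrix multiplication treats rows and columns, so here is a tiny numpy check (M and X below are arbitrary stand-ins, not anything from the problem):

[code]
import numpy as np

M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
X = np.array([[1., 0., 2.],        # stand-in for the rows (x1, y1, z1), ...
              [0., 3., 1.],
              [2., 1., 0.]])
R = M @ X

# Column i of the product is M applied to column i of X ...
print(np.allclose(R[:, 0], M @ X[:, 0]))   # True
# ... while row i of the product is row i of M times all of X,
# i.e. a combination of the rows of X, not M applied to row i of X.
print(np.allclose(R[0, :], M[0, :] @ X))   # True
print(np.allclose(R[0, :], M @ X[0, :]))   # False here
[/code]

    So the rows of the result are combinations of the rows of X, with coefficients taken from the corresponding row of M.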


    Thanks a lot!
     
    Last edited: Sep 12, 2011
  3. Sep 12, 2011 #2

    HallsofIvy

    Staff Emeritus
    Science Advisor

    I don't know what you mean by "no longer Cartesian". The principal axes of A or D are in the direction of the eigenvectors of A. The linear transformation represented by A in a given basis is represented by D in the basis made up of eigenvectors of A.

    Not exactly. The eigenvectors of A, in the original basis, are the columns of the matrix P above. In the new coordinate system, in which the matrix representing A is diagonal, the basis vectors are the unit eigenvectors and so are (1, 0, 0) for eigenvalue 1, (0, 1, 0) for eigenvalue 2, and (0, 0, 1) for eigenvalue 3. Of course, any nonzero scalar multiple of an eigenvector is also an eigenvector, so, yes, (1, 0, 0), (0, 2, 0), and (0, 0, 3) are eigenvectors. But so are (-45, 0, 0), (0, pi, 0), etc.

    Please don't use pronouns without antecedents! What does "it" refer to? Since your original matrix was 3 by 3, you have a linear transformation from a three-dimensional vector space to itself. All such linear transformations are represented by 3 by 3 matrices, and all vectors by columns of 3 entries. That has nothing to do with eigenvectors.

    Again, what do you mean by a "Cartesian system"? A vector space with a given orthonormal basis?

    Yes, that is standard matrix multiplication. Again, it has nothing to do with eigenvectors.

    That makes no sense. Did you mean that the eigenvector corresponding to eigenvalue 1 is given by (x1, y1, z1), the eigenvector corresponding to eigenvalue 2 by (x2, y2, z2), etc.? As I said before, in any basis a vector is represented by a 3-entry column, because you are representing the linear operation as multiplication on the left by the matrix. (If you had represented the linear operation as multiplication on the right, that is, as
    [tex]\begin{bmatrix}x & y & z\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}= \begin{bmatrix}xa_{11}+ ya_{21}+ za_{31} & xa_{12}+ ya_{22}+ za_{32} & xa_{13}+ ya_{23}+ za_{33}\end{bmatrix},[/tex]
    then a vector would be represented by a 3-entry row.) But if you are representing linear transformations by matrices, in no case is a vector also represented by a matrix.

    where the resulting 3 rows are my 3 eigenvectors making up the new basis, except each has changed its magnitude due to the diagonal matrix. Is that all right?

    No, as I said before, it is the columns of P that are the eigenvectors and so the principal axes. The principal axes of a linear transformation are in the direction of its eigenvectors no matter what the basis vectors are. "DP^T = P^TA" doesn't tell you anything new, and it is not true for a general diagonalizing matrix P. What is true is what you wrote before: [itex]P^{-1}AP= D[/itex], whence [itex]AP= PD[/itex]. Now, if you choose the eigenvectors to be orthonormal (which you can always do here, since A is symmetric), then [itex]P^{-1}= P^T[/itex], and the same equation can be rewritten as [itex]P^TA= DP^T[/itex]; but that still is not "[itex]P^TD= AP^T[/itex]".
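    A quick numerical check of these relations, using a made-up symmetric matrix and numpy's eigh (which returns orthonormal eigenvectors as the columns of P):

[code]
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                 # made-up symmetric matrix

w, P = np.linalg.eigh(A)          # columns of P: orthonormal eigenvectors
D = np.diag(w)

print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal, P^-1 = P^T
print(np.allclose(A @ P, P @ D))         # True: AP = PD
print(np.allclose(P.T @ A, D @ P.T))     # True: P^T A = D P^T
[/code]

    The last line is the relation derived in the next post; it is just AP = PD transposed, using the symmetry of A and D.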

     
  4. Sep 12, 2011 #3
    I meant a Cartesian coordinate system spanned by the x, y, z unit basis vectors; or is it supposed to be the i, j, k unit basis vectors?





    Yes, I meant that the eigenvector corresponding to eigenvalue 1 is given by (x1, y1, z1), the eigenvector corresponding to eigenvalue 2 by (x2, y2, z2), and so on.

    I was trying to draw the link between a Cartesian coordinate system (given by the x, y, z basis) versus the new coordinate system (given by the eigenvectors of A as basis).


    OK, let's say matrix A has eigenvectors

    V1 =
    (1)
    (2)
    (3)

    V2 =
    (4)
    (5)
    (6)

    V3 =
    (7)
    (8)
    (9)

    So if, after diagonalizing A, I find that the matrix D is, say,

    (1,0,0)
    (0,2,0)
    (0,0,3)

    then D11, which is 1, is not x = 1 in the Cartesian coordinate system, right? But rather 1 unit in the direction of the corresponding eigenvector V1 above?

    What I am trying to do is this: since

    Ax = D11 x, where x is an eigenvector of A and the diagonal entries of D are the eigenvalues of A,

    so if I represent the eigenvalues in the diagonal matrix form
    (1,0,0)
    (0,2,0)
    (0,0,3),

    I am trying to get Ax = 1x, Ax = 2x, Ax = 3x as my eigenvalue equations, right?

    So what will the eigenvectors look like if I follow the matrix multiplication rule, such that I get back my Ax = D11 x, Ax = D22 x and Ax = D33 x?

    Is it just

    AP = DP, where P is the matrix whose 3 columns are the eigenvectors of A? Does it work like this?

    Or is this idea totally wrong, because I am representing a vector as a matrix?
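    It is not quite AP = DP; writing the three eigenvalue equations A v_i = lambda_i v_i column by column assembles into AP = PD instead. A short sketch of the difference (same kind of made-up symmetric matrix as above, numpy assumed):

[code]
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                 # made-up symmetric matrix

w, P = np.linalg.eigh(A)          # columns of P are eigenvectors of A
D = np.diag(w)

# A v_i = lambda_i v_i for each column v_i of P assembles into AP = PD:
# D on the right scales the *columns* of P by the eigenvalues.
print(np.allclose(A @ P, P @ D))  # True
# D on the left scales the *rows* of P instead, which is not the same.
print(np.allclose(A @ P, D @ P))  # False in general
[/code]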



    Maybe I should ask this:

    if I have a vector (1, 1, 1), how do I decompose it into the new basis specified by the vectors below (a numerical sketch follows the list)?

    V1 =
    (1)
    (2)
    (3)

    V2 =
    (4)
    (5)
    (6)

    V3 =
    (7)
    (8)
    (9)
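    If the basis vectors are independent, the decomposition is just a linear solve: put the basis vectors as the columns of a matrix V, then the coordinates c of v satisfy Vc = v. One caveat about the specific vectors above: (7, 8, 9) = 2(4, 5, 6) - (1, 2, 3), so V1, V2, V3 are linearly dependent and do not actually form a basis of R^3. The sketch below shows the general recipe with a made-up independent basis instead (numpy assumed):

[code]
import numpy as np

# The vectors quoted above, as columns: they have rank 2, so no basis.
V_dep = np.array([[1., 4., 7.],
                  [2., 5., 8.],
                  [3., 6., 9.]])
print(np.linalg.matrix_rank(V_dep))    # 2

# General recipe with a made-up independent basis:
V = np.array([[1., 0., 1.],
              [2., 1., 0.],
              [3., 0., 0.]])           # columns are the basis vectors
v = np.array([1., 1., 1.])
c = np.linalg.solve(V, v)              # solve V c = v
print(np.allclose(c[0]*V[:, 0] + c[1]*V[:, 1] + c[2]*V[:, 2], v))  # True
[/code]

    For an orthonormal basis, such as the eigenvectors of a symmetric matrix returned by eigh, no solve is needed: the coordinates are just the dot products, c = P^T v.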









    If P^-1 A P = D,

    then multiplying by P^-1 on the right of both sides gives

    P^-1 A P P^-1 = D P^-1,

    so P^-1 A = D P^-1,

    which, for orthonormal eigenvectors, is

    P^T A = D P^T.


    But how did you get from P^-1 A P = D to AP = PD?
     
    Last edited: Sep 12, 2011
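    For the record, the step being asked about is just multiplication on the left by P (standard algebra, nothing beyond what is already in the thread):

    [tex]P^{-1}AP = D \;\Longrightarrow\; P\left(P^{-1}AP\right) = PD \;\Longrightarrow\; AP = PD.[/tex]

    Transposing AP = PD and using the symmetry of A and D (with P orthogonal, so P^-1 = P^T) gives back P^T A = D P^T, so the two forms say the same thing.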