
Finding eigenvectors

  1. May 2, 2008 #1
    1. The problem statement, all variables and given/known data

    I have a matrix A = [3 -2; -3 2],
    then I solve det(A - λI) = 0 and find the solutions λ = 0 and λ = 5.

    For the first case λ = 0, I perform A - 0I, which of course is just A = [3 -2; -3 2]. I then reduce to row echelon form and find V1 = 2 and V2 = 3; therefore, for λ = 0 my eigenvector is [2; 3].

    For the second case λ = 5, I perform A - 5I, which gives [-2 -2; -3 -3]. Reducing to row echelon form yields [-6 -6; 0 0][V1; V2] = [0; 0].

    In linear form, the equation is: -6V1 - 6V2 = 0.

    The answer provided is [V1; V2] = [-1; 1]. My question is: for a case where the coefficients are the same, how do you know that the eigenvector will be [-1; 1] rather than [1; -1], given that -6(1) - 6(-1) = 0 also satisfies the equation?
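    For what it's worth, here is a quick numerical sanity check of my working (a minimal sketch, assuming NumPy is available):

    [code]
    import numpy as np

    A = np.array([[ 3.0, -2.0],
                  [-3.0,  2.0]])

    # Candidate eigenvectors for lambda = 5: both signs of the same direction
    for v in (np.array([-1.0, 1.0]), np.array([1.0, -1.0])):
        print(np.allclose(A @ v, 5 * v))   # True for both

    # Candidate eigenvector for lambda = 0
    v0 = np.array([2.0, 3.0])
    print(np.allclose(A @ v0, 0 * v0))     # True
    [/code]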

    Thanks
     
  3. May 2, 2008 #2

    Defennder

    Homework Helper

    An infinite number of eigenvectors exists for every eigenvalue, though they may not all be linearly independent. Both (1, -1) and (-1, 1) are scalar multiples of each other; they differ by a factor of -1 alone. Any nonzero multiple of this eigenvector also satisfies the equation, because you can cancel the constant on both sides of the matrix equation [tex]Ax=\lambda x[/tex].
     
  4. May 2, 2008 #3

    Dick

    Science Advisor
    Homework Helper

    If v is an eigenvector, then c*v is also an eigenvector for ANY nonzero constant c, since A(cv) = cA(v) = c*lambda*v = lambda*(cv). So [-1, 1] and [1, -1] are both eigenvectors. You can choose either one. For the eigenvalue 0 you could also have found the eigenvector to be [-2, -3]. That's also fine.
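    To see this concretely, here is a minimal numerical sketch (assuming NumPy is available). A routine such as numpy.linalg.eig simply picks yet another scaling of the same directions, usually unit length:

    [code]
    import numpy as np

    A = np.array([[ 3.0, -2.0],
                  [-3.0,  2.0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)   # eigenvalues 5 and 0 (order may vary)
    print(vecs)   # columns are unit-length eigenvectors

    # Each column is a scalar multiple of [-1, 1] or [2, 3] and still satisfies Av = lambda*v
    for lam, v in zip(vals, vecs.T):
        print(np.allclose(A @ v, lam * v))   # True
    [/code]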
     
  5. May 2, 2008 #4
    Thanks.
     
  6. May 3, 2008 #5
    I have another question. I have a matrix A = [0 -1 0; -4 0 0; 0 0 2]. I have found the characteristic equation to be (-1)^3(λ+2)(λ-2)^2, and hence my eigenvalues are λ = -2 and λ = 2. In the answers it says the eigenvectors are [1; 2; 0], [1; -2; 0] and [0; 0; 1].

    How can you have three eigenvectors from two eigenvalues? Yes, I realise there are two solutions λ = 2, but surely they provide the same eigenvector?

    Note: For λ = 2 I found my eigenvector to be [1; -2; 0], and hence I am suspicious about the [0; 0; 1] eigenvector.
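    A quick numerical check (a sketch, assuming NumPy is available) does confirm that all three of the given vectors satisfy Av = λv:

    [code]
    import numpy as np

    A = np.array([[ 0.0, -1.0, 0.0],
                  [-4.0,  0.0, 0.0],
                  [ 0.0,  0.0, 2.0]])

    # Answers given: eigenvalue -2 with [1; 2; 0], eigenvalue 2 with [1; -2; 0] and [0; 0; 1]
    pairs = [(-2, [1, 2, 0]), (2, [1, -2, 0]), (2, [0, 0, 1])]
    for lam, v in pairs:
        v = np.array(v, dtype=float)
        print(np.allclose(A @ v, lam * v))   # True, True, True
    [/code]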
     
  7. May 3, 2008 #6

    Defennder

    Homework Helper

    There can be more than one linearly independent eigenvector for each eigenvalue. There's nothing wrong with that.
     
  8. May 3, 2008 #7
    Can you explain how you get the eigenvector [0; 0; 1]?
     
  9. May 3, 2008 #8

    HallsofIvy

    Staff Emeritus
    Science Advisor

    TRY! Exactly like you did before.

    Although I prefer the more fundamental:
    [tex]\left[\begin{array}{ccc}0 & -1 & 0 \\ -4 & 0 & 0\\ 0 & 0 & 2\end{array}\right]\left[\begin{array}{c} x \\ y \\ z\end{array}\right]= \left[\begin{array}{c} 2x \\ 2y \\ 2z \end{array}\right][/tex]
    directly from the definition of "eigenvector".

    That gives three equations: -y = 2x, -4x = 2y, 2z = 2z. The last equation imposes no condition, so it is true for all z. From the first, y = -2x, and then the second becomes -4x = 2(-2x) = -4x, which is automatically satisfied. That is, <x, -2x, z> is an eigenvector for any x and z (not both zero). In particular, if you take x = 1, z = 0, you get <1, -2, 0>, and if you take x = 0, z = 1, you get <0, 0, 1>.
     
  10. May 3, 2008 #9

    tiny-tim

    Science Advisor
    Homework Helper

    Hi t_n_p! :smile:

    No. An n-repeated eigenvalue will have a whole n-dimensional subspace of eigenvectors.

    For example, the nxn identity matrix I_n has the eigenvalue 1 repeated n times, but obviously any nonzero nx1 vector is an eigenvector! :smile:
    How did you get [1; -2; 0]? You'll probably find that the same method gave you [0; 0; 1], and you threw it away for some reason. :smile:
     
  11. May 3, 2008 #10
    HallsofIvy - we don't learn it that way in the notes.

    Tiny tim - using the same method as I did before, obviously I get the same eigenvector if I'm using the same eigenvalue.

    Here's a recap of what I tried.
    -Got the characteristic equation as shown above
    -Subbed in the value of λ=2 into my matrix and setup the following..

    [-2 -1 0; -4 -2 0; 0 0 0][V1; V2; V3] = [0; 0; 0]

    -Then reduced to row echelon form, which gives -4V1 - 2V2 + 0V3 = 0

    This is how I got the eigenvector [1; -2; 0]. Obviously I can see that [0; 0; 1] satisfies the equation (in fact [0; 0; x] for any number x satisfies it), but what I wish to know is how to get it using the same method by which I found the eigenvector [1; -2; 0].

    Thanks
     
  12. May 3, 2008 #11

    HallsofIvy

    Staff Emeritus
    Science Advisor

    What definition of "eigenvector" did you learn?

    Yes, that gives V2 = -2V1. But do you see that V3 can be anything and still satisfy that equation? You can choose V1 and V3 to be anything, and then V2 = -2V1. That's why the "eigenspace" corresponding to [itex]\lambda= 2[/itex] is two dimensional. As I said, you can choose V1 and V3 to be anything you like. "1" and "0" are particularly easy, and if you choose one to be 0 and the other to be 1, you are sure to get independent vectors. If you take V1 = 1 and V3 = 0 you get <1, -2, 0>, and if you take V1 = 0, V3 = 1, you get <0, 0, 1>.
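    If you want to see the same two vectors fall out mechanically, a null space computation does exactly this free-variable bookkeeping. A minimal sketch, assuming SymPy is available:

    [code]
    from sympy import Matrix, eye

    A = Matrix([[ 0, -1, 0],
                [-4,  0, 0],
                [ 0,  0, 2]])

    # Null space of (A - 2I): one basis vector per free variable
    basis = (A - 2 * eye(3)).nullspace()
    for v in basis:
        print(v.T)
    # Prints a multiple of [1, -2, 0] (here [-1/2, 1, 0]) and [0, 0, 1];
    # exact scalings may differ, but the span is the same two-dimensional eigenspace.
    [/code]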
     
    Last edited: May 3, 2008
  13. May 3, 2008 #12

    HallsofIvy

    Staff Emeritus
    Science Advisor

    tiny-tim, that's not necessarily true. For example, the matrix
    [tex]\left[\begin{array}{cc}2 & 1 \\ 0 & 2\end{array}\right][/tex]
    has 2 as a "double eigenvalue" but all eigenvectors are of the form <x, 0>: only one dimensional. That's why it cannot be "diagonalized".
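    A minimal sketch (assuming SymPy is available) makes the contrast concrete: the eigenvalue 2 is repeated, yet the null space of B - 2I is only one dimensional, so there is no second independent eigenvector:

    [code]
    from sympy import Matrix, eye

    B = Matrix([[2, 1],
                [0, 2]])

    # Only one basis vector comes back, so the eigenspace for the double eigenvalue
    # is one dimensional and B cannot be diagonalized.
    print((B - 2 * eye(2)).nullspace())   # [Matrix([[1], [0]])]
    [/code]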
     
  14. May 3, 2008 #13
    Yes, I see what you mean. My next question is whether there is a more "mathematical" basis for obtaining the eigenvector <0, 0, x>, where x can be any number. I can see "by eye" that <0, 0, x> satisfies the equation; I'm just wondering whether that is "logical" reasoning rather than "mathematical" reasoning. I hope you understand what I'm trying to say :tongue2:
     
  15. May 4, 2008 #14

    HallsofIvy

    Staff Emeritus
    Science Advisor

    I think what you are trying to say is exactly what I said before:

    The definition of "eigenvector" is: v is an eigenvector of A, corresponding to the eigenvalue [itex]\lambda[/itex], if and only if [itex]Av= \lambda v[/itex].

    Using your example that leads to
    [tex]\left[\begin{array}{ccc}0 & -1 & 0 \\ -4 & 0 & 0\\ 0 & 0 & 2\end{array}\right]\left[\begin{array}{c} x \\ y \\ z\end{array}\right]= \left[\begin{array}{c} 2x \\ 2y \\ 2z \end{array}\right][/tex]
    and, as I said before, the equations -y = 2x, -4x = 2y, 2z = 2z. The first two are satisfied as long as y = -2x, the third for all z. Any eigenvector is of the form <x, -2x, z>. You are free to choose any values for x and z. You can "separate" the two by taking x and z, in turn, to be 0: <0, 0, z> and <x, -2x, 0>. Even better, take z = 1 in the first and x = 1 in the second: <0, 0, 1> and <1, -2, 0>.


    You should have done that sort of thing before. For example: find a basis for the vector space of <x, y, z> satisfying x - y + 2z = 0. I can solve that quickly for x as a function of y and z: x = y - 2z. Taking y = 1, z = 0, x = 1 - 2(0) = 1, so <1, 1, 0> is in that space. Taking y = 0, z = 1, x = 0 - 2(1) = -2, so <-2, 0, 1> is in that space, and by using 0 and 1 I have guaranteed that they are independent: a basis is <-2, 0, 1> and <1, 1, 0>.
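    A quick check of that basis (a sketch, assuming NumPy is available):

    [code]
    import numpy as np

    basis = np.array([[ 1.0, 1.0, 0.0],
                      [-2.0, 0.0, 1.0]])

    # Both vectors satisfy x - y + 2z = 0 ...
    print(basis @ np.array([1.0, -1.0, 2.0]))   # [0. 0.]
    # ... and they are linearly independent (rank 2)
    print(np.linalg.matrix_rank(basis))         # 2
    [/code]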
     