Eigenvalue with multiplicity k resulting in k orthogonal eigenvectors?

  1. Aug 22, 2009 #1
    I am somewhat confused about this property of an eigenvalue when A is a symmetric matrix; I will state it exactly as it was presented to me.

    "Properties of the eigenvalue when A is symmetric.
    If an eigenvalue [tex]\lambda[/tex] has multiplicity k, there will be k (repeated k times),
    orthogonal eigenvectors corresponding to this root."

    So I decided to test this property with a few matrices, and I encountered one particular matrix that may provide a counterexample.


    ()^T == column vector

    A symmetric matrix: A = (5,4,2),(4,5,2),(2,2,2), with corresponding eigenvalue-eigenvector pairs:
    (-1,1,0)^T, (-1/2,0,1)^T correspond to eigenvalue 1
    (2,2,1)^T corresponds to eigenvalue 10

    eigenvalue 1 has multiplicity 2; the eigenvectors corresponding to 1 are (-1,1,0)^T and (-1/2,0,1)^T, and their dot product is (-1,1,0)^T (dot) (-1/2,0,1)^T = 1/2, not 0. Therefore the eigenvectors corresponding to this root are not orthogonal to each other; they are, however, linearly independent.
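    As a quick sanity check, here is a small NumPy sketch (my own, not from the notes) that verifies these eigenpairs and the dot product above:

    [code]
    import numpy as np

    A = np.array([[5., 4., 2.],
                  [4., 5., 2.],
                  [2., 2., 2.]])

    v1 = np.array([-1.0, 1.0, 0.0])   # claimed eigenvector for eigenvalue 1
    v2 = np.array([-0.5, 0.0, 1.0])   # claimed eigenvector for eigenvalue 1
    v3 = np.array([2.0, 2.0, 1.0])    # claimed eigenvector for eigenvalue 10

    print(np.allclose(A @ v1, 1 * v1))     # True
    print(np.allclose(A @ v2, 1 * v2))     # True
    print(np.allclose(A @ v3, 10 * v3))    # True
    print(np.dot(v1, v2))                  # 0.5 -> not orthogonal to each other
    print(np.dot(v1, v3), np.dot(v2, v3))  # 0.0 0.0 -> orthogonal to the eigenvalue-10 eigenvector
    [/code]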

    So I asked the author of these notes whether I had perhaps misinterpreted this property. Is the correct interpretation perhaps:

    If an eigenvalue has multiplicity k, there will be k (repeated k times), eigenvectors corresponding to this root that are orthogonal to eigenvectors corresponding to different roots?

    and the author replied

    "Yes, you are right that the two eigenvectors corresponding root one are not orthogonal. But the property says there ARE orthogonal ones. So you can keep looking for orthogonal ones.

    How about the pair of (1,-1,0) and (1,1,-4) instead of the pair of (1,-1,0) and (-1/2,0,1)?"

    So just to restate and compare

    "If an eigenvalue [tex]\lambda[/tex] has multiplicity k, there will be k (repeated k times),
    orthogonal eigenvectors corresponding to this root."

    but the author says "that the two eigenvectors corresponding root one are not orthogonal"
    and then says that the property says "there ARE orthogonal ones. So you can keep looking for orthogonal ones."

    Can someone help me make sense of this?

    - I found the eigenvectors corresponding to the eigenvalue (root),
    - there were two, so the eigenvalue has multiplicity 2,
    - however, the 2 eigenvectors corresponding to this eigenvalue (root) were not orthogonal: v1 (dot) v2 != 0.
    But I'm supposed to keep looking for orthogonal ones?

    The property says:

    there will be k (repeated k times),
    orthogonal eigenvectors corresponding to this root.


    The author says:

    that the two eigenvectors corresponding root one are not orthogonal

    (The property says) != (The author says)

    but I'm supposed to keep looking for orthogonal ones? I'm supposed to somehow derive the pair (1,-1,0) and (1,1,-4) instead of the pair (1,-1,0) and (-1/2,0,1)?

    Does this make sense to anyone? If so, please help me make sense of it, and enlighten me as to how I can derive the pair (1,-1,0) and (1,1,-4). Thanks in advance.
     
  3. Aug 22, 2009 #2

    HallsofIvy


    I'm not sure what part is causing you trouble. It is true that, for a symmetric matrix, eigenvectors corresponding to different eigenvalues will be orthogonal. But you seem to be under the impression that the two eigenvectors you found are the only eigenvectors. You quote the author as saying "Yes, you are right that THE two eigenvectors corresponding root one are not orthogonal" (emphasis added). I doubt he/she said exactly that. More likely he/she meant that those two eigenvectors that you found were not orthogonal. It is just wrong to talk about "the" two eigenvectors: there are always infinitely many eigenvectors for any eigenvalue.

    It is also true that the eigenvectors corresponding to a single eigenvalue of multiplicity k (again for a symmetric matrix) span a subspace of dimension k. Of course, you can always construct an orthonormal basis for the subspace so there will be k orthogonal eigenvectors.
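    A minimal illustration of this (my own NumPy sketch, using the matrix from the first post): numpy.linalg.eigh is designed for symmetric matrices and returns a full orthonormal set of eigenvectors, including k of them for an eigenvalue of multiplicity k.

    [code]
    import numpy as np

    A = np.array([[5., 4., 2.],
                  [4., 5., 2.],
                  [2., 2., 2.]])

    vals, vecs = np.linalg.eigh(A)      # eigh assumes a symmetric (Hermitian) matrix
    print(vals)                         # approximately [ 1.  1. 10.]
    print(np.round(vecs.T @ vecs, 10))  # identity matrix: columns are mutually orthogonal unit vectors
    [/code]

    The two columns returned for eigenvalue 1 are orthogonal to each other, which is exactly what the quoted property promises.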

    But any such subspace will also contain eigenvectors that are NOT orthogonal. In your example, any eigenvector corresponding to eigenvalue 1 must satisfy 2x+ 2y+ z= 0, or z= -2x- 2y. The easy way to get eigenvectors that span the "eigenspace" is to take x= 1, y= 0 to get z= -2, giving an eigenvector <1, 0, -2>, and then take x= 0, y= 1 to get z= -2 again, giving another eigenvector <0, 1, -2>. Those are not orthogonal since their dot product is 4. But if, instead of x= 0, y= 1, you take x= -4, y= 5, you get z= 8- 10= -2 and so <-4, 5, -2> is an eigenvector. <1, 0, -2>.<-4, 5, -2>= 0. Those eigenvectors are orthogonal.
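    Here is that construction checked numerically (again just a NumPy sketch, assuming the matrix A from the first post):

    [code]
    import numpy as np

    A = np.array([[5., 4., 2.],
                  [4., 5., 2.],
                  [2., 2., 2.]])

    # Vectors satisfying z = -2x - 2y, i.e. lying in the eigenspace for eigenvalue 1:
    u = np.array([1., 0., -2.])    # x = 1,  y = 0
    v = np.array([0., 1., -2.])    # x = 0,  y = 1
    w = np.array([-4., 5., -2.])   # x = -4, y = 5

    print(np.allclose(A @ u, u), np.allclose(A @ v, v), np.allclose(A @ w, w))  # True True True
    print(np.dot(u, v))   # 4.0 -> this pair is not orthogonal
    print(np.dot(u, w))   # 0.0 -> this pair is orthogonal
    [/code]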
     
    Last edited: Aug 22, 2009
  4. Aug 22, 2009 #3

    Let me add a disclaimer to my post:
    I have not taken a linear algebra class; however, the course I will be taking this fall 2009 semester uses a lot of linear algebra, and the main linear algebra techniques are covered in a preliminary pdf. I think I have done a good job of self-learning many of the topics in that pdf. However, in the resources I've been using there is nothing regarding properties of the eigenvalues when a matrix is symmetric, or properties of eigenvalues and eigenvectors in general, other than how to derive them from a matrix and some other basic information.

    HallsofIvy,


    For not being sure what part is causing me trouble, you really clarified many things for me. Unfortunately, I did quote the author word for word, and he did say "the two eigenvectors corresponding root one." This was probably my main source of confusion: I was assuming there were only two eigenvectors, which I now know is wrong. I had never encountered any information that explicitly stated "there are always infinitely many eigenvectors for any eigenvalue," and the author's reply didn't shed much light on this either. I am familiar with many of the concepts you talk about in the rest of your post; I'll comment on the rest of it after I've gone through it more thoroughly and gotten a handle on the infinite-eigenvectors concept.
     
  5. Aug 22, 2009 #4
    I went back to the primary book I'm using and saw that I was dealing with a homogeneous system of linear equations: because all the constant terms are zero, there are infinitely many solutions. Then I read further into deriving general solutions; in this case:

    In terms of row vectors, we can express the general solution as (-x₂-(1/2)x₃, x₂, x₃), and separating the variables in the general solution gives
    (-x₂-(1/2)x₃, x₂, x₃) = x₂(-1,1,0) + x₃(-1/2,0,1). However, to make the vectors easier to work with, we'll turn the fraction into a whole number by writing the general solution in terms of x₁ and x₂: from x₁ = -x₂-(1/2)x₃ we get -(1/2)x₃ = x₁+x₂, i.e. x₃ = -2x₁-2x₂, giving the general solution (x₁, x₂, -2x₁-2x₂). To get a basis, separate the variables in the general solution:
    (x₁, x₂, -2x₁-2x₂) = x₁(1,0,-2) + x₂(0,1,-2)
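    A quick numerical check of this parametrization (a NumPy sketch using the matrix A from my first post; any such combination really does solve (A - I)v = 0):

    [code]
    import numpy as np

    A = np.array([[5., 4., 2.],
                  [4., 5., 2.],
                  [2., 2., 2.]])

    b1 = np.array([1., 0., -2.])
    b2 = np.array([0., 1., -2.])

    # every combination x1*b1 + x2*b2 lies in the eigenspace for eigenvalue 1
    for x1, x2 in [(1., 0.), (0., 1.), (3., -2.), (-0.5, 1.)]:
        v = x1 * b1 + x2 * b2
        print(np.allclose(A @ v, v))   # True each time
    [/code]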

    For two vectors to be orthogonal, they must have the property that their dot product is equal to zero. For a subspace W of dimension 2 in R³,

    where the basis for W is the set of vectors {(a1, a2, a3), (b1, b2, b3)},
    the basis vectors are orthogonal when a1*b1 + a2*b2 + a3*b3 = 0

    in our particular case the basis for the subspace is the set of vectors {(1,0,-2), (0,1,-2)}
    with the corresponding general solution (x₁,x₂,-2x₁-2x₂)

    to get two vectors orthogonal to each other from the same basis, solve for one of the variables in the equation x₁+x₂-4x₁x₂=0

    x₁= ((x₂)/(4x₂-1)) if x₂≠(1/4)

    plug values for x₁ and x₂ that satisfy x₁ = x₂/(4x₂-1) (with x₂ ≠ 1/4) into the general solution (x₁,x₂,-2x₁-2x₂), then separate the variables in the general solution, and you've got yourself two orthogonal vectors.
     
  6. Aug 24, 2009 #5
    I think I may have some inaccuracies in the above post.

    Nevertheless, I've cleared up my confusion about these orthogonal eigenvectors and how an orthogonal pair still spans the same eigenspace. I saw how you plugged in values for x and y in order to cancel out the 4. However, I did some more research and found a general way of obtaining an orthogonal set from a set of linearly independent vectors, and then producing an orthonormal set from it: the Gram-Schmidt orthogonalization process.
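    Here is a small sketch of Gram-Schmidt applied to the original pair (my own NumPy code, not from the course notes); it recovers an orthogonal partner that is just a multiple of the (1,1,-4) the author suggested:

    [code]
    import numpy as np

    def gram_schmidt(vectors):
        """Classical Gram-Schmidt: turn linearly independent vectors into orthogonal ones."""
        ortho = []
        for v in vectors:
            u = np.array(v, dtype=float)
            for w in ortho:
                u = u - (np.dot(v, w) / np.dot(w, w)) * w   # remove the component along w
            ortho.append(u)
        return ortho

    v1 = np.array([-1., 1., 0.])
    v2 = np.array([-0.5, 0., 1.])

    u1, u2 = gram_schmidt([v1, v2])
    print(u1)              # [-1.  1.  0.]
    print(u2)              # [-0.25 -0.25  1.  ]  (a multiple of (1, 1, -4))
    print(np.dot(u1, u2))  # 0.0
    [/code]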
     