
Linear algebra proof (surprise!)

  1. Jun 18, 2009 #1
    1. The problem statement, all variables and given/known data
    Prove that if eigenvectors v1, v2, ..., vn are such that for any eigenvalue c
    of A, the subset of all these eigenvectors belonging to c is linearly independent,
    then the vectors v1, v2, ..., vn are linearly independent.


    2. Relevant equations



    3. The attempt at a solution
    One question: how does the fact that a subset is linearly independent
    prove that the whole set is linearly independent? For example:
    if {v1, ..., vi} is linearly independent with i < n but {v1, ..., vn} is not,
    then the entire set is not linearly independent, even though the
    subset is.
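    To make that worry concrete, here is a small numerical sketch (the vectors are arbitrary, chosen only for illustration): a proper subset can be linearly independent while the full set is not, so independence of some subset alone is not enough; the per-eigenvalue hypothesis in the problem is what has to do the extra work.

    ```python
    import numpy as np

    # v1 and v2 are linearly independent, but v3 = v1 + v2 makes the full set dependent.
    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])
    v3 = v1 + v2

    print(np.linalg.matrix_rank(np.column_stack([v1, v2])))      # 2: {v1, v2} is independent
    print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2 < 3: {v1, v2, v3} is dependent
    ```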
     
  3. Jun 18, 2009 #2

    Office_Shredder


    As an example to start you off:

    If vn was in the linear span of v1 through vn-1, and c was an eigenvalue for each vi with i < n, then you can prove fairly easily that c is an eigenvalue for vn too.
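    A quick numerical sketch of that hint (the diagonal matrix and vectors below are just a toy example, not taken from the problem): any vector in the span of eigenvectors sharing the eigenvalue c is again an eigenvector for c, so a vn that depended on v1 through vn-1 would be forced to have eigenvalue c as well.

    ```python
    import numpy as np

    A = np.diag([2.0, 2.0, 3.0])        # c = 2 has a 2-dimensional eigenspace
    v1 = np.array([1.0, 0.0, 0.0])      # A v1 = 2 v1
    v2 = np.array([0.0, 1.0, 0.0])      # A v2 = 2 v2

    vn = 3 * v1 - 5 * v2                # some nonzero vector in span{v1, v2}
    print(np.allclose(A @ vn, 2 * vn))  # True: vn is itself an eigenvector with eigenvalue 2
    ```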
     
  4. Jun 18, 2009 #3

    vn = -c*(a1*v1 + ... + an-1*vn-1), so taking some linear combination from <v1, ..., vn-1>
    we have c*(a1*v1 + ... + an-1*vn-1). But since c*(a1*v1 + ... + an-1*vn-1) - c*(a1*v1 + ... + an-1*vn-1)
    (or c*(a1*v1 + ... + an-1*vn-1) - vn) = 0,
    and we can't have c*((a1*v1 + ... + an-1*vn-1) - (a1*v1 + ... + an-1*vn-1)) = 0 since 0 cannot
    be an eigenvector, (c - c)*(a1*v1 + ... + an-1*vn-1) = 0*v1 + ... + 0*vn-1 = 0, so v1, ..., vn is independent.
     
  5. Jun 18, 2009 #4

    Dick


    Why are you even bothering to write this junk? You know that's not a 'proof', right? There is nothing in there that even resembles being a proof. If you can give me a reason why that is NOT a proof, I might give you a hint. You really have to start learning how to recognize when you are writing something that is void of content. Sorry to be harsh, but I'm getting tired of this.
     
  6. Jun 18, 2009 #5
    It isn't a proof because it's not clear? It still doesn't make sense how a subset being
    linearly independent guarantees that the set containing it is linearly independent, if my proof
    isn't right.
     
  7. Jun 18, 2009 #6

    Dick


    That's not good enough. Your first line doesn't even have a "since" before it or a "then" after it. A proof runs like: if A then B, if B then C, etc. And at the end, if Y then Z, where Z is what you want to prove. Your 'proof' shares none of that structure. It's a random fusillade of symbols. It is not a proof. It's not even close. You talk about eigenvectors and you never even mentioned the operator A or the eigenvalues. How is that possible?
     
  8. Jun 18, 2009 #7
    Actually, I didn't really say how I got vn = -c*(a1*v1 + ... + an-1*vn-1).
    What I should've done was say how vn is in the span, but adding it to the
    basis to get span{v1, ..., vn} makes this set not linearly independent, so
    setting c*(a1*v1 + ... + an-1*vn-1) + vn = 0 gives vn = -c*(a1*v1 + ... + an-1*vn-1).
    I guess that's one mistake: I wasn't explicit enough.

    edit: You posted #6 before I posted #7, so I'll consider your explanation and I'll
    rewrite the proof.
     
  9. Jun 18, 2009 #8

    Dick


    I didn't "explain" anything. I haven't even given you a hint yet. I want you to understand that what you are writing is garbage and give me a good reason why before I give a hint. You have to start learning to introspect, i.e. read your own writing.
     
  10. Jun 18, 2009 #9

    Dick


    Ok, I'll bite. So how is vn in the span? Of what? Why?
     
  11. Jun 18, 2009 #10

    Define a basis of the space V as <v1, ..., vn-1> with dim V = n-1
    and with Avi = cvi (for 1 <= i <= n-1, with A being the operator).
    The set v1, ..., vn-1 is linearly independent, by definition of a basis.
    Consider span{v1, ..., vn} and let vn be a vector
    within the span of V, so Avn = cvn just as Avi = cvi. Since
    <v1, ..., vn-1> is contained in {v1, ..., vn}, the basis is a subset of {v1, ..., vn}. (OK, I defined the variables here.)
     
    Last edited: Jun 18, 2009
  12. Jun 18, 2009 #11

    Dick


    You are making some progress. You did manage to actually define what you are talking about. It is true that if <v1, ..., vn-1> are eigenvectors with eigenvalue c, then any linear combination of them also has eigenvalue c. The proof could use a little work in terms of being explicit, but that's still OK. And after reading stuff you've posted before, I'm not going to diminish that accomplishment. Are you supposed to assume all of the basis vectors are eigenvectors? If so, write c1*v1 + ... + cn*vn = 0 and operate on both sides of the equation with (A - cI), where c is an eigenvalue. What happens?
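    Here is a small numerical sketch of what operating with (A - cI) does to such a combination (the diagonal matrix, vectors, and coefficients are toy choices, assumed only for illustration):

    ```python
    import numpy as np

    A = np.diag([2.0, 2.0, 5.0])            # eigenvalues 2 (twice) and 5
    v1, v2, v3 = np.eye(3)                  # v1, v2 have eigenvalue 2; v3 has eigenvalue 5
    c1, c2, c3 = 4.0, -1.0, 7.0             # coefficients of the combination

    combo = c1 * v1 + c2 * v2 + c3 * v3
    result = (A - 2 * np.eye(3)) @ combo    # apply (A - cI) with c = 2

    # The eigenvalue-2 terms are wiped out; only (5 - 2) * c3 * v3 survives.
    print(np.allclose(result, (5 - 2) * c3 * v3))  # True
    ```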
     
  13. Jun 18, 2009 #12
    That's good, Rome wasn't built in a day, but at least something is being built :biggrin:

    We have a linear combination from the span {v1, ..., vn} as c1*v1 + ... + cn*vn.
    Applying A - cI to this combination gives A(c1*v1 + ... + cn*vn) - c(c1*v1 + ... + cn*vn) = 0,
    or A(c1*v1 + ... + cn*vn) = c(c1*v1 + ... + cn*vn). Is this what I should've done?
     
    Last edited: Jun 19, 2009
  14. Jun 19, 2009 #13

    Dick


    I'm going to give you the idea in words. Start by writing 0 = a1*v1 + ... + an*vn. You want to prove all of the ai's must be zero. If ALL of the v's corresponded to the SAME eigenvalue, then you would be done, because the problem says to assume that the subset of vectors corresponding to a single eigenvalue is linearly independent; and if {v1, ..., vn} were linearly independent, you know the ai's must be zero. Following this so far? The problem is that the vi's correspond to several DIFFERENT eigenvalues. Hmm, you should say, is there some way to eliminate all of the eigenvectors corresponding to all but one eigenvalue? Then you would have the much simpler case. Now how can you use (A - cI)v = 0 if v is an eigenvector with eigenvalue c? In words, (A - cI) will wipe out any terms in the sum corresponding to the eigenvalue c.
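    A toy numerical sketch of that idea (the matrix, eigenvalues, and coefficients below are assumptions chosen only for illustration): multiplying by (A - cjI) for every OTHER eigenvalue cj leaves only the terms belonging to the one remaining eigenvalue, rescaled by a nonzero constant.

    ```python
    import numpy as np

    A = np.diag([2.0, 2.0, 5.0, 7.0])    # eigenvalues 2 (twice), 5 and 7
    v1, v2, v3, v4 = np.eye(4)           # v1, v2 <-> 2;  v3 <-> 5;  v4 <-> 7
    a1, a2, a3, a4 = 3.0, -2.0, 6.0, 1.0

    combo = a1*v1 + a2*v2 + a3*v3 + a4*v4
    I = np.eye(4)
    filtered = (A - 5*I) @ (A - 7*I) @ combo   # wipe out the eigenvalue-5 and eigenvalue-7 terms

    # Only the eigenvalue-2 part survives, rescaled by (2 - 5)*(2 - 7), which is nonzero.
    expected = (2 - 5) * (2 - 7) * (a1*v1 + a2*v2)
    print(np.allclose(filtered, expected))     # True
    ```

    So if a1*v1 + ... + an*vn = 0, applying such a product of factors shows that the part of the sum belonging to the one remaining eigenvalue must itself be 0; the assumed independence of that per-eigenvalue subset then forces its coefficients to vanish, and repeating this for each eigenvalue gives all ai = 0.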
     
    Last edited: Jun 19, 2009