
Vector independence proof question

  1. Mar 4, 2009 #1
    Prove that vectors [tex]v_1,\dots,v_n[/tex] in a vector space V over a field F
    are linearly dependent if and only if there is an index [tex]1\le i\le n[/tex]
    such that [tex]v_i[/tex] is a linear combination of the vectors preceding it.

    I have a proof but I can't fully understand it:
    Suppose [tex]v_i[/tex] is a linear combination of the previous vectors.

    We move [tex]v_i[/tex] to the other side.

    Then they say that
    so it's linearly dependent.

    Then they pick an index:
    not all the coefficients are 0, and
    [tex]i_0=\max\{i \mid a_i\neq 0\}[/tex] is the largest index for which [tex]a_i[/tex] differs from 0.

    But that's true only for [tex]i_0\ge 2[/tex].

    So we get the expression

    so there is a linear dependence, and we have proved it.

    The lecturer was in a hurry.
    Can you fill the gaps and make sense of it?
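One way to fill the gaps: below is a reconstruction of the sketched argument written out in full. The coefficient names [tex]a_i[/tex] follow the post; the step-by-step wording is my reading of the lecturer's outline, not a verbatim transcript.

```latex
% ($\Leftarrow$) Suppose $v_i$ is a linear combination of the previous vectors:
%   $v_i = a_1 v_1 + \cdots + a_{i-1} v_{i-1}$.
% Moving $v_i$ to the other side gives
\[
  a_1 v_1 + \cdots + a_{i-1} v_{i-1} + (-1)\,v_i + 0\,v_{i+1} + \cdots + 0\,v_n = 0 ,
\]
% a dependence relation in which the coefficient of $v_i$ is $-1 \neq 0$,
% so $v_1,\dots,v_n$ are linearly dependent.

% ($\Rightarrow$) Suppose $a_1 v_1 + \cdots + a_n v_n = 0$ with not all $a_i = 0$.
% Pick the largest index whose coefficient is nonzero:
\[
  i_0 = \max\{\, i \mid a_i \neq 0 \,\}.
\]
% Every term past $i_0$ has coefficient $0$, so
% $a_1 v_1 + \cdots + a_{i_0} v_{i_0} = 0$, and since $a_{i_0} \neq 0$
% we may divide by it:
\[
  v_{i_0} = -\frac{a_1}{a_{i_0}}\,v_1 - \cdots - \frac{a_{i_0-1}}{a_{i_0}}\,v_{i_0-1}.
\]
% So $v_{i_0}$ is a linear combination of the vectors preceding it.
% (If $i_0 = 1$ the relation reads $a_1 v_1 = 0$ with $a_1 \neq 0$, forcing
% $v_1 = 0$; this edge case is the "$i_0 \ge 2$" caveat in the post.)
```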
  3. Mar 4, 2009 #2
    What does it mean for a set of vectors to be linearly dependent?
  4. Mar 4, 2009 #3
    It means that no vector in the set can be written as a combination
    of the other vectors.
  5. Mar 4, 2009 #4


    Staff: Mentor

    That's your own definition. What is the exact definition? That is, a set of vectors v1, v2, v3, ... , vn in a vector space over a field F is linearly dependent iff _____________. You fill in the blank.
  6. Mar 4, 2009 #5
    Their determinant differs from zero?
    Their row reduction doesn't give us a row of zeros?
    Their dim(Ker) = 0?

    That's the only option I can think of.
  7. Mar 4, 2009 #6


    Staff: Mentor

    No, these are just vectors, not a matrix. Even if you created a matrix by entering these vectors as columns, there is no guarantee that the matrix would be square. A matrix has to be square in order for its determinant to be defined.
    As above, these are just vectors.
    Still no matrix
    How about the definition of linear dependence? You have a textbook, right? It has the definition.
  8. Mar 4, 2009 #7
    Why do I need the definition?
  9. Mar 5, 2009 #8


    Staff: Mentor

    How in the world are you going to prove a statement like this:
    if you don't know what linear dependence means?
  10. Mar 5, 2009 #9
    I gave almost the complete proof.
    Do you get the idea?
  11. Mar 5, 2009 #10


    Staff: Mentor

    You're the one who doesn't understand the proof. How can you expect to understand a proof that involves linear independence/linear dependence if you don't know what these terms mean?

    I'm not asking you for the definition because I need to know it -- you need to know it.
  12. Mar 5, 2009 #11
    I know what an independent set of vectors means:
    "a set of vectors that, in a linear combination, can represent every vector in a given vector space or free module, and such that no element of the set can be represented as a linear combination of the others. In other words, a basis is a linearly independent spanning set."

    But it's not helping me understand this proof.
  13. Mar 5, 2009 #12


    Science Advisor

    There's part of your problem. That is NOT a definition of "independent", it is a definition of 'basis". What is the definition of "independent vectors"?
  14. Mar 5, 2009 #13
    That there is no linear combination of the three vectors that will add to zero unless the coefficients multiplying the three vectors (not their internal components) are individually zero.

    That's the only way.

    How do I use it?
  15. Mar 5, 2009 #14


    Staff: Mentor

    OK, this is essentially the definition of linear independence for three vectors. A bit more generally, a set of vectors {v1, v2, v3, ... , vn} in a vector space over a field F is linearly independent iff the only solution for the equation c1*v1 + c2*v2 + ... + cn*vn = 0 is c1 = c2 = ... = cn = 0.

    For the same set of vectors to be linearly dependent, the equation c1*v1 + c2*v2 + ... + cn*vn = 0 has a solution where at least one of the ci's is nonzero.

    Now, go back to your original post in this thread and see how this idea is being used.
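The definition in the post above can be checked numerically. The following sketch uses vectors of my own choosing (they do not appear in the thread): the set is linearly dependent exactly when the matrix whose columns are the vectors has rank below the number of vectors, i.e. when c1*v1 + c2*v2 + c3*v3 = 0 has a nonzero solution.

```python
import numpy as np

# Hypothetical example in R^3: v3 = v1 + 2*v2, so {v1, v2, v3} is dependent.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2

# Matrix with the vectors as columns.
A = np.column_stack([v1, v2, v3])

# Dependent iff rank < number of vectors (here 3).
print(np.linalg.matrix_rank(A))  # 2, so the set is dependent

# A nonzero solution of c1*v1 + c2*v2 + c3*v3 = 0: c = (1, 2, -1),
# since v1 + 2*v2 - v3 = 0.
c = np.array([1.0, 2.0, -1.0])
print(np.allclose(A @ c, 0))     # True
```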
  16. Mar 5, 2009 #15
    OK, I have this expression.
    Not all the coefficients can be 0:
    [tex]v_i[/tex]'s is not (it's -1).

    Why do they pick the maximal index for which the coefficient differs from 0?
  17. Mar 6, 2009 #16
    What do I do next?
  18. Mar 6, 2009 #17


    Staff: Mentor

    Whoever wrote what you're reading is using the definition of linear dependence. The set {v1, v2, v3, ..., vn} is assumed to be linearly dependent, which means that the equation a1*v1 + a2*v2 + ... + an*vn = 0 has a solution where at least one ai is not 0. The definition of linear dependence guarantees that at least one such number is not zero, but it doesn't say which one. There might be just one constant ai or a bunch of them. The max() part says to pick the constant with the highest such index. Let's call it ak instead of what he uses, which is ai0. Then he moves that term to the other side of the equation to get
    -ak*vk = a1*v1 + a2*v2 + ... + a[k-1]*v[k-1] + a[k+1]*v[k+1] + ... + an*vn

    Since ak is not zero, you can divide both sides of the equation by it, thereby showing that vk is a linear combination of the other vectors.
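The "pick the largest nonzero coefficient and divide" step described above can be written out as a small routine. This is an illustrative sketch with made-up data, not code from the thread; the function name and the example relation 2*v1 + 3*v2 - v3 = 0 are my own.

```python
import numpy as np

def solve_for_last_nonzero(vectors, coeffs):
    """Given a dependence a1*v1 + ... + an*vn = 0 with some ai != 0,
    pick k = max{i : ai != 0} (the i_0 of the thread) and return (k, b)
    such that vectors[k] equals the sum over j != k of b[j] * vectors[j]."""
    k = max(i for i, a in enumerate(coeffs) if a != 0)
    # Move ak*vk to the other side; ak != 0, so divide through by ak.
    b = [-a / coeffs[k] for a in coeffs]
    b[k] = 0.0
    return k, b

# Hypothetical data in R^2: 2*v1 + 3*v2 - v3 = 0.
v = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 3.0])]
a = [2.0, 3.0, -1.0]

k, b = solve_for_last_nonzero(v, a)
print(k)                          # 2: v3 carries the largest nonzero index
recon = sum(c * w for c, w in zip(b, v))
print(np.allclose(recon, v[k]))   # True: v3 = 2*v1 + 3*v2
```

Note that because k is the *maximal* such index, every vector appearing with a nonzero weight in b has a smaller index, which is exactly what the theorem asks for.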
  19. Mar 6, 2009 #18
    Thanks, I got it.