
On the definition of linear independence and dependence

  1. Oct 25, 2009 #1
    This is not really a homework problem but it relates to a number of problems, so I thought this would be the most appropriate place to post it.

    1. The problem statement, all variables and given/known data

    The basic question is about how we define linear dependence in a vector space. For a vector space over some field [tex]\mathbb{F}[/tex], we know that the vectors [tex]v_1,v_2,...,v_n[/tex] are linearly independent if the only solution to [tex] a_1v_1+a_2v_2+...+a_nv_n= 0[/tex] is [tex]a_1=a_2=...=a_n=0[/tex]. And if a list of vectors is not linearly independent, it is linearly dependent. This means that if we can find constants [tex]b_1,..., b_n[/tex] that are NOT ALL ZERO such that [tex]b_1v_1+b_2v_2+...+b_nv_n=0[/tex], these vectors are linearly dependent.

    Now when we think about the polynomial space (let's say over some field) [tex]P_m(\mathbb{F})[/tex], we need to reconsider these definitions, right? I have never seen a different definition of linear independence for the polynomial space, but I'm assuming it would be like this:
    The vectors [tex]p_1(z),...,p_m(z)[/tex] are linearly independent if the only solution to [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex] FOR ALL z is [tex]c_1=...=c_m=0[/tex].

    And linear dependence would be this: If we can find constants [tex]c_1, ..., c_m[/tex] that are not all zero FOR SOME z where [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex], these vectors are linearly dependent.

    Notice that if linear dependence on the polynomial space is defined this way, it is actually the exact negation of linear independence in the polynomial space. But if linear dependence is defined in a similar way but FOR ALL z, then it is NOT the exact negation of linear independence in the polynomial space. And then we start encountering vectors that are neither linearly independent nor linearly dependent.
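    To make the worry in the question concrete (my own illustration, not from the thread): under the "FOR SOME z" reading, even the obviously independent pair [tex]p_1(z)=1,\ p_2(z)=z[/tex] would count as dependent, since [tex]1\cdot p_1(z) - 1\cdot p_2(z)[/tex] vanishes at the particular point [tex]z=1[/tex] without being the zero polynomial:

    ```python
    import sympy as sp

    z = sp.symbols('z')

    # Hypothetical example: p1(z) = 1 and p2(z) = z, which are
    # linearly independent polynomials.
    p1, p2 = sp.Integer(1), z

    # With c1 = 1, c2 = -1, the combination vanishes FOR SOME z (at z = 1)...
    combo = 1 * p1 + (-1) * p2
    print(combo.subs(z, 1))  # 0

    # ...but it is not identically zero, so it is NOT the zero polynomial,
    # and the "for some z" reading would wrongly call {1, z} dependent.
    print(sp.simplify(combo) == 0)  # False
    ```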

    Please try to shed some light on this. Are my definitions of linear independence and dependence for the polynomial space correct?

    2. Relevant equations
    Known definitions of linear independence and dependence are given above.

    3. The attempt at a solution
    There isn't really a solution. It is just a question about definitions.
    Last edited: Oct 25, 2009
  3. Oct 25, 2009 #2
    When you see an equation like [tex]c_{1}f_{1}(z) + ... + c_{n}f_{n}(z) = c[/tex] where [tex]c[/tex] is some constant, then [tex]c[/tex] is taken to be the constant function [tex]c(z) = c[/tex]. This takes care of all of the details, because [tex]f = 0[/tex] iff [tex]f(z) = 0[/tex] for all [tex]z[/tex]; so if [tex]f[/tex] is non-zero at even one [tex]z[/tex], then it is not the zero function.
    Last edited: Oct 26, 2009
  4. Oct 25, 2009 #3


    Staff: Mentor

    Looks fine to me. You seem to get the fine point that for a linearly independent set of vectors, this equation has only one solution.
    [tex] a_1v_1+a_2v_2+...+a_nv_n= 0[/tex]

    The same equation for an arbitrary set of vectors, whether linearly independent or linearly dependent, always has the solution a1 = a2 = ... = an = 0. The key difference, and one that students have a hard time with, is whether this solution is the only one.

    Your definition for linear independence in a function space is fine, too. A key point there is that the equation has to hold for all values of the variable.
  5. Oct 25, 2009 #4
    I can't see how what you said is relevant in this case. I am talking about the specific coefficients that multiply the polynomials. Are they functions of z as well? (I think they aren't because it is called "scalar" multiplication) Can you give some more details please?

    And Mark44,
    so linear dependence in the vector space of polynomials is still the exact negation of linear independence. And its definition is:

    If we can find constants [tex]c_1, ..., c_m[/tex] that are not all zero FOR SOME z where [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex], these vectors are linearly dependent.


    If it is correct, I will post an example question and my solution to it making use of this fact.
    Last edited: Oct 26, 2009
  6. Oct 26, 2009 #5


    Staff: Mentor

    "... these functions are linearly dependent." Minor point, as functions in a function space are the counterparts of vectors in a vector space.
  7. Oct 26, 2009 #6
    I noticed that my notation was a bit confusing, so I edited my previous post. The coefficients [tex]c_{i}[/tex] are constant but the polynomials [tex]f_{i}[/tex] are not. So when you take the linear combination [tex]\sum_{i} c_{i}f_{i}[/tex], the result is a function. If you set this equal to something, that something had better be a function as well. So the functions [tex]f_{i}[/tex] are linearly dependent if there exist [tex]c_{i}[/tex], not all zero, such that [tex]\sum_{i} c_{i}f_{i}[/tex] is the zero function. A function is the zero function if and only if it is zero on every element of its domain. It is not the zero function if it is non-zero on at least one element of its domain.

    The point is that all the business about "at least"/"for all" is already taken care of in the definition of the zero function and the realization that zero means the zero function in this context.
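    A quick illustration of this "zero means the zero function" convention (my own made-up polynomials, chosen so that [tex]f_3 = f_1 - f_2[/tex]):

    ```python
    import sympy as sp

    z = sp.symbols('z')

    # Hypothetical polynomials with f3 = f1 - f2 by construction,
    # so {f1, f2, f3} is linearly dependent.
    f1, f2, f3 = 1 + z, z, sp.Integer(1)

    # The combination 1*f1 + (-1)*f2 + (-1)*f3 reduces to the zero
    # polynomial, i.e. it is zero for EVERY z, not merely at some z.
    combo = sp.expand(f1 - f2 - f3)
    print(combo == 0)  # True: identically zero, a genuine dependence
    ```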

    That being said, it is a good observation.