On the definition of linear independence and dependence

In summary, the coefficients [tex]c_1, ..., c_n[/tex] in the equation [tex]c_{1}f_{1}(z) + ... + c_{n}f_{n}(z) = c[/tex] are constants, but the polynomials [tex]f_{i}[/tex] are not. When you take the linear combination [tex]\sum_{i} c_{i}f_{i}[/tex], the result is a function.
  • #1
cartonn30gel
This is not really a homework problem but it relates to a number of problems, so I thought this would be the most appropriate place to post it.

Homework Statement



The basic question is about how we define linear dependence in a vector space. For a vector space over some field [tex]\mathbb{F}[/tex], we know that the vectors [tex]v_1,v_2,...,v_n[/tex] are linearly independent if the only solution to [tex] a_1v_1+a_2v_2+...+a_nv_n= 0[/tex] is [tex]a_1=a_2=...=a_n=0[/tex]. And if a list of vectors is not linearly independent, it is linearly dependent. This means that if we can find constants [tex]b_1,..., b_n[/tex] that are NOT ALL ZERO such that [tex]b_1v_1+b_2v_2+...+b_nv_n=0[/tex], these vectors are linearly dependent.
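
For a concrete illustration (my own example): in [tex]\mathbb{R}^2[/tex], the vectors [tex]v_1=(1,2)[/tex] and [tex]v_2=(2,4)[/tex] are linearly dependent, since [tex]2v_1 - v_2 = 0[/tex] with coefficients [tex](2,-1)[/tex] not both zero; the vectors [tex](1,0)[/tex] and [tex](0,1)[/tex] are linearly independent, since [tex]a_1(1,0)+a_2(0,1)=(a_1,a_2)=0[/tex] forces [tex]a_1=a_2=0[/tex].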

Now when we think about the polynomial space (let's say over some field) [tex]P_m(\mathbb{F})[/tex], we need to reconsider these definitions, right? I have never seen a different definition of linear independence for the polynomial space, but I'm assuming it would be like this:
The vectors [tex]p_1(z),...,p_m(z)[/tex] are linearly independent if the only solution to [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex] FOR ALL z is [tex]c_1=...=c_m=0[/tex].
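
For instance (my own example), [tex]p_1(z)=1[/tex] and [tex]p_2(z)=z[/tex] are linearly independent under this definition: if [tex]c_1 + c_2z = 0[/tex] for all [tex]z[/tex], then setting [tex]z=0[/tex] gives [tex]c_1=0[/tex], and setting [tex]z=1[/tex] then gives [tex]c_2=0[/tex].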

And linear dependence would be this: If we can find constants [tex]c_1, ..., c_m[/tex] that are not all zero FOR SOME z where [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex], these vectors are linearly dependent.

Notice that if linear dependence in the polynomial space is defined this way, it is actually the exact negation of linear independence in the polynomial space. But if linear dependence is defined in the same way with FOR ALL z instead, then it is NOT the exact negation of linear independence in the polynomial space. And then we start encountering vectors that are neither linearly independent nor linearly dependent.
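
To make my worry concrete (my own example): take [tex]p_1(z)=1[/tex] and [tex]p_2(z)=z[/tex]. With [tex]c_1=-1, c_2=1[/tex], the combination [tex]c_1p_1(z)+c_2p_2(z)=z-1[/tex] equals zero FOR SOME z (namely [tex]z=1[/tex]) but not FOR ALL z. So whether this pair counts as dependent seems to depend on where the quantifier over [tex]z[/tex] sits.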

Please try to shed some light on this. Are my definitions of linear independence and dependence for the polynomial space correct?

Homework Equations


Known definitions of linear independence and dependence are given above.

The Attempt at a Solution


There isn't really a solution. It is just a question about definitions.
 
  • #2
When you see an equation like [tex]c_{1}f_{1}(z) + ... + c_{n}f_{n}(z) = c[/tex] where [tex]c[/tex] is some constant, then [tex]c[/tex] is taken to be the constant function [tex]c(z) = c[/tex]. So the equation asserts an equality of functions, and [tex]f[/tex] equals the zero function iff [tex]f(z) = 0[/tex] for all [tex]z[/tex]; if [tex]f[/tex] is non-zero at even one [tex]z[/tex], then it is not the zero function. This takes care of all of the details.
 
  • #3
Looks fine to me. You seem to get the fine point that for a linearly independent set of vectors, this equation has only one solution.
[tex] a_1v_1+a_2v_2+...+a_nv_n= 0[/tex]

The same equation for an arbitrary set of vectors, whether linearly independent or linearly dependent, always has the trivial solution [tex]a_1 = a_2 = ... = a_n = 0[/tex]. The key difference, and one that students have a hard time with, is whether this solution is the only one.

Your definition for linear independence in a function space is fine, too. A key point there is that the equation has to hold for all values of the variable.
 
  • #4
aPhilosopher,
I can't see how what you said is relevant in this case. I am talking about the specific coefficients that multiply the polynomials. Are they functions of z as well? (I think they aren't because it is called "scalar" multiplication) Can you give some more details please?

And Mark44,
so linear dependence in the vector space of polynomials is still the exact negation of linear independence. And its definition is:

If we can find constants [tex]c_1, ..., c_m[/tex] that are not all zero FOR SOME z where [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex], these vectors are linearly dependent.

Correct?

If it is correct, I will post an example question and my solution to it making use of this fact.
 
  • #5
cartonn30gel said:
aPhilosopher,
I can't see how what you said is relevant in this case. I am talking about the specific coefficients that multiply the polynomials. Are they functions of z as well? (I think they aren't because it is called "scalar" multiplication) Can you give some more details please?

And Mark44,
so linear dependence in the vector space of polynomials is still the exact negation of linear independence. And its definition is:

If we can find constants [tex]c_1, ..., c_m[/tex] that are not all zero FOR SOME z where [tex]c_1p_1(z)+...+c_mp_m(z)=0[/tex], these vectors are linearly dependent.

Correct?
"... these functions are linearly dependent." Minor point, as functions in a function space are the counterparts of vectors in a vector space.
cartonn30gel said:
If it is correct, I will post an example question and my solution to it making use of this fact.
 
  • #6
cartonn30gel said:
aPhilosopher,
I can't see how what you said is relevant in this case. I am talking about the specific coefficients that multiply the polynomials. Are they functions of z as well? (I think they aren't because it is called "scalar" multiplication) Can you give some more details please?

I noticed that my notation was a bit confusing, so I edited my previous post. The coefficients [tex]c_{i}[/tex] are constants, but the polynomials [tex]f_{i}[/tex] are not. So when you take the linear combination [tex]\sum_{i} c_{i}f_{i}[/tex], the result is a function. If you set this equal to something, that something had better be a function as well. So the functions [tex]f_{i}[/tex] are linearly dependent if there exist [tex]c_{i}[/tex], not all zero, such that [tex]\sum_{i} c_{i}f_{i}[/tex] is the zero function. A function is the zero function if and only if it is zero on every element of its domain. It is not the zero function if it is non-zero on at least one element of its domain.

The point is that all the business about "at least"/"for all" is already taken care of in the definition of the zero function and the realization that zero means the zero function in this context.
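
If you want to check a particular case mechanically, here is a minimal sketch with sympy (the polynomials [tex]1+z[/tex], [tex]1-z[/tex], and [tex]1[/tex] are just an illustrative choice, not from this thread):

[code]
from sympy import symbols, simplify

z = symbols('z')

# Illustrative polynomials (a hypothetical example, not from the thread).
p1, p2, p3 = 1 + z, 1 - z, 1

# For dependence, the combination must be the zero FUNCTION:
# it has to simplify to 0 identically, not just vanish at some z.
combo = 1*p1 + 1*p2 - 2*p3
print(simplify(combo) == 0)  # True, so {p1, p2, p3} is linearly dependent
[/code]

Note that the check is symbolic equality with the zero function, exactly in the spirit of the definition above.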

That being said, it is a good observation.
 

1. What is the definition of linear independence?

Linear independence is a property of a set of vectors in a vector space: no vector in the set can be written as a linear combination of the others. In other words, each vector in the set adds new information or direction, and none of them is redundant, i.e. expressible as a combination of the others.
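
For example, the standard basis vectors [tex](1,0,0), (0,1,0), (0,0,1)[/tex] of [tex]\mathbb{R}^3[/tex] are linearly independent, since [tex]c_1(1,0,0)+c_2(0,1,0)+c_3(0,0,1)=(c_1,c_2,c_3)[/tex] is the zero vector only when [tex]c_1=c_2=c_3=0[/tex].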

2. How is linear dependence different from linear independence?

Linear dependence is the opposite of linear independence. A set of vectors is linearly dependent if at least one vector in the set can be written as a linear combination of the other vectors. This means that there is at least one redundant vector in the set, one that adds no new information or direction beyond the others.
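
For example, the set [tex]\{(1,0,1), (0,2,2), (1,2,3)\}[/tex] is linearly dependent, since [tex](1,2,3) = (1,0,1) + (0,2,2)[/tex].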

3. Can a set of two vectors be both linearly independent and dependent?

No, a set of two vectors cannot be both linearly independent and dependent. For a set to be linearly independent, no nontrivial linear combination of the vectors (one with coefficients not all zero) may result in the zero vector. For a set of two vectors to be linearly dependent, one of the vectors must be a scalar multiple of the other, which produces exactly such a nontrivial combination equal to the zero vector. The two conditions therefore exclude each other.

4. How do you test for linear independence?

To test for linear independence, you can use the determinant method or the rank method. The determinant method applies when the number of vectors equals the dimension of the space: create a square matrix with the vectors as columns and take the determinant. If the determinant is non-zero, then the vectors are linearly independent. The rank method involves creating a matrix with the vectors as rows and finding the rank. If the rank equals the number of vectors, then the vectors are linearly independent.
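
A minimal numerical sketch of both tests with numpy (the vectors are the same hypothetical example as above; for exact arithmetic you would use a symbolic tool instead):

[code]
import numpy as np

# Rows are the vectors under test (a hypothetical example).
vectors = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 2.0, 2.0],
    [1.0, 2.0, 3.0],  # equals row 0 + row 1, so the set is dependent
])

# Rank method: independent iff the rank equals the number of vectors.
rank = np.linalg.matrix_rank(vectors)
print("independent" if rank == len(vectors) else "dependent")  # dependent

# Determinant method (square matrices only): non-zero iff independent.
print(np.linalg.det(vectors))  # 0.0 up to floating-point error
[/code]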

5. Why is linear independence important in linear algebra?

Linear independence is important in linear algebra because it allows us to solve systems of linear equations and find unique solutions. If the columns of a coefficient matrix are linearly dependent, the corresponding homogeneous system has infinitely many solutions, so no unique solution can be singled out. Additionally, linear independence is a fundamental concept in vector spaces, and many other concepts in linear algebra, such as basis, span, and dimension, rely on it.
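
To see the non-uniqueness concretely, here is a small sympy sketch (the same hypothetical vectors as above, now as columns of a matrix):

[code]
from sympy import Matrix

# Columns are the same hypothetical vectors as in the rank example.
A = Matrix([[1, 0, 1],
            [0, 2, 2],
            [1, 2, 3]])

# A nontrivial null space means A*c = 0 has nonzero solutions: infinitely
# many coefficient vectors c, i.e. the columns are linearly dependent.
print(A.nullspace())  # [Matrix([[-1], [-1], [1]])]
[/code]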
