Linear Independence: Polynomial Example

In summary, the Linear Algebra Done Right textbook says that a list of vectors is linearly independent if the only linear combination of the list equal to the zero vector is the one in which every coefficient is zero. For the list [itex](1, z, \ldots, z^m)[/itex] in [itex]\mathcal{P}(\mathbf{F})[/itex], this holds because a polynomial that equals 0 for every z can have no nonzero coefficient.
  • #1
freshlikeuhh
Hello everyone. I was going through my Linear Algebra Done Right textbook and came across a passage that threw me off. I hope this forum is appropriate for my inquiry; since there is no specific problem I am trying to solve, I was not sure whether a request for clarification belongs in the homework forum instead. If it does, I apologize.

I'll start off by quoting the passage:

"For another example of a linearly independent list, fix a non-negative integer m. Then (1,z,...,zm) is linearly independent in P(F). To verify this, suppose that a0, a1,...,am, belonging to F are such that:

a0 + a1z + ... + amzm = 0, for every z belonging in F.

If at least one of the coefficients a0, a1,...,am were nonzero, then the above equation could be satisfied by at most m distinct values of z; this contradiction shows that all the coefficients in the above equation equal 0. Hence (1,z,...,zm) is linearly independent, as claimed."


Linear independence, as I understand it, holds exactly when every vector in the span of a list has a unique representation as a linear combination of the vectors in that list. It is my interpretation that Axler is specifically using the fact that the zero vector, in the polynomial vector space example above, can only be expressed by setting all the coefficients to 0.

My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has at most m roots (I hope I am correctly relating this to the Fundamental Theorem of Algebra). But then, as I see it, this shows that the zero vector has more than one representation, so the list is not linearly independent. Yet Axler uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0.


If anyone could shed some light and point me in the right direction, I would greatly appreciate it.
 
  • #2
freshlikeuhh said: "My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has at most m roots ... Yet Axler uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0."
There is your confusion. Yes, there may exist many values of z which make the numerical value of the polynomial 0. But that is NOT what is being discussed here. Saying that the polynomial [itex]a_0+ a_1z+ a_2z^2+ \cdots+ a_mz^m= 0[/itex] means that it is 0 for all z, not just for some.

If the polynomial [itex]a_0+ a_1z+ a_2z^2+ \cdots+ a_mz^m= 0[/itex] for all z, then, in particular, that is true for z = 0. Setting z = 0 gives [itex]a_0= 0[/itex]. And if a function is 0 for all z, its derivative is also 0 for all z. Here the derivative is [itex]a_1+ 2a_2z+ \cdots+ ma_mz^{m-1}= 0[/itex] for all z, and setting z = 0 gives [itex]a_1= 0[/itex]. Since the derivative is itself 0 for all z, its derivative must in turn be 0 for all z. We can proceed by induction to show that if a polynomial in z is 0 for all z, then all of its coefficients must be 0.
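
To see this differentiate-and-evaluate argument run, here is a minimal sketch with sympy (the degree 3 and the coefficient names are illustrative choices, not anything from the thread):

[code]
# If p(z) = a0 + a1*z + a2*z**2 + a3*z**3 is 0 for ALL z, then p and all of
# its derivatives vanish at z = 0, and the k-th derivative at 0 is k! * a_k.
import sympy as sp

z = sp.symbols('z')
a = sp.symbols('a0 a1 a2 a3')
p = sum(a[k] * z**k for k in range(4))

for k in range(4):
    print(sp.diff(p, z, k).subs(z, 0))   # a0, a1, 2*a2, 6*a3 -- each must be 0
[/code]

Each printed expression must equal 0 if the polynomial vanishes for all z, which forces every coefficient to be 0.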
 
  • #3
Ah, that makes it all clear now. Thanks!
 
  • #4
(Slightly pedantic here: the statement that for a polynomial p over F, p = 0 if and only if p(a) = 0 for all a in F, is only true if F is infinite. If F is finite, with elements [itex]a_1, \ldots, a_q[/itex], then the polynomial [itex](x - a_1)\cdots(x - a_q)[/itex] is nonzero but evaluates to zero at every element of F. Remember that a polynomial is defined by its coefficients; a polynomial is zero if and only if all its coefficients are zero.)
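
A minimal check of this caveat in Python, taking F to be the field with three elements, so q = 3 and the polynomial is [itex]x(x-1)(x-2) \equiv x^3 - x \pmod 3[/itex]:

[code]
# Over F_3 = {0, 1, 2}, the polynomial x^3 - x is NOT the zero polynomial
# (its coefficients are not all zero), yet it evaluates to 0 at every
# element of the field.
for x in range(3):
    print(x, (x**3 - x) % 3)   # prints 0 for every x in F_3
[/code]

So over a finite field, "zero as a function" and "zero as a polynomial" genuinely differ.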
 
  • #5


Dear forum member,

Thank you for your inquiry. Linear independence is a fundamental concept in linear algebra, and it is important to have a clear understanding of it. A list of vectors is linearly independent when no vector in the list can be written as a linear combination of the others; equivalently, the only linear combination of the list that equals the zero vector is the one with all coefficients equal to 0. In the example given, the list [itex](1, z, \ldots, z^m)[/itex] is linearly independent in exactly this sense.

To prove this, we suppose there exist coefficients [itex]a_0, a_1, \ldots, a_m[/itex] such that [itex]a_0 + a_1z + \cdots + a_mz^m = 0[/itex] for all z in F. If at least one coefficient were nonzero, the left-hand side would be a nonzero polynomial of degree at most m, and a nonzero polynomial of degree at most m has at most m distinct roots. But the equation says that every z in F is a root, and F (here the reals or complexes) has more than m elements. This is a contradiction.

Therefore the only possibility is that all the coefficients are zero, which is exactly what linear independence of the list [itex](1, z, \ldots, z^m)[/itex] requires.
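
One way to make the "at most m distinct roots" step concrete is to evaluate the identity at m + 1 distinct points, which turns it into an invertible linear system. A minimal numerical sketch (the degree m = 3 and the sample points are arbitrary choices):

[code]
# If a0 + a1*z + ... + am*z^m = 0 at m + 1 DISTINCT points, the coefficient
# vector a satisfies V @ a = 0 for an invertible Vandermonde matrix V,
# so a must be the zero vector.
import numpy as np

m = 3
points = np.array([0.0, 1.0, 2.0, 3.0])          # m + 1 distinct values of z
V = np.vander(points, m + 1, increasing=True)    # rows are (1, z, z^2, z^3)

print(np.linalg.matrix_rank(V))                  # 4: full rank, so a = 0
[/code]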

I hope this clarifies the concept of linear independence in this context. If you have further questions or need more clarification, please do not hesitate to ask.
 

What is linear independence in the context of polynomials?

Linear independence refers to the property of a set of polynomials where none of the polynomials can be written as a linear combination of the others. In other words, no polynomial in the set can be produced by scaling the other polynomials by scalars and adding them together.

Why is linear independence important in polynomial examples?

Linear independence is important because it is how dimension is defined: the dimension of a vector space is the length of any basis, and a basis must be linearly independent. In the context of polynomials, linear independence also tells us which polynomials in a collection are redundant, that is, already expressible in terms of the others.

How can I determine if a set of polynomials is linearly independent?

A set of polynomials [itex]p_1, \ldots, p_n[/itex] is linearly independent if the only solution to the equation [itex]c_1p_1 + c_2p_2 + \cdots + c_np_n = 0[/itex] is [itex]c_1 = c_2 = \cdots = c_n = 0[/itex]. In other words, the only way to combine the polynomials into the zero polynomial is the trivial combination in which every coefficient is 0.
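
For instance, here is a minimal sketch with sympy that runs this test on a small set (the three polynomials are an arbitrary illustrative choice):

[code]
import sympy as sp

z, c1, c2, c3 = sp.symbols('z c1 c2 c3')
combo = c1*(1 + z) + c2*(z + z**2) + c3*(1 + z**2)

# The combination is the zero polynomial iff every coefficient of z^k is 0.
coeff_eqs = sp.Poly(combo, z).all_coeffs()
print(sp.solve(coeff_eqs, [c1, c2, c3]))   # {c1: 0, c2: 0, c3: 0} -> independent
[/code]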

Can a set of linearly independent polynomials be linearly dependent in a different context?

Yes, linear independence depends on the field of scalars being used. For example, the polynomials [itex]z[/itex] and [itex]iz[/itex] are linearly independent over the real numbers, since no real multiple of z equals iz, but they are linearly dependent over the complex numbers, because [itex]iz = i \cdot z[/itex].

How is linear independence related to the basis of a vector space?

Linear independence is closely related to the concept of a basis of a vector space. A basis is a set of linearly independent vectors that also spans the entire vector space. In the context of polynomials, the monomials [itex]1, z, \ldots, z^m[/itex] from the example above form a basis for the vector space of polynomials of degree at most m.
