Linear Independence: Polynomial Example

Discussion Overview

The discussion revolves around the concept of linear independence in the context of polynomial vector spaces, specifically examining a claim from a textbook regarding the linear independence of the set (1, z, ..., z^m) in P(F). Participants explore the implications of the Fundamental Theorem of Algebra and the conditions under which a polynomial can be zero.

Discussion Character

  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant expresses confusion about the conclusion that all coefficients must be zero, questioning how a polynomial can be zero for all z if any coefficient is nonzero.
  • Another participant clarifies that the polynomial being zero for all z implies that it is a constant function, leading to the conclusion that all coefficients must be zero through a process of induction.
  • A later reply points out a limitation in the initial claim, noting that the statement about polynomials being zero if they evaluate to zero at all points is only valid for infinite fields, highlighting a potential oversight in the discussion.

Areas of Agreement / Disagreement

Participants converge on the core argument once it is explained, but a point of disagreement remains: the claim that a polynomial which evaluates to zero everywhere must be the zero polynomial is valid only over infinite fields, and fails over finite fields.

Contextual Notes

The discussion touches on the nuances of polynomial behavior in different types of fields, particularly the distinction between finite and infinite fields, which may affect the interpretation of linear independence in this context.

freshlikeuhh
Hello everyone. I was going through my Linear Algebra Done Right textbook and came across a passage that threw me off. I hope this forum is appropriate for my inquiry; since there is no specific problem I'm trying to solve, I don't know whether a request for clarification belongs in the homework forum instead. If so, I apologize.

I'll start off by quoting the passage:

"For another example of a linearly independent list, fix a non-negative integer m. Then (1, z, ..., z^m) is linearly independent in P(F). To verify this, suppose that a_0, a_1, ..., a_m, belonging to F, are such that:

a_0 + a_1 z + ... + a_m z^m = 0, for every z belonging to F.

If at least one of the coefficients a_0, a_1, ..., a_m were nonzero, then the above equation could be satisfied by at most m distinct values of z; this contradiction shows that all the coefficients in the above equation equal 0. Hence (1, z, ..., z^m) is linearly independent, as claimed."
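[Axler's root-counting step can be sanity-checked numerically. A minimal sketch, not from the book: if a polynomial of degree at most m vanished at m + 1 distinct points, its coefficients would have to solve a Vandermonde system with zero right-hand side, and since a Vandermonde matrix built from distinct points is invertible, the only solution is the zero coefficient vector.]

```python
import numpy as np

# If a_0 + a_1 z + ... + a_m z^m = 0 held at m + 1 distinct points,
# the coefficients would solve V a = 0, where V is the Vandermonde
# matrix of those points.  Distinct points make V invertible, so the
# only solution is a = 0: the trivial linear combination.
m = 4
points = np.arange(m + 1, dtype=float)           # m + 1 distinct values of z
V = np.vander(points, m + 1, increasing=True)    # rows: (1, z, z^2, ..., z^m)
coeffs = np.linalg.solve(V, np.zeros(m + 1))     # solve V a = 0
print(np.allclose(coeffs, 0))                    # True: a_0 = ... = a_m = 0
```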


Linear independence, as I understand it, holds only when each vector in a list of vectors has a unique representation as a linear combination of other vectors within that list. It is my interpretation that Axler is specifically using the fact that the {0} vector, in the above polynomial vector space example, can only be expressed by setting all coefficients to 0.

My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has, at most, m roots (I hope I am correctly relating this to the Fundamental Theorem of Algebra). But then, as I see it, this shows that the equation has more than one representation for {0} and is thus not linearly independent. But instead, he uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0.


Unfortunately, for some reason, the TeX editor did not work properly for me, so I had to resort to expressing some things here differently. Anyway, if anyone could shed some light and point me in the right direction, I would greatly appreciate it.
 
"My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has, at most, m roots (I hope I am correctly relating this to the Fundamental Theorem of Algebra). But then, as I see it, this shows that the equation has more than one representation for {0} and is thus not linearly independent. But instead, he uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0."
There is your confusion. Yes, there may exist many values of z which make the numerical value of the function 0. But that is NOT what is being discussed here. To say that the polynomial a_0 + a_1 z + a_2 z^2 + \cdots + a_n z^n = 0 means that it is 0 for all z, not just for some.

If the polynomial a_0 + a_1 z + a_2 z^2 + \cdots + a_n z^n = 0 for all z, then, in particular, that is true for z = 0. Setting z = 0 gives a_0 = 0. But if the polynomial is 0 for all z, then its derivative is also 0 for all z. Its derivative is a_1 + 2a_2 z + \cdots + n a_n z^{n-1} = 0 for all z, and setting z = 0 gives a_1 = 0. Since that derivative is itself 0 for all z, its own derivative must in turn be 0 for all z. We can proceed by induction to show that if a polynomial in z is 0 for all z, then all of its coefficients must be 0.
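[The induction above can be made concrete with a small SymPy sketch, not from the thread: the k-th derivative of the polynomial, evaluated at z = 0, picks out k! times a_k, so if every derivative vanishes at 0, every coefficient is 0.]

```python
from sympy import symbols, diff, factorial

z = symbols('z')
a = symbols('a0 a1 a2 a3')                      # coefficients of a cubic
p = sum(a_k * z**k for k, a_k in enumerate(a))  # a0 + a1*z + a2*z^2 + a3*z^3

for k in range(4):
    # The k-th derivative at z = 0 equals k! * a_k, so a_k = p^{(k)}(0) / k!.
    # If p and all its derivatives vanish at 0, every a_k must be 0.
    assert diff(p, z, k).subs(z, 0) / factorial(k) == a[k]
```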
 
Ah, that makes it all clear now. Thanks!
 
(Slightly pedantic here: The statement that for a polynomial p over F, p = 0 if and only if p(a) = 0 for all a in F is only true if F is infinite. If F is finite, with elements {a_1, ..., a_q}, then the polynomial (x - a_1)···(x - a_q) is nonzero but evaluates to zero at every element of F. Remember that a polynomial is defined by its coefficients; a polynomial is zero if and only if all its coefficients are zero.)
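[The finite-field caveat is easy to check directly. A quick sketch, not from the thread, over F_3 = {0, 1, 2}: the polynomial x^3 - x, which equals x(x - 1)(x - 2) mod 3, has nonzero coefficients yet evaluates to 0 at every element of the field.]

```python
# Over F_3, p(x) = x^3 - x is nonzero as a polynomial (its coefficients
# are not all 0) but zero as a function on the field.
p = 3                                   # field size (a prime)
coeffs = {3: 1, 1: -1}                  # x^3 - x, keyed by exponent
values = [sum(c * a**k for k, c in coeffs.items()) % p for a in range(p)]
print(values)                           # [0, 0, 0]: zero as a function
```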
 
