Linear Independence: Polynomial Example

Summary
The discussion focuses on the concept of linear independence in polynomial vector spaces, specifically addressing a passage from a Linear Algebra textbook. The confusion arises from the conclusion that all coefficients of a polynomial must be zero if the polynomial equals zero for all values of z. It is clarified that the polynomial being zero for all z implies that it is a constant function, leading to the conclusion that all coefficients must indeed be zero. Additionally, a distinction is made regarding the condition for polynomials over finite versus infinite fields, emphasizing that the original statement holds true only for infinite fields. This clarification resolves the initial confusion about linear independence in the context of polynomial equations.
freshlikeuhh
Hello everyone. I was going through my Linear Algebra Done Right textbook and came across a passage that threw me off. I hope this forum is appropriate for my inquiry; there is no problem I'm trying to solve here, and I don't know whether just asking for clarification would belong in the homework forum instead. If so, I apologize.

I'll start off by quoting the passage:

"For another example of a linearly independent list, fix a non-negative integer m. Then (1, z, ..., z^m) is linearly independent in P(F). To verify this, suppose that a_0, a_1, ..., a_m, belonging to F, are such that:

a_0 + a_1 z + ... + a_m z^m = 0, for every z belonging to F.

If at least one of the coefficients a_0, a_1, ..., a_m were nonzero, then the above equation could be satisfied by at most m distinct values of z; this contradiction shows that all the coefficients in the above equation equal 0. Hence (1, z, ..., z^m) is linearly independent, as claimed."
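(A minimal numerical sketch of Axler's root-counting argument, not from the book, assuming real coefficients: if the polynomial vanished at m+1 distinct points, the coefficients would solve a homogeneous Vandermonde system, which is invertible for distinct points, forcing every coefficient to be 0.)

```python
# Sketch: check that the only coefficient vector making
# a_0 + a_1 z + ... + a_m z^m vanish at m+1 distinct points is the zero
# vector.  Evaluating at m+1 points gives a Vandermonde system V @ a = 0,
# and V is invertible when the points are distinct.
import numpy as np

m = 4
points = np.arange(m + 1, dtype=float)           # m+1 distinct evaluation points
V = np.vander(points, N=m + 1, increasing=True)  # V[i, j] = points[i] ** j

# A nonzero determinant means the homogeneous system has only the
# trivial solution.
assert abs(np.linalg.det(V)) > 1e-9

# Solving V @ a = 0 therefore forces a = 0:
a = np.linalg.solve(V, np.zeros(m + 1))
print(a)  # all zeros
```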


Linear independence, as I understand it, holds only when each vector in a list of vectors has a unique representation as a linear combination of other vectors within that list. It is my interpretation that Axler is specifically using the fact that the {0} vector, in the above polynomial vector space example, can only be expressed by setting all coefficients to 0.

My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has, at most, m roots (I hope I am correctly relating this to the Fundamental Theorem of Algebra). But then, as I see it, this shows that the equation has more than one representation for {0} and is thus not linearly independent. But instead, he uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0.


Unfortunately, for some reason, the TeX editor did not work properly for me, so I had to resort to expressing some things here differently. Anyway, if anyone could shed some light and point me in the right direction, I would greatly appreciate it.
 
freshlikeuhh said: "My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has, at most, m roots... But instead, he uses this same fact to obtain a contradiction and conclude that all the coefficients must equal 0."
There is your confusion. Yes, there may exist many values of z which make the numerical value of the function 0. But that is NOT what is being discussed here. To say that the polynomial a_0 + a_1 z + a_2 z^2 + ... + a_n z^n = 0 means that it is 0 for all z, not just for some.

If the polynomial a_0 + a_1 z + a_2 z^2 + ... + a_n z^n = 0 for all z, then, in particular, that is true for z = 0. Setting z = 0 gives a_0 = 0. But if the polynomial is 0 for all z, it is the constant function 0, so its derivative is also 0 for all z. Its derivative is a_1 + 2a_2 z + ... + n a_n z^(n-1) = 0 for all z. Setting z = 0 gives a_1 = 0. Since that derivative is itself 0 for all z, its derivative must in turn be 0 for all z, and we can proceed by induction to show that if a polynomial in z is 0 for all z, then all of its coefficients must be 0.
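(A small sketch of the induction above, not from the thread: differentiating shifts the coefficient list down, and the k-th derivative evaluated at z = 0 equals k!·a_k, so repeatedly differentiating and reading off the constant term recovers every coefficient. For the zero polynomial this recovers all zeros.)

```python
# Polynomials are coefficient lists [a_0, a_1, ..., a_n].
from math import factorial

def derivative(coeffs):
    # d/dz of sum_k a_k z^k is sum_k k*a_k z^(k-1)
    return [k * a for k, a in enumerate(coeffs)][1:]

def coefficients_from_derivatives(coeffs):
    # The k-th derivative at z = 0 is k! * a_k, so differentiating and
    # reading off the constant term recovers each a_k in turn.
    recovered, p = [], list(coeffs)
    for k in range(len(coeffs)):
        recovered.append(p[0] / factorial(k))
        p = derivative(p)
    return recovered

p = [3.0, -1.0, 0.0, 2.0]   # 3 - z + 2z^3
print(coefficients_from_derivatives(p))  # [3.0, -1.0, 0.0, 2.0]
```

If the polynomial were identically zero, every derivative would be zero at z = 0, and the loop would return all-zero coefficients, mirroring the induction.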
 
Ah, that makes it all clear now. Thanks!
 
(Slightly pedantic here: The statement that for a polynomial p over F, p = 0 if and only if p(a) = 0 for all a in F is only true if F is infinite. If F is finite, with elements {a_1, ..., a_q}, then the polynomial (x - a_1)(x - a_2)...(x - a_q) is nonzero but evaluates to zero at every element of F. Remember that a polynomial is defined by its coefficients; a polynomial is zero if and only if all its coefficients are zero.)
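(A concrete check of the finite-field caveat, not from the thread: over F = Z/5Z the polynomial z^5 - z has nonzero coefficients, yet Fermat's little theorem gives a^5 = a for every a in F, so it evaluates to 0 everywhere.)

```python
# z^5 - z over Z/5Z: nonzero as a polynomial, zero as a function.
q = 5
coeffs = [0, -1, 0, 0, 0, 1]   # [a_0, ..., a_5] for z^5 - z

def evaluate_mod(coeffs, z, q):
    # Horner evaluation of the polynomial at z, reduced mod q
    acc = 0
    for a in reversed(coeffs):
        acc = (acc * z + a) % q
    return acc

values = [evaluate_mod(coeffs, z, q) for z in range(q)]
print(values)  # [0, 0, 0, 0, 0]
```

So over a finite field the "zero for every z" test fails to detect nonzero coefficients, which is exactly why the book's argument assumes an infinite field.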
 
