Hello everyone. I was going through my Linear Algebra Done Right textbook and came across a passage that threw me off. I hope this forum is appropriate for my inquiry; since there is no specific problem I'm trying to solve, I wasn't sure whether a request for clarification belongs in the homework forum instead. If it does, I apologize.
I'll start off by quoting the passage:
"For another example of a linearly independent list, fix a non-negative integer m. Then (1,z,...,zm) is linearly independent in P(F). To verify this, suppose that a0, a1,...,am, belonging to F are such that:
a0 + a1z + ... + amzm = 0, for every z belonging in F.
If at least one of the coefficients a0, a1,...,am were nonzero, then the above equation could be satisfied by at most m distinct values of z; this contradiction shows that all the coefficients in the above equation equal 0. Hence (1,z,...,zm) is linearly independent, as claimed."
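To spell out the smallest case for myself (this is my own restatement, not from the book): with $m = 1$ the hypothesis is
$$a_0 + a_1 z = 0 \quad \text{for every } z \in \mathbf{F},$$
and if $a_1 \neq 0$ this equation holds only for the single value $z = -a_0/a_1$, not for every $z$.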
Linear independence, as I understand it, holds precisely when each vector in the span of the list has a unique representation as a linear combination of the vectors in the list. My interpretation is that Axler is specifically using the fact that the zero vector, in the polynomial vector space above, can only be written as such a combination by setting all the coefficients to $0$.
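For reference, the criterion I am working from (paraphrasing Axler, so please correct me if I have mangled it) is that a list $(v_1, \dots, v_n)$ is linearly independent exactly when
$$a_1 v_1 + \cdots + a_n v_n = 0 \implies a_1 = \cdots = a_n = 0.$$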
My confusion, I think, stems from how he concludes that all the coefficients must be zero. If any coefficient is nonzero, then the equation has at most $m$ roots (I hope I am correctly relating this to the fact that a nonzero polynomial of degree at most $m$ has at most $m$ roots, which I associate with the fundamental theorem of algebra). But then, as I see it, this shows that the equation gives more than one representation of the zero vector, and the list is therefore not linearly independent. Yet Axler uses this same fact to obtain a contradiction and conclude that all the coefficients must equal $0$.
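To make the sticking point concrete (this example is mine, not Axler's): take $m = 2$ with $a_0 = -1$, $a_1 = 0$, $a_2 = 1$, say over $\mathbf{R}$. Then
$$-1 + z^2 = 0$$
is satisfied for $z = 1$ and $z = -1$, which is at most $m$ distinct values, but certainly not for every $z \in \mathbf{R}$. I am not sure how this squares with the way I have been reading the argument.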
Unfortunately, the TeX editor was not working properly for me when I first wrote this, so apologies if any of the notation still looks off. Anyway, if anyone could shed some light and point me in the right direction, I would greatly appreciate it.