Can P(F) be written as a direct sum of two subspaces?

bjgawp
I'm going through Axler's book and was just introduced to the concepts of sums of subspaces and direct sums.

Here's one of the examples he has.

Let P(F) denote all polynomials with coefficients in F where F is a field.
Let U_e denote the subspace of P(F) consisting of all polynomials of the form p(z) = \sum_{i=0}^n a_{2i}z^{2i}

and let U_o denote the subspace of P(F) consisting of all polynomials p of the form p(z) = \sum_{i=0}^{n} a_{2i+1}z^{2i+1}

You should verify that P(F) = U_e \oplus U_o
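
For concreteness (my own example, not one from the book), a specific polynomial splits into its even-degree and odd-degree parts like this:

p(z) = 3 + 2z + 5z^2 + z^3 = \underbrace{(3 + 5z^2)}_{\in U_e} + \underbrace{(2z + z^3)}_{\in U_o}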

Now the other examples he had were kind of trivial (such as \mathbb{R}^2 = U \oplus W where U = \{ (x,0) | x \in \mathbb{R} \} and W = \{(0,y) | y \in \mathbb{R} \}) since we simply showed the uniqueness of each component.
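
In that case uniqueness is immediate, since

(x, y) = (x, 0) + (0, y)

is clearly the only way to write (x, y) as something in U plus something in W.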

For this one, I get tangled up in the notation and I'm not sure where to head. Here's what I was thinking.

Let q \in U_e + U_o. Then: q(x) = \underbrace{a_0 + a_2 x^2 + \cdots + a_{2n}x^{2n}}_{\in U_e} + \underbrace{a_1 x + a_3 x^3 + \cdots + a_{2n+1}x^{2n+1}}_{\in U_o}

Then I suppose q(x) can also be represented with different coefficients, say b_0, b_1, \ldots, b_{2n+1}. Since both representations are equal to q(x), they are equal to each other; equating coefficients then gives a_i = b_i for every i, contradicting the assumption that the representations were different.
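
In symbols, the step I have in mind is just

\sum_{i=0}^{2n+1} a_i x^i = \sum_{i=0}^{2n+1} b_i x^i \implies a_i = b_i \text{ for all } i,

so the even part and the odd part of q are each uniquely determined (this is only a sketch of what I'd write up properly).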

Sound good? Sorry for the long-winded post. I appreciate all the help so far! (And that last thread ... that was pretty silly of me to mess up!)
 
I don't think unique representation of a sum of elements is a good way to go here. Instead you can prove these two things:
U_e + U_o = P(F)

and

U_e \cap U_o = \{0\}

This requires far less hand-waving and slogging through details. Often (not always) this is the easier way of proving that a vector space is the direct sum of two subspaces, although what you've posted is pretty good also.
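
In case it's useful, here is a sketch of the standard argument for why those two conditions are enough: if some element of P(F) had two decompositions, say

u + w = u' + w' \quad \text{with } u, u' \in U_e \text{ and } w, w' \in U_o,

then u - u' = w' - w belongs to U_e \cap U_o = \{0\}, so u = u' and w = w', i.e. the decomposition is unique.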
 
Oh, thanks a lot! I just noticed they had this proved a few pages later. Ah well, different means to the same end, I suppose.
 
Sorry to bring up an old thread. I was just wondering how Office_Shredder's approach works.

Specifically, how do I directly show that U_e \cap U_o = \{0\}?

Would I suppose a(x) \in U_e \cap U_o and come to the conclusion that the only way this could occur was if the coefficients were equal to 0?

a_0 + a_2x^2 + a_4x^4 + \cdots + a_{2n}x^{2n} = a_1x + a_3x^3 + \cdots + a_{2n+1}x^{2n+1}

On the LHS there are no odd powers, so the odd-power coefficients are all 0 and the RHS must be 0 as well. Similarly, the RHS has no even powers, so the even-power coefficients are all 0, and hence a(x) = 0.

Would that be the way to go?
 
bjgawp said:
a_0 + a_2x^2 + a_4x^4 + \cdots + a_{2n}x^{2n} = a_1x + a_3x^3 + \cdots + a_{2n+1}x^{2n+1}

On the LHS there are no odd powers, so the odd-power coefficients are all 0 and the RHS must be 0 as well. Similarly, the RHS has no even powers, so the even-power coefficients are all 0, and hence a(x) = 0.

Would that be the way to go?

Yes, that's right. You are implicitly using the fact that two polynomials are equal if and only if all of their coefficients are equal, i.e. that only the zero polynomial gives the zero function. That in turn follows from the fact that a nonzero polynomial of degree m has at most m roots, so over an infinite field like \mathbb{R} or \mathbb{C} it can't vanish everywhere; the full fundamental theorem of algebra is a deeper result and isn't actually needed here. I'm pretty sure Axler assumes this is "known" at this point, even though the relevant facts about polynomials don't show up until Chapter 4.
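
Written out for your argument: if p \in U_e \cap U_o, then p has both an even and an odd representation, so

\sum_{i=0}^{n} a_{2i} z^{2i} - \sum_{i=0}^{n} a_{2i+1} z^{2i+1} = 0 \quad \text{for all } z,

and the coefficient fact forces every a_{2i} and every a_{2i+1} to be 0, hence p = 0 and U_e \cap U_o = \{0\}.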
 
The way he uses polynomials before formally introducing them and then proves various theorems about them in Chapter 4 is a little confusing. He does actually prove the Fundamental Theorem of Algebra (Thm 4.7), but in terms that I didn't understand, most likely because I haven't taken a class in analysis yet.
 