Linear Algebra (Vector spaces, linearly independent subsets, transformations)

jeff1evesque
Assignment question:
Let V = P(R) and for j >= 1 define T_j(f(x)) = f^(j)(x),
where f^(j)(x) is the jth derivative of f(x). Prove that the
set {T_1, T_2, ..., T_n} is a linearly independent subset of L(V)
for any positive integer n.

I have no idea how V = P(R) has anything to do with the rest of the problem, in particular with the transformation T_j(f(x)) = f^(j)(x). I guess I just don't understand how V ties in with the definition of the transformation T.

2. Relevant ideas:
I have heard two different professors each suggest a different method of proving this. One said it was possible to prove it by contradiction, and the other tried to help me prove it directly.

So when I started the method of contradiction I had:
Let B = {a_1T_1, a_2T_2, ..., a_nT_n}
Assume B is linearly dependent (goal: derive a contradiction from assuming it's dependent?)
Choose T_i from B such that 1<= i <= n.
Therefore, T_i = B - a_iT_i

From here I couldn't continue. I have to show that the linear combination above (to the right of the equality) takes elements to the same ith derivative, and derive a contradiction from that?
----------------

The second method involves finding a basis of V. A professor said it was {1, x/1!, x^2/2!, ..., x^n/n!}. But that doesn't make sense to me. I thought the basis would be {0, x, x^2, ..., x^n}.
 
I'm going to assume that P(R) is the vector space of polynomials with real coefficients under regular addition and scalar multiplication. If so, what do you mean by L(V)?
 
If the T's are linearly dependent then there is a linear combination a1*T1 + a2*T2 + ... + an*Tn that gives you zero when applied to any polynomial p(x), right? Ok, then apply that to specific polynomials and evaluate the result at x=0. Say p(x)=x. T1(x)=1, T2(x)=0, ... What does that tell you about a1? Suppose p(x)=x^2. T1(x^2)=2x, T2(x^2)=2, T3(x^2)=0. ... Now set x=0. What does that tell you about a2? Etc. Etc.
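Writing out the first two of those steps (just a sketch of the computation described above, in LaTeX, using the same a_j's):

\[
0 = \Big(\textstyle\sum_{j=1}^{n} a_j T_j\Big)(x)\Big|_{x=0}
  = \big(a_1 \cdot 1 + a_2 \cdot 0 + \cdots + a_n \cdot 0\big)\Big|_{x=0} = a_1,
\]
\[
0 = \Big(\textstyle\sum_{j=1}^{n} a_j T_j\Big)(x^2)\Big|_{x=0}
  = \big(2 a_1 x + 2 a_2 + 0 + \cdots + 0\big)\Big|_{x=0} = 2 a_2,
\]

so a_1 = 0, then a_2 = 0, and the same pattern with p(x) = x^3, x^4, ... handles the remaining coefficients.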
 
foxjwill said:
I'm going to assume that P(R) is the vector space of polynomials with real coefficients under regular addition and scalar multiplication. If so, what do you mean by L(V)?

V = P(R). I think L(V) is the set of linear transformations from V -> V.
 
foxjwill said:
I'm going to assume that P(R) is the vector space of polynomials with real coefficients under regular addition and scalar multiplication. If so, what do you mean by L(V)?

My notes from class say "it is the set of linear operators on V".
 
Dick said:
If the T's are linearly dependent then there is a linear combination a1*T1 + a2*T2 + ... + an*Tn that gives you zero when applied to any polynomial p(x), right? Ok, then apply that to specific polynomials and evaluate the result at x=0. Say p(x)=x. T1(x)=1, T2(x)=0, ... What does that tell you about a1? Suppose p(x)=x^2. T1(x^2)=2x, T2(x^2)=2, T3(x^2)=0. ... Now set x=0. What does that tell you about a2? Etc. Etc.

Dick, you are saying the exact same thing one of my professors told me. However, I don't understand what you mean by "gives you zero when applied to any polynomial p(x)"...


Thanks,


JL
 
They are operators in L(V). If they are linearly dependent then there's a nontrivial linear combination of those operators (coefficients not all zero) that is IDENTICALLY zero, i.e. the combination of those operators applied to anything in the space is zero. That's what linearly dependent means. If you can show any nontrivial combination must be nonzero someplace, then you've shown they are linearly independent.
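In symbols (just restating the definition in the post above): {T_1, ..., T_n} is linearly dependent in L(V) exactly when there are scalars a_1, ..., a_n, not all zero, such that

\[
\big(a_1 T_1 + a_2 T_2 + \cdots + a_n T_n\big)(p) = 0 \quad \text{for every } p \in V,
\]

where the right-hand side means the zero polynomial, not merely zero at a single point.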
 
Dick said:
They are operators in L(V). If they are linearly dependent then there's a nontrivial linear combination of those operators (coefficients not all zero) that is IDENTICALLY zero, i.e. the combination of those operators applied to anything in the space is zero. That's what linearly dependent means. If you can show any nontrivial combination must be nonzero someplace, then you've shown they are linearly independent.

Oh, so p(x) is an element of P(R), where V = P(R)? But how do you know p(x) = x? How do you know the domain equals the image?

Thanks for the help- it takes me awhile to learn these things,


JL
 
S'ok. I don't know p(x)=x. p(x) could be anything. I'm just picking examples for p(x) that I can use to show the various a's must be zero.
 
Dick said:
S'ok. I don't know p(x)=x. p(x) could be anything. I'm just picking examples for p(x) that I can use to show the various a's must be zero.

Oh ok, I see, kinda. But why are we setting x = 0 again? And I found a_1 = 1, a_2 = 2, a_3 = 6, a_4 = 24, ..., a_n = n!. So I am guessing the set of all these values constitutes the independent set? Would this reasoning, along with what you provided, give me enough to write a proof?

JL
 
How did you get those a values?
I could be very wrong, but from Dick's comments I thought you were trying to show all the a's must be identically zero... this would give you a contradiction with your initial assumption that the set of operators is linearly dependent... and actually show they are linearly independent.
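For what it's worth, here is one way the whole argument can be written out (a sketch along the lines of Dick's suggestion, not necessarily the only route): if \(\sum_{j=1}^{n} a_j T_j\) is the zero operator, then for each k with 1 <= k <= n,

\[
0 = \Big(\textstyle\sum_{j=1}^{n} a_j T_j\Big)(x^k)\Big|_{x=0}
  = \sum_{j=1}^{k} a_j \,\frac{k!}{(k-j)!}\, x^{k-j}\Big|_{x=0}
  = a_k \, k!,
\]

since the terms with j > k differentiate x^k to zero and the terms with j < k vanish at x = 0. Hence a_k = 0 for every k, contradicting the assumption that the operators are linearly dependent (i.e. that some a_j is nonzero).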
 
Just a comment on something you said in the first post that I don't believe anyone has caught:
The second method involves finding a basis of V. A professor said it was {1, x/1!, x^2/2!, ..., x^n/n!}. But that doesn't make sense to me. I thought the basis would be {0, x, x^2, ..., x^n}.
You can never have 0 in a basis. 1, yes, but 0, no. Other than that, each vector (i.e., function) in your basis is a constant multiple of the corresponding member of the other basis.
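As a small aside on why that scaled basis is convenient (an illustration only, not something the proof requires): each T_j simply shifts those basis vectors down,

\[
T_j\!\left(\frac{x^k}{k!}\right) = \frac{x^{k-j}}{(k-j)!} \ \text{ for } k \ge j,
\qquad
T_j\!\left(\frac{x^k}{k!}\right) = 0 \ \text{ for } k < j,
\]

so the derivative operators act in a particularly simple way on {1, x/1!, x^2/2!, ...}.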
 
Thanks everyone, I solved this problem.


JL
 