Homework Statement
Suppose that T : W -> W is a linear transformation such that T^{m+1} = 0 but T^m ≠ 0. Suppose that {w_1, ..., w_p} is a basis for T^m(W) and T^m(u_k) = w_k for 1 ≤ k ≤ p. Prove that {T^i(u_k) : 0 ≤ i ≤ m, 1 ≤ k ≤ p} is a linearly independent set.
Homework Equations
The Attempt at a Solution
By the definition of a basis, w_1, ..., w_p are linearly independent. Now suppose T^{m-1}(u_k) is not linearly independent from the w's. Then it can be written as a linear combination of the w's, which is equivalent to saying T^{m-1}(u_k) = c_1 T^m(u_1) + ... + c_p T^m(u_p). Applying T to both sides, the left side becomes T^m(u_k) and the right side becomes a linear combination of T^{m+1}(u_i) terms, which are all 0 since T^{m+1} = 0. But then T^m(u_k) = w_k = 0, which is a contradiction, since a basis vector w_k cannot be 0.
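Written out as displayed equations (just a sketch of the step above; the coefficients c_1, ..., c_p are hypothetical), the base case is:

```latex
% Base case: assume T^{m-1}(u_k) lies in the span of w_1, ..., w_p, i.e.
\[
  T^{m-1}(u_k) = c_1 T^{m}(u_1) + \cdots + c_p T^{m}(u_p).
\]
% Applying T to both sides and using T^{m+1} = 0:
\[
  w_k = T^{m}(u_k) = c_1 T^{m+1}(u_1) + \cdots + c_p T^{m+1}(u_p) = 0,
\]
% contradicting the fact that the basis vector w_k is nonzero.
```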
From here, it seems like the same process can be applied backwards, but I am not sure how it can be rigorously done in an elegant manner. I think I can use induction: given that T^q(u_k) is linearly independent from {T^r(u_j) : q+1 ≤ r ≤ m, 1 ≤ j ≤ p}, then T^{q-1}(u_k) is linearly independent for all k, for if it weren't, then ... the same argument as the base case applies, but with repeated left multiplications by T to keep creating 0 terms on the right, eventually yielding the same contradiction of having w_k = 0 (see the sketch below). This seems bulky and I am not confident it works. Also, I'm not used to using induction in a downward direction and don't know if I would need an additional bottom base case for T^1 specifically.
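To make the inductive step concrete, here is a sketch (again the coefficients c_{rj} are hypothetical, and I'm not yet handling possible dependences among the T^{q-1}(u_j) themselves):

```latex
% Inductive step: suppose, contrary to the claim, that
\[
  T^{q-1}(u_k) = \sum_{r=q}^{m} \sum_{j=1}^{p} c_{rj}\, T^{r}(u_j).
\]
% Apply T^{m-q+1} to both sides: the left side becomes T^m(u_k) = w_k, while
% each right-hand term becomes T^{m+(r-q)+1}(u_j) with r >= q, whose power is
% at least m+1, so it vanishes because T^{m+1} = 0. Hence
\[
  w_k = T^{m}(u_k) = 0,
\]
% again contradicting that w_k is a basis vector.
```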
Is this weird attempt at induction valid? Do you have any more elegant approaches? Thanks!
For background, I am starting grad school in the fall and have to take a placement exam in linear algebra and vector calculus. This is a question from one of the earlier placement exams.