Question about Linear Operator's Image and Kernel

canis89

Homework Statement



If ##T:V\rightarrow V## is linear, then ##\operatorname{Ker}(T^2)=\operatorname{Ker}(T)## implies ##\operatorname{Im}(T^2)=\operatorname{Im}(T)##.

Homework Equations



Let ##T:V\rightarrow V## be a linear operator such that for all ##x\in V##,

$$T^2(x)=0 \;\Rightarrow\; T(x)=0 \qquad \big(\operatorname{Ker}(T^2)=\operatorname{Ker}(T)\big).$$

Prove that for every ##x\in V## there exists ##u\in V## such that ##T(x)=T^2(u)##, i.e. ##\operatorname{Im}(T^2)=\operatorname{Im}(T)##.

The Attempt at a Solution



Any clue on where I should start? I've been stuck on this problem for the past two days. The trouble is that I don't know how to use the assumption ##\operatorname{Ker}(T)=\operatorname{Ker}(T^2)##.
 
Try to prove the inclusions ##\operatorname{Im}(T) \subseteq \operatorname{Im}(T^2)## and ##\operatorname{Im}(T) \supseteq \operatorname{Im}(T^2)## separately. One inclusion should be easy and the other will require slightly more work.

There is a theorem from which this result is immediate, but you might not have seen it before...
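To spell out the easy inclusion (a sketch, using nothing beyond the definition of ##T^2##):

$$y \in \operatorname{Im}(T^2) \;\Longrightarrow\; y = T^2(u) = T\big(T(u)\big) \text{ for some } u \in V \;\Longrightarrow\; y \in \operatorname{Im}(T).$$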
 
I've managed to show that ##\operatorname{Im}(T^2)\subseteq \operatorname{Im}(T)##. The other direction is my problem. But it has just struck me that maybe I can use the rank–nullity theorem: together with the hypothesis, it implies ##\dim\operatorname{Im}(T^2)=\dim\operatorname{Im}(T)##.

Furthermore, since ##\operatorname{Im}(T^2)\subseteq \operatorname{Im}(T)##, it suffices to show that a subspace with the same dimension as the space containing it must equal that space. This leads to the question: does a basis of the subspace also span the larger space?
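To make the dimension step explicit (a sketch, assuming ##V## is finite-dimensional, which is confirmed below):

$$\dim\operatorname{Im}(T^2) = \dim V - \dim\operatorname{Ker}(T^2) = \dim V - \dim\operatorname{Ker}(T) = \dim\operatorname{Im}(T),$$

where the outer equalities are rank–nullity applied to ##T^2## and ##T##, and the middle one is the hypothesis.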
 
I have found the answer to the previous question. Let ##A## be a basis of ##\operatorname{Im}(T^2)##. If there exists ##v\in \operatorname{Im}(T)\setminus\operatorname{Span}(A)##, then ##v## is linearly independent of ##A##, so ##\operatorname{Im}(T)## contains ##\dim\operatorname{Im}(T^2)+1## linearly independent vectors, one more than its own dimension. This is a contradiction. Hence ##\operatorname{Im}(T)=\operatorname{Span}(A)=\operatorname{Im}(T^2)##. Please let me know if I've made any mistakes.
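So, putting the pieces together (still assuming ##\dim V<\infty##):

$$\operatorname{Im}(T^2) \subseteq \operatorname{Im}(T) \quad\text{and}\quad \dim\operatorname{Im}(T^2) = \dim\operatorname{Im}(T) \;\Longrightarrow\; \operatorname{Im}(T^2) = \operatorname{Im}(T).$$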
 
Yes, I forgot to mention that ##V## is finite-dimensional. Oh, I'm not familiar with the theorem, as I'm also not familiar with algebraic structures. Do you think the result can also be proven in the infinite-dimensional case?
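On the infinite-dimensional question: the dimension argument above really needs ##\dim V<\infty##, and the implication can in fact fail without it. A standard example (a sketch, taking ##V## to be the space of all real sequences) is the right-shift operator

$$S(x_1, x_2, x_3, \dots) = (0, x_1, x_2, \dots),$$

which is injective, so ##\operatorname{Ker}(S^2)=\operatorname{Ker}(S)=\{0\}##, while ##\operatorname{Im}(S)## consists of sequences whose first entry is ##0## and ##\operatorname{Im}(S^2)## of sequences whose first two entries are ##0##, so ##\operatorname{Im}(S^2)\subsetneq \operatorname{Im}(S)##.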
 