Quick question about Linear Transformations from a space to itself


Discussion Overview

The discussion revolves around the properties of linear transformations from a vector space W to itself, specifically addressing the implications for injectivity and surjectivity. Participants explore foundational concepts in linear algebra, including the Rank-Nullity theorem and examples of specific transformations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether a linear transformation T: W -> W implies anything about its injectivity or surjectivity, expressing confusion about the image of T if it is not surjective.
  • Another participant states that without additional information, little can be concluded about T, but notes that if the dimension of W is finite, then T is one-to-one if and only if it is onto.
  • A different participant suggests that if the scalar field is the complex numbers, there exists at least one subspace of dimension one that maps into itself, referencing a previous comment.
  • One participant introduces specific transformations T1, T2, T3, and T4 as examples to illustrate the concepts of image dimension and properties like injectivity and surjectivity.
  • Another participant acknowledges the clarification provided by the previous responses, reflecting on the specificity of the image of T1 being the zero vector, and considers the generality of stating T: W -> W.

Areas of Agreement / Disagreement

Participants generally agree that more information is needed to determine the properties of T, but multiple views remain regarding the implications of specific examples and the generality of the transformation notation.

Contextual Notes

Participants reference the Rank-Nullity theorem and specific transformations without resolving the implications of these examples fully. There is an ongoing exploration of how the properties of transformations relate to their definitions and the dimensions of the spaces involved.

Fractal20
Hi, I have to take a placement exam in linear algebra this fall, so I have been studying some past exams. This is a really basic question. If we have a linear transformation T: W -> W, does this imply nothing about the injectivity or surjectivity of the transformation? I assume that it does not, but I get confused because if it is not surjective, then it seems like the image of T is not W but some subspace of W.

To phrase it in a different way, does T: W -> W only say that T maps vectors in W to other vectors in W and nothing about what the image in W is? Thanks!
 
Fractal20 said:
If we have a linear transformation T: W -> W, does this imply nothing about the injectivity or surjectivity of the transformation?


Yes: without more information, you can say almost nothing about T. But you can always be sure that if [itex]\,\dim W<\infty\,[/itex], then T is 1-1 if and only if T is onto.
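The finiteness hypothesis here is essential. A minimal sketch (not from the thread) of an infinite-dimensional counterexample, using the space of polynomials represented as coefficient lists:

```python
# On the infinite-dimensional space of polynomials (coefficient lists,
# constant term first), a linear map can be 1-1 without being onto, and
# vice versa -- exactly what cannot happen when dim W is finite.

def mul_by_x(coeffs):
    """Multiply by x: linear and injective, but NOT surjective --
    no polynomial maps to the constant polynomial 1."""
    return [0] + coeffs

def derivative(coeffs):
    """d/dx: linear and surjective, but NOT injective --
    every constant polynomial maps to the zero polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

p = [3, 2, 1]                # 3 + 2x + x^2
print(mul_by_x(p))           # [0, 3, 2, 1], i.e. 3x + 2x^2 + x^3
print(derivative(p))         # [2, 2], i.e. 2 + 2x
print(derivative([5]))       # [], the zero polynomial
```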
 
Maybe one can say something if the scalar field is the complex numbers: there is at least one subspace of dimension one that maps into itself, which is actually another version of micromass's comment.
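For reference, the standard argument behind that claim, assuming [itex]\dim W = n \ge 1[/itex] with scalar field [itex]\mathbb{C}[/itex]:

```latex
% The characteristic polynomial p(\lambda) = \det(T - \lambda I) has
% degree n \ge 1, so by the fundamental theorem of algebra it has a
% root \lambda_0 \in \mathbb{C}. Then T - \lambda_0 I is singular,
% so some v \ne 0 satisfies
T v = \lambda_0 v ,
% and the one-dimensional subspace \mathrm{span}\{v\} maps into itself:
T(\mathrm{span}\{v\}) \subseteq \mathrm{span}\{v\} .
```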
 
Fractal20 said:
If we have a linear transformation T: W -> W, does this imply nothing about the injectivity or surjectivity of the transformation?

Have you seen the Rank-Nullity theorem? Consider these maps [itex]T_i : \mathbb{R}^4 \to \mathbb{R}^4[/itex], [itex]i = 1, 2, 3, 4[/itex]:

T₁(x₁,x₂,x₃,x₄) = (0,0,0,0). What is the dimension of the image?

T₂(x₁,x₂,x₃,x₄) = (x₁,x₁,x₁,x₁). Is it 1-1?

T₃(x₁,x₂,x₃,x₄) = (x₁,x₂,0,0). Onto?

T₄(x₁,x₂,x₃,x₄) = (x₁,x₂,x₃,0). Onto?
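A quick numerical check of those four examples (a sketch, not from the thread; the rank function is a minimal exact Gaussian elimination rather than a library call):

```python
from fractions import Fraction

def rank(matrix):
    """Rank of a matrix via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue                     # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]  # move pivot row into place
        for i in range(len(m)):
            if i != r and m[i][c] != 0:  # clear the column elsewhere
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Matrices of T1..T4 with respect to the standard basis of R^4
T1 = [[0, 0, 0, 0]] * 4                                        # everything -> 0
T2 = [[1, 0, 0, 0]] * 4                                        # (x1,x1,x1,x1)
T3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]  # (x1,x2,0,0)
T4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]  # (x1,x2,x3,0)

for name, T in [("T1", T1), ("T2", T2), ("T3", T3), ("T4", T4)]:
    r = rank(T)
    # Rank-Nullity: dim ker + dim im = 4; and since the matrix is square,
    # 1-1 <=> onto <=> rank 4. None of these reaches rank 4.
    print(name, "rank =", r, "nullity =", 4 - r)
```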
 
Bacle2 said:
Have you seen the Rank-Nullity theorem? Consider this: [the four example maps above]


Thanks, that makes it obvious. I guess I assumed as much but felt unsure about the specificity of the image, i.e. for T₁ everything is really mapped to 0. So if W is the vector space spanned by x₁, ..., x₄, then in some ways saying T₁: W -> W feels misleading, since it is really T₁: W -> {0}. But I understand that it is about generality.
 
