Quick question about Linear Transformations from a space to itself

In summary, the notation T: W -> W alone says nothing about injectivity or surjectivity; but by the Rank-Nullity theorem, when W is finite-dimensional, T is injective if and only if it is surjective.
  • #1
Fractal20
Hi, I have to take a placement exam in linear algebra this fall so I have been studying some past exams. This is a real basic question. If we have a linear transformation T:W -> W does this imply nothing about the injectivity or surjectivity of the transformation? I assume that it does not, but I get confused because if it is not surjective, then it seems like the image of T is not W but some subspace of W.

To phrase it in a different way, does T: W -> W only say that T maps vectors in W to other vectors in W and nothing about what the image in W is? Thanks!
 
  • #2
Fractal20 said:
If we have a linear transformation T:W -> W does this imply nothing about the injectivity or surjectivity of the transformation?


Yes, you can say almost nothing about T without more information, but you can always be sure that if [itex]\dim W<\infty[/itex], then T is 1-1 iff T is onto.
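For reference, this is the Rank-Nullity theorem at work: for finite-dimensional W,

[tex]\dim W = \dim(\ker T) + \dim(\operatorname{im} T),[/tex]

so T is 1-1 (that is, [itex]\ker T = \{0\}[/itex]) exactly when [itex]\dim(\operatorname{im} T) = \dim W[/itex], i.e. exactly when T is onto.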
 
  • #3
Maybe one can say something more if the scalar field is the complex numbers: when W is finite-dimensional and nonzero, there is at least one subspace of dimension one that T maps into itself. Like micromass's comment, this needs W to be finite-dimensional.
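To spell that reasoning out: over [itex]\mathbb{C}[/itex], if [itex]0 < \dim W < \infty[/itex], the characteristic polynomial of T has a root [itex]\lambda[/itex], so there is a nonzero vector v with

[tex]Tv = \lambda v, \qquad T(\operatorname{span}\{v\}) \subseteq \operatorname{span}\{v\},[/tex]

i.e. the line spanned by an eigenvector is a one-dimensional subspace that T maps into itself.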
 
  • #4
Fractal20 said:
If we have a linear transformation T:W -> W does this imply nothing about the injectivity or surjectivity of the transformation?

Have you seen the Rank-Nullity theorem? Consider these maps [itex]T_i:\mathbb{R}^4\to\mathbb{R}^4[/itex] for [itex]i=1,2,3,4[/itex]:

[itex]T_1(x_1,x_2,x_3,x_4)=(0,0,0,0)[/itex]. What is the dimension of the image?

[itex]T_2(x_1,x_2,x_3,x_4)=(x_1,x_1,x_1,x_1)[/itex]. Is it 1-1?

[itex]T_3(x_1,x_2,x_3,x_4)=(x_1,x_2,0,0)[/itex]. Onto?

[itex]T_4(x_1,x_2,x_3,x_4)=(x_1,x_2,x_3,0)[/itex]. Onto?
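Not from the thread, but as a quick sanity check: a small NumPy sketch (the matrix names are just illustrative) that builds the standard matrices of these four maps and verifies rank + nullity = 4, which also answers the 1-1/onto questions.

[code]
import numpy as np

# Standard matrices of the four example maps on R^4.
T1 = np.zeros((4, 4))                            # T1(x) = (0, 0, 0, 0)
T2 = np.array([[1, 0, 0, 0]] * 4, dtype=float)   # T2(x) = (x1, x1, x1, x1)
T3 = np.diag([1.0, 1.0, 0.0, 0.0])               # T3(x) = (x1, x2, 0, 0)
T4 = np.diag([1.0, 1.0, 1.0, 0.0])               # T4(x) = (x1, x2, x3, 0)

for name, T in [("T1", T1), ("T2", T2), ("T3", T3), ("T4", T4)]:
    rank = np.linalg.matrix_rank(T)
    nullity = 4 - rank                           # Rank-Nullity: rank + nullity = dim R^4
    print(f"{name}: rank={rank}  nullity={nullity}  "
          f"1-1: {nullity == 0}  onto: {rank == 4}")
[/code]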
 
  • #5
Bacle2 said:
Have you seen the Rank-Nullity theorem? Consider these maps [itex]T_i:\mathbb{R}^4\to\mathbb{R}^4[/itex] for [itex]i=1,2,3,4[/itex]...


Thanks, that makes it obvious. I guess I assumed as much, but I felt unsure about how specific the image is. E.g. for T1 everything really maps to 0, so if W is the space with coordinates x1, ..., x4, then in some ways saying T1: W -> W feels misleading, since really T1: W -> {0}. But I understand that the notation is about generality.
 

1. What is a linear transformation?

A linear transformation is a function that maps one vector space to another while preserving the linear structure of the original space. This means the transformation respects vector addition and scalar multiplication.
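Concretely, "addition and scaling" refers to the two linearity conditions: for all vectors u, v and every scalar c,

[tex]T(u + v) = T(u) + T(v), \qquad T(c\,u) = c\,T(u).[/tex]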

2. What does "from a space to itself" mean?

It refers to a linear transformation T: W -> W whose domain and codomain are the same vector space, so the transformed vectors belong to the same space they started in. On its own, this does not say the map is onto: the image may be a proper subspace of W.

3. How is a linear transformation represented?

A linear transformation can be represented in various ways, such as by a matrix, by a system of equations, or in function notation. The most common representation is a matrix with respect to chosen bases, where each column is the image of a basis vector of the domain.
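As a small worked example of that convention (a 90° rotation of the plane): the columns of the matrix are the images of the standard basis vectors,

[tex]T(e_1) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},\quad T(e_2) = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \;\Longrightarrow\; [T] = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.[/tex]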

4. What is the importance of linear transformations?

Linear transformations are important in various fields of science and mathematics, as they help to solve complex problems and simplify calculations. They are also used in computer graphics, engineering, and physics, among others.

5. What are some common examples of linear transformations?

Some common examples of linear transformations include rotations, reflections, and scalings in Euclidean space. Other examples include projections, shears, and stretches. These transformations can be composed to build more complex ones.
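For instance, in [itex]\mathbb{R}^2[/itex] the standard matrices of a rotation by [itex]\theta[/itex], a reflection across the x-axis, a scaling by k, and a horizontal shear by s are, respectively,

[tex]\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},\quad \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\quad \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix},\quad \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.[/tex]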
