DimKer(T) = 0 <--> T is injective

Thread starter: Mr Davis 97
In summary: nullity(T) = 0 if and only if T is an injective linear transformation.
  • #1
Mr Davis 97
Let T be a linear transformation from V to W. I know how to prove the result that nullity(T) = 0 if and only if T is an injective linear transformation.

Sketch of proof: If nullity(T) = 0, then ker(T) = {0}. So T(x) = T(y) --> T(x) - T(y) = 0 --> T(x-y) = 0 --> x-y = 0 --> x = y, which shows that T is injective. For the other direction, if T is injective, then 0 must be the only element in the kernel: it is always true for linear transformations that T(0) = 0, and since T is injective there is no other element in V that maps to the zero vector in W.
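The equivalence in the proof sketch can be illustrated numerically. Below is a small numpy sketch (the matrices A and B are just examples I chose for illustration): A has independent columns, so its kernel is trivial and it is injective; B has a dependent column, so a nonzero kernel vector produces two distinct inputs with the same image.

```python
import numpy as np

# A: 3x2 matrix with linearly independent columns -> trivial kernel, injective
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
# B: 3x2 matrix whose second column is twice the first -> nontrivial kernel
B = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [1.0, 2.0]])

def nullity(M):
    # rank-nullity: dim ker M = (number of columns) - rank(M)
    return M.shape[1] - np.linalg.matrix_rank(M)

print(nullity(A))  # 0 -> injective
print(nullity(B))  # 1 -> not injective

# For B, two different inputs collide, exactly as in the proof:
# x - y = (2, -1) lies in ker B, so B(x) = B(y) although x != y
x = np.array([2.0, 0.0])
y = np.array([0.0, 1.0])
print(np.allclose(B @ x, B @ y))  # True
print(np.allclose(A @ x, A @ y))  # False: A keeps distinct inputs distinct
```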

So there's the proof, but I still don't intuitively understand why the kernel containing only the zero vector means that T is injective, and vice versa. In contrast, the relation between the image of T and the condition of being surjective is easy to see, since in order to map onto all of the elements of W the image of T must have the same dimension as W. This can intuitively be seen with a diagram of the mapping from V to W, for example. I can't really imagine a diagram that plainly shows the injective condition.
 
  • #2
Hi,

Mr Davis 97 said:
I still don't intuitively understand why the kernel only containing the zero vector means that T is injective

Because if you have at least two different elements in the kernel, say ##0## and ##v## (with ##v \neq 0##), then ##T(0)=T(v)=0## and ##T## is not injective. By contraposition (##p\rightarrow q## is equivalent to ##\neg q \rightarrow \neg p##), if ##T## is injective then the kernel contains at most one element, namely ##0##...

Mr Davis 97 said:
I can't really imagine a diagram that plainly shows the injective condition.

This is a diagram with an arrow from each element of ##V## to an element of ##W##, with no two arrows landing on the same point, and with ##0_{V}## sent to ##0_{W}##.
 
  • #3
The proof works in both directions.

##\ker T = \{0\} \wedge T(x)=T(y) \Longrightarrow x=y## by linearity of ##T## the way you said.
Your formulation of the other direction is a bit more difficult to understand, because I couldn't clearly see the difference between cause and effect. So let's assume ##T## is injective and ##T(x)=0##. Then ##T(x)=0=T(0)## by linearity of ##T## and ##x=0## by injectivity, i.e. ##\ker T = \{0\} .\,##

Now for the general understanding of injectivity. I always found it easier than surjectivity, because an injective function can be considered as an embedding. Not necessarily without some changes, like for instance a rotation, but nevertheless an embedding. So all "subs" are injective mappings: subgroups, subspaces, submodules, subalgebras, etc. Thus surjectivity (onto) means hitting all targets, possibly more than once, and injectivity (into) means keeping all elements and not destroying them by sending two onto the same target. They must remain distinguishable.

A useful image you may keep in mind is ##U \stackrel{\iota}{\hookrightarrow} V \stackrel{\pi}{\twoheadrightarrow} W## or $$\{0\} \rightarrow U \rightarrow V \rightarrow W \rightarrow \{0\}$$
Here you have an injective embedding ##\iota : U \rightarrow V ## and a surjective projection ##\pi : V \rightarrow W##.
In the case of vector spaces and if ##\; U = im\,\iota = \ker \pi \; ## we get ##\; V \cong U \oplus W \cong U \oplus V/U##.

And we find the dimension formula again:
If ##\pi : V \rightarrow W## then ##\dim V = \dim U + \dim W = \dim \ker \pi + \dim im\, \pi \,.##

If I could reasonably draw on the computer, I would draw some pictures. Maybe you can do it for yourself. Take a straight line and embed it in a plane with ##(x,y)##-coordinates, e.g. the line ##x=y##. This is already the embedding, the injective mapping (into) ##\mathbb{R} \rightarrow \mathbb{R}^2##. As a projection (surjective, onto) you can take the mapping of the whole plane onto the ##x##-axis, same as the projection in the cinema.
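This embedding/projection picture can be sketched with two small matrices (a toy illustration; the matrices below are my own choices for the example): ##\iota## sends ##x \mapsto (x,x)##, ##\pi## sends ##(x,y) \mapsto x##, and the dimension formula falls out of the ranks.

```python
import numpy as np

# iota embeds the line R into the plane as the diagonal x = y (injective, "into")
iota = np.array([[1.0],
                 [1.0]])          # 2x1 matrix: x -> (x, x)
# pi projects the plane onto the x-axis (surjective, "onto")
pi = np.array([[1.0, 0.0]])       # 1x2 matrix: (x, y) -> x

rank = np.linalg.matrix_rank
# iota is injective: nullity = (columns) - rank = 1 - 1 = 0
print(1 - rank(iota))             # 0
# pi is surjective: its rank equals the dimension of the target R
print(rank(pi))                   # 1
# dimension formula for pi: dim V = dim ker pi + dim im pi, i.e. 2 = 1 + 1
print(2 == (2 - rank(pi)) + rank(pi))  # True
```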
 
  • #4
Ssnow said:
Hi, I remind you of the condition that ##V## and ##W## must be finite dimensional spaces; under this assumption the theorem is true.
What you've said is wrong.

##\ker T = \{0\} \Longleftrightarrow T \textrm{ injective }## is also true for infinite dimensional vector spaces.
 
  • #5
fresh_42 said:
What you've said is wrong.

##\ker T = \{0\} \Longleftrightarrow T \textrm{ injective }## is also true for infinite dimensional vector spaces.

Yes, thank you. I meant that for the nullity theorem at least ##V## must be finite dimensional, in order to have the validity of the Grassmann formula that the OP mentions in their post ...
 
  • #6
fresh_42 said:
The proof works in both directions. [...] As a projection (surjective, onto) you can take the mapping of the whole plane onto the ##x##-axis, same as the projection in the cinema.
That clears things up about injectivity. However, I am still confused as to exactly how the kernel relates to the linear transformation being injective. For example, if the kernel is not just zero, then obviously the linear transformation is not injective since more than one element maps to 0. However, why does the kernel being only zero necessarily imply that the linear transformation is injective, from an intuitive standpoint?
 
  • #7
Intuitively, the kernel contains all elements that are "destroyed" by the mapping ##T##. Linearity allows us to move elements around by addition, so the other cosets ##x+\ker T = \{x+ t\,\vert \,t \in \ker T\}## are also mapped onto a single point. Not ##0## anymore, since we moved them by ##x##, but all onto ##T(x)##. So the dimension of the kernel tells us directly which dimensions are "destroyed". If the kernel is ##\{0\}##, then no dimensions get lost: all elements are included / embedded / mapped into the codomain and remain distinguishable, which means injectivity. It's no accident that it is the same word as in "injection".

So ##\ker T = \{0\}## means, via the moving property called linearity, that no two distinct points map onto the same image point, which is the definition of injectivity. If you restrict an injective linear mapping (= monomorphism, = embedding, = inclusion) to its image, you get an isomorphism. Not necessarily the identity, since we could mirror it on some axis, or rotate it, but an isomorphic image without lost information (points).
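The "whole coset lands on one point" observation checks out numerically. Here is a minimal sketch (the matrix ##T## is my own example, a projection that kills the ##y##-axis): adding any kernel element to ##x## does not change ##T(x)##.

```python
import numpy as np

# T collapses the plane onto the x-axis: ker T is the y-axis (dimension 1)
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])

x = np.array([3.0, 5.0])
# elements of ker T: anything of the form (0, t)
for t in [np.array([0.0, 1.0]), np.array([0.0, -7.0]), np.array([0.0, 2.5])]:
    # the whole coset x + ker T maps onto the single point T(x)
    assert np.allclose(T @ (x + t), T @ x)
print(T @ x)  # [3. 0.]
```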

I don't know whether this helps you. If not, maybe you could narrow your question down a bit further.
 

What does it mean when DimKer(T) = 0?

When DimKer(T) = 0, it means that the dimension of the kernel of the linear transformation T is equal to 0. In other words, the kernel of T only contains the zero vector, indicating that T is a one-to-one (injective) function.

What is a linear transformation?

A linear transformation is a mathematical function that maps vectors from one vector space to another in a way that preserves vector addition and scalar multiplication. In other words, the image of a linear transformation is a vector space that is structurally similar to the original vector space.

What is the kernel of a linear transformation?

The kernel of a linear transformation is the set of all vectors in the domain that are mapped to the zero vector in the codomain. In other words, it is the set of all input vectors that result in an output of 0.
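In practice, the kernel of a matrix can be computed from its singular value decomposition: right singular vectors whose singular value is (numerically) zero span the kernel. A sketch with numpy, using example matrices of my own choosing:

```python
import numpy as np

def kernel_basis(M, tol=1e-10):
    """Return an orthonormal basis of ker M (as rows), via the SVD."""
    _, s, vt = np.linalg.svd(M)
    # the first `rank` rows of vt span the row space; the rest span ker M
    rank = int(np.sum(s > tol))
    return vt[rank:]

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so nullity = 3 - 1 = 2
print(kernel_basis(M).shape[0])    # 2 -> M is not injective

I = np.eye(3)
print(kernel_basis(I).shape[0])    # 0 -> the identity is injective
```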

How is injectivity related to the kernel of a linear transformation?

If a linear transformation T is injective, it means that every vector in the domain is mapped to a unique vector in the codomain. This also means that the kernel of T is {0}, since no nonzero vector in the domain is mapped to the zero vector.

Why is DimKer(T) = 0 a useful property for linear transformations?

DimKer(T) = 0 is a useful property because it guarantees that the linear transformation T is one-to-one (injective). This means that there are no inputs that result in the same output, making it easier to solve for the inverse of T and perform other mathematical operations.
