# Subspace Question

1. May 21, 2014

### noelo2014

1. The problem statement, all variables and given/known data

I've been stuck on this problem for a while, I actually have the answer (found it in my book), but I'm having trouble getting my head around the concept.

The question is:
Given a linear transformation T:V->W, prove that ker(T) = {v ∈ V : T(v) = 0_W}

is a subspace of V

2. Relevant equations

3. The attempt at a solution

From what I understand the Kernel is defined as the set of vectors in V that map onto the zero vector in W when transformed under T

Also, a subspace W of a vector space V is defined as a set of one or more vectors from V that, when added together or multiplied by a scalar, produce another vector "in" W. (I'm not quoting my book exactly.)

Ok, the book gives the proof for this as something like:

To show that ker(T) is a subspace we must show that it contains at least one vector and is closed under addition and scalar multiplication.

Since T(0_V) = 0_W, ker(T) contains 0_V. Then for u, v in ker(T) and a scalar c:

T(u+v) = T(u) + T(v) = 0 + 0 = 0
and
T(cu) = c·T(u) = c·0 = 0
∴ u+v and cu are in ker(T)

which proves ker(T) is a subspace of V (although I don't understand this proof)

What I'm trying to understand is : If ker(T) refers to a set of vectors in V, isn't that enough proof in itself? Isn't it just like saying "Prove that a subset of V is a subspace of V?" In fact how would I prove that? Still confused about subspaces.

2. May 21, 2014

### CAF123

No, you would need to show that the vectors in ker T satisfy the three criteria listed in the solution. A subset is not the same as a subspace.

3. May 21, 2014

### Fredrik

Staff Emeritus
As CAF123 said, a subset isn't necessarily a subspace. A subspace of V is a subset of V that's also a vector space (with the addition and scalar multiplication operations inherited in the obvious way from the vector space). So the given subset is a subspace if and only if it satisfies the vector space axioms. But some of them are trivially satisfied, because the addition and scalar multiplication operations are inherited from V. For example, it's obvious that x+y=y+x for all x,y in the subset, since x+y=y+x for all x,y in V. If you examine the definition of a vector space, you should see that all you have to check to verify that a given subset S (with addition and scalar multiplication inherited from V) is a subspace of V is that S is non-empty and closed under addition and scalar multiplication.

In your case, it's pretty obvious that the subset (i.e. ker T) contains the zero vector of V, and therefore isn't empty. So all you need to do is to verify that ker T is closed under addition and scalar multiplication. In other words, what you want to prove is that for all x,y in ker T, and all real numbers a, x+y is in ker T and ax is in ker T. When the statement that you need to prove is a "for all" statement, it's usually a good idea to start the proof with a "let ___ be arbitrary" statement. In this case: "Let x and y be arbitrary elements of ker T. Let a be an arbitrary real number." Now you just need to prove that x+y and ax are in ker T.

Last edited: May 21, 2014
4. May 21, 2014

### noelo2014

Hi, Fredrik,

I still don't get it.

Thought I had it there though

5. May 21, 2014

### noelo2014

I'm going to get really basic here because I have to understand this.

From what I understand: the kernel of a linear transformation T, from a vector space V to another vector space W, is:

A set of vectors

which all have the property that:
when "put through" T, they will come out at the other end, in W, as the zero vector. This doesn't mean an element in V needs to be the zero vector, in fact it could be any vector if the transformation is the *zero transformation*. Or it could be a few vectors.

Basically ker(T) could be one of three things:
1. The zero vector in V
2. Every vector in V
3. The zero vector and 1 or more vectors in V (but limited to a finite amount)

Note that these are *all* subsets of V.

Since this is a general proof just involving letters and not numbers, couldn't the question be rephrased from:

Given a linear transformation T:V->W, prove that ker(T) = {v ∈ V : T(v) = 0_W} is a subspace of V

to

Given a linear transformation T:V->W, prove that *A SUBSET OF V*, namely {v ∈ V : T(v) = 0_W}, is a subspace of V

Since we know nothing about ker(T) except that it's a subset of V, aren't these two questions then logically equivalent?

Then take 2 arbitrary vectors from V:
[v1,v2,v3...vn] and [u1,u2,u3....un]

, add them together, and see if they're in V

so now I have [v1+u1,v2+u2,v3+u3......vn+un]

is this in V?

, then take 1 arbitrary vector from V:
[v1,v2,v3...vn] and multiply this by a scalar c giving [cv1,cv2,cv3....cvn]
and checking to see if this is in V...

is this in V?

Last edited: May 21, 2014
6. May 21, 2014

### micromass

Staff Emeritus
Right.

No. 3 is incorrect. You are probably working with $\mathbb{R}$-vector spaces. In that case, the kernel can never contain only finitely many vectors, unless it consists of the zero vector alone.

In fact, the kernel can consist of infinitely many elements. For example, take $T:\mathbb{R}^2\rightarrow \mathbb{R}^2$ such that $T(x,y) = (x,0)$. This is a linear transformation, and its kernel is all vectors of the form $(0,y)$. So $(0,1)$ and $(0,3)$ are in the kernel. You see that the kernel is infinite, but not every vector in $\mathbb{R}^2$ is in the kernel.
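To make this example concrete, here's a small numerical sketch of the projection $T(x,y) = (x,0)$ (the function and variable names below are just for illustration), checking that sums and scalar multiples of kernel vectors stay in the kernel:

```python
# Projection T(x, y) = (x, 0) on R^2; its kernel is the y-axis {(0, y)}.
def T(v):
    x, y = v
    return (x, 0.0)

def in_kernel(v):
    return T(v) == (0.0, 0.0)

u = (0.0, 1.0)   # in the kernel
w = (0.0, 3.0)   # in the kernel

# Closure under addition and scalar multiplication:
s = (u[0] + w[0], u[1] + w[1])      # u + w = (0, 4)
m = (5.0 * u[0], 5.0 * u[1])        # 5u = (0, 5)
print(in_kernel(s), in_kernel(m))   # True True

# A vector outside the kernel, e.g. (1, 0), maps to itself, not to 0:
print(in_kernel((1.0, 0.0)))        # False
```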

Not true. We can have many more subsets in $V$.

No, not every subset is a subspace.

That the kernel is a subset is the easy part. We must still show that it is a subspace. A subset is called a subspace if three things are true:
1) It contains the zero vector
2) It is closed under sums, so if two elements are in the subspace, then so is their sum
3) It is closed under scalar multiplication, so if an element is in the subspace, then so is any scalar multiple of it

Arbitrary vectors in $V$ do not necessarily have this form. And we don't need to take arbitrary vectors from $V$, but we need to take arbitrary vectors from the kernel and see if their sum is in the kernel.

7. May 21, 2014

### Fredrik

Staff Emeritus
Both of these statements are a little weird, but since $\operatorname{ker} T=\{v\in V:Tv=0_W\}$, the following two statements are obviously equivalent:

1. ker T is a subspace of V.
2. $\{v\in V:Tv=0_W\}$ is a subspace of V.

Why not just call them v and u? There's no need to mention their components.

This follows immediately from the assumption that V is a vector space (and the definition of "vector space").

This too follows immediately from the assumption that V is a vector space. So you don't actually prove anything this way. However, if you start out saying that your arbitrary u and v are elements of ker T, rather than V, then you may find something more interesting.

8. May 22, 2014

### noelo2014

I'll do the vector addition part first:

Basically I need to show that two vectors v and u (both elements of ker(T)), when added together, produce another vector (which we'll call w), and that w is itself always an element of ker(T).

so..

T(u) = 0W (property of u being in the kernel)
T(v) = 0W (property of v being in the kernel)

hence

T(u)+T(v)=0W (because 0W+0W=0W)
now
T(u+v)=T(u)+T(v) (this is a property of a linear transformation so it's assumed to be true)
therefore
T(u+v)=0W (substituting 0W in place of T(u)+T(v))
therefore
(u+v) ε ker(T)

And for multiplication

T(u)=0W (property of u being in the kernel)
therefore
k.T(u)=0W (because the zero vector multiplied by anything will always give zero)
now
T(k.u)=k.T(u) (property of linear transformations)
therefore
T(k.u)=0W
therefore k.u ε ker(T)
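The same two calculations can be checked numerically for a concrete linear map. Here's a sketch using a map of my own choosing (not from the thread), the matrix [[1, -1], [1, -1]] on R^2, whose kernel is the line {(t, t)}:

```python
# A concrete linear map on R^2 whose kernel is the line {(t, t)}.
def T(v):
    x, y = v
    return (x - y, x - y)   # matrix [[1, -1], [1, -1]] applied to (x, y)

u = (2.0, 2.0)   # in ker(T), since T(u) = (0, 0)
v = (5.0, 5.0)   # in ker(T), since T(v) = (0, 0)
k = 7.0

# Closure under addition: T(u + v) = T(u) + T(v) = 0 + 0 = 0
uv = (u[0] + v[0], u[1] + v[1])
print(T(uv))     # (0.0, 0.0), so u + v is in ker(T)

# Closure under scalar multiplication: T(k*u) = k*T(u) = k*0 = 0
ku = (k * u[0], k * u[1])
print(T(ku))     # (0.0, 0.0), so k*u is in ker(T)
```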

I was under the impression that a subspace of V was something that had to "stay inside" V when in fact it's something that has to stay inside itself.

Thanks for the help guys.

9. May 22, 2014

### Fredrik

Staff Emeritus
Looks good. You can write the calculations like this if you want to:
\begin{align}
&T(u+v)=Tu+Tv=0_W+0_W=0_W\\
&T(ku)=kTu=k0_W=0_W
\end{align} Note however that $k0_W=0_W$ doesn't follow immediately from the assumption that W is a vector space. It's a theorem that's pretty easy to prove: Let W be an arbitrary vector space. For all real numbers k, we have $k0_W=k(0_W+0_W)=k0_W+k0_W$. If we add $-k0_W$ to both sides and use that addition is associative, we get $0_W=k0_W$.

Also, a minor nitpick: I would use the term "scalar multiplication" rather than "multiplication". Vector spaces are often sets of functions with addition and scalar multiplication defined by
\begin{align}
&(f+g)(x)=f(x)+g(x)\\
&(kf)(x)=k(f(x)).
\end{align} These spaces can be equipped with another operation called "multiplication" defined by
$$(fg)(x)=f(x)g(x).$$ So scalar multiplication and multiplication are different operations.
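In Python terms, the distinction looks like this (a sketch; the particular functions f and g are just examples I picked):

```python
# Vector-space operations on real functions, defined pointwise.
f = lambda x: x + 1
g = lambda x: 2 * x

add  = lambda f, g: (lambda x: f(x) + g(x))   # (f + g)(x) = f(x) + g(x)
smul = lambda k, f: (lambda x: k * f(x))      # (k f)(x)  = k (f(x))
mul  = lambda f, g: (lambda x: f(x) * g(x))   # (f g)(x)  = f(x) g(x), a different operation

print(add(f, g)(3))    # (3+1) + 2*3 = 10
print(smul(5, f)(3))   # 5 * (3+1)   = 20
print(mul(f, g)(3))    # (3+1) * 2*3 = 24
```

Only the first two operations are part of the vector-space structure; the third is the extra "multiplication" that makes such spaces algebras.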

10. May 22, 2014

### BruceW

pretty nice. I guess the question is solved. Maybe there should also be a short sentence to say why ker(T) is nonempty (i.e. that it contains the zero vector).