Understanding proof for theorem about dimension of kernel

In summary, the theorem states that if ##U##, ##V##, and ##W## are finite dimensional vector spaces and ##T:U\to V## and ##S: V \to W## are linear maps, then the dimension of the kernel of the composition ##ST## is at most the sum of the dimensions of the kernels of ##S## and ##T##.
  • #1
Incand
So the theorem says:
Suppose that ##U## and ##V## are finite dimensional vector spaces, and that ##T:U\to V##, ##S: V \to W##. Then
##\text{dim Ker }ST \le \text{dim Ker }S + \text{dim Ker }T##.

Proof:
Set ##U_0 = \text{Ker }ST## and ##V_0 = \text{Ker }S##. ##U_0## and ##V_0## are subspaces of ##U## and ##V##. Since ##ST=0## on ##U_0## we have ##T(U_0) \subset V_0##. We can then consider ##T## and ##S## as mappings defined on ##U_0## respectively ##V_0##. Since ##\text{Ran }T## (Range, column space, image etc.) is a subspace of ##V_0## we have ##\text{dim Ran }T \le \text{dim }V_0 = \text{dim Ker }S##. But according to the Rank-nullity theorem, ##\text{dim Ker }T + \text{dim Ran }T = \text{dim }U_0##.
Hence ##\text{dim Ker }ST = \text{dim }U_0 = \text{dim Ker }T + \text{dim Ran }T \le \text{dim Ker }T + \text{dim Ker }S##.

(I translated this, so it's possibly worded worse than the original.)
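The inequality itself is easy to sanity-check numerically. Here is a small sketch in Python with NumPy (the random matrices and dimensions are my own assumed examples, not part of the proof), computing nullities via the rank-nullity theorem:

```python
# Numerical sanity check of dim Ker(ST) <= dim Ker(S) + dim Ker(T).
# A sketch with randomly chosen matrices, not a substitute for the proof.
import numpy as np

def nullity(A):
    """dim Ker A = (number of columns) - rank(A), by rank-nullity."""
    return A.shape[1] - np.linalg.matrix_rank(A)

rng = np.random.default_rng(0)
for _ in range(100):
    # T: U -> V and S: V -> W with dim U = 5, dim V = 4, dim W = 3.
    T = rng.integers(-2, 3, size=(4, 5)).astype(float)
    S = rng.integers(-2, 3, size=(3, 4)).astype(float)
    assert nullity(S @ T) <= nullity(S) + nullity(T)
```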

I can't really follow the steps in this proof; no matter how many times I look through it, I don't follow all of them. Starting with
##U_0## and ##V_0## are subspaces of ##U## and ##V##

The word "and" confuses me here. It's supposed to mean that ##U_0## is a subspace of ##U## and ##V_0## of ##V##, and nothing else, right?

We can then consider ##T## and ##S## as mappings defined on ##U_0## respectively ##V_0##.

What does this mean? That for our purposes we can consider ##T:U_0 \to V## and ##S:V_0 \to W##? Why?

Since ##\text{Ran }T## is a subspace of ##V_0##

I understand this to be true if we indeed consider ##T:U_0 \to V##, since by definition ##T(U_0) \subset V_0##, but not for ##U## in general.
And then ##\text{dim Ker }T + \text{dim Ran }T = \text{dim }U_0##. I don't get how we're allowed to only consider ##U_0## instead of ##U##.
 
  • #2
Incand said:
I can't really follow the steps in this proof; no matter how many times I look through it, I don't follow all of them. Starting with
##U_0## and ##V_0## are subspaces of ##U## and ##V##
The word "and" confuses me here. It's supposed to mean that ##U_0## is a subspace of ##U## and ##V_0## of ##V##, and nothing else, right?
Right. But the wording in the passage is unclear: the word "respectively" should have been added.
We can then consider ##T## and ##S## as mappings defined on ##U_0## respectively ##V_0##.
What does this mean? That for our purposes we can consider ##T:U_0 \to V## and ##S:V_0 \to W##? Why?
The author actually takes the restriction of ##S## to ##V_0## and the restriction of ##T## to ##U_0##.

Since ##\text{Ran }T## is a subspace of ##V_0##
I understand this to be true if we indeed consider ##T:U_0 \to V##, since by definition ##T(U_0) \subset V_0##, but not for ##U## in general.
And then ##\text{dim Ker }T + \text{dim Ran }T = \text{dim }U_0##. I don't get how we're allowed to only consider ##U_0## instead of ##U##.
The author is working with the restrictions, which have the domains ##U_0## and ##V_0##, respectively, and then there is no problem, right?
 
  • #3
Erland said:
The author actually takes the restriction of ##S## to ##V_0## and the restriction of ##T## to ##U_0##.
I don't understand this; perhaps you could write out the transformation if what I wrote was wrong. ##S## is a mapping from ##V## to ##W## and ##T## a mapping from ##U## to ##V##, so ##ST: U_0 \to V_0 \to 0## in my mind. ##S## can't map into ##V_0##, since ##V_0## may not be a subspace of ##W##.
 
  • #4
You got this essentially correct, just that ##V_0## is a subspace of ##V##, not of ##W##. And kernels are of course subspaces of the spaces they lie in.

Edit: (The restriction of) ##S## maps ##V_0## into ##0##. ##ST## means that ##T## is applied first, then ##S##.
 
  • #5
Erland said:
You got this essentially correct, just that ##V_0## is a subspace of ##V##, not of ##W##. And kernels are of course subspaces of the spaces they lie in.

Edit: (The restriction of) ##S## maps ##V_0## into ##0##. ##ST## means that ##T## is applied first, then ##S##.
Alright, I understand that part. I think I finally also understand why it's equivalent to prove the theorem for the "new" ##T##. Since the theorem involves only the kernels of the operators (and not every other vector), it's equivalent to prove the theorem for operators defined only on the kernels. That the steps in the proof aren't actually true for general operators ##S## and ##T## doesn't matter, since we already know it's equivalent to show the theorem for the new ##T## and ##S##.

It seems straightforward now, thanks! The problem I had was pretty much that the most crucial step, that it's equivalent to prove it for the "new" operators, wasn't written out. If I had written the proof myself, I would've emphasized that, since for me it was really the key step in the proof.
 
  • #6
Yes, I think the proof would be more pedagogical if one introduced and used the restriction ##T_0## of ##T## to ##U_0## with codomain ##V_0##, and the restriction ##S_0## of ##S## to ##V_0## with codomain ##\{0\}##.
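The restriction idea can also be checked numerically. The sketch below (Python/NumPy; the matrices are my own assumed examples) computes a basis of ##U_0 = \text{Ker }ST##, verifies that ##T## maps it into ##V_0 = \text{Ker }S##, and confirms rank-nullity for the restriction ##T_0##, using that ##\text{Ker }T_0 = \text{Ker }T## because ##\text{Ker }T \subset \text{Ker }ST##:

```python
import numpy as np

def null_basis(A, tol=1e-10):
    """Columns form an orthonormal basis of Ker A, via the SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

# Assumed example maps: T: U -> V (dim U = 4, dim V = 3), S: V -> W (dim W = 1).
T = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 1., 1.]])
S = np.array([[1., -1., 0.]])

U0 = null_basis(S @ T)   # basis of U_0 = Ker(ST)
TU0 = T @ U0             # images under the restriction T_0

# T maps U_0 into V_0 = Ker S: S kills every image vector.
assert np.allclose(S @ TU0, 0)

# Rank-nullity for T_0: dim U_0 = dim Ker T_0 + dim Ran T_0,
# where Ker T_0 = Ker T since Ker T is contained in Ker(ST).
dim_U0 = U0.shape[1]
dim_ker_T = null_basis(T).shape[1]
dim_ran_T0 = np.linalg.matrix_rank(TU0)
assert dim_U0 == dim_ker_T + dim_ran_T0  # here: 3 == 2 + 1
```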
 
  • #7
If a vector ##x## goes to zero under a composition ##ST##, then ##T(x)## must go to zero under ##S##, so ##\text{Ker}(ST)## is contained in the pullback of ##\text{Ker }S## under ##T##. But pulling back a subspace increases the dimension by at most ##\text{dim Ker }T##, so the pullback of ##\text{Ker }S## has dimension ##\le \text{dim Ker }S + \text{dim Ker }T##. Since this pullback contains ##\text{Ker}(ST)##, we have the inequality. I did this in my head, but I think it is probably right. I was trying to think of an argument with some geometric motivation.
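This pullback argument can be illustrated numerically too. The sketch below (Python/NumPy, with my own assumed random matrices) uses the fact that the pullback ##T^{-1}(\text{Ker }S)## is exactly ##\{x : STx = 0\} = \text{Ker }ST##, and checks the pullback dimension formula ##\dim T^{-1}(W_0) = \dim(W_0 \cap \text{Ran }T) + \dim \text{Ker }T## with ##W_0 = \text{Ker }S##:

```python
import numpy as np

def nullity(A):
    """dim Ker A, by rank-nullity."""
    return A.shape[1] - np.linalg.matrix_rank(A)

def null_basis(A, tol=1e-10):
    """Columns form an orthonormal basis of Ker A, via the SVD."""
    _, s, vh = np.linalg.svd(A)
    return vh[int((s > tol).sum()):].T

rng = np.random.default_rng(1)
T = rng.integers(-2, 3, size=(4, 5)).astype(float)  # T: U -> V
S = rng.integers(-2, 3, size=(2, 4)).astype(float)  # S: V -> W

ker_S = null_basis(S)                 # basis of Ker S inside V
dim_ker_S = ker_S.shape[1]
dim_ran_T = np.linalg.matrix_rank(T)

# dim(Ker S ∩ Ran T) = dim Ker S + dim Ran T - dim(Ker S + Ran T)
dim_span_sum = np.linalg.matrix_rank(np.hstack([ker_S, T]))
dim_intersection = dim_ker_S + dim_ran_T - dim_span_sum

# The pullback T^{-1}(Ker S) equals Ker(ST); check the dimension formula...
assert nullity(S @ T) == dim_intersection + nullity(T)
# ...and the resulting bound: dim of the pullback <= dim Ker S + dim Ker T.
assert nullity(S @ T) <= dim_ker_S + nullity(T)
```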
 
  • #8
That's some nice reasoning, mathwonk. I'll be sure to take a closer look at it in the future when I have a bit more math knowledge.
 

1. What is the theorem about the dimension of kernel?

The theorem discussed in this thread states that for linear maps ##T:U\to V## and ##S:V\to W## between finite-dimensional vector spaces, ##\text{dim Ker }ST \le \text{dim Ker }S + \text{dim Ker }T##. Its proof relies on the rank-nullity theorem, which states that for a linear transformation between finite-dimensional vector spaces, the dimension of the kernel (nullity) plus the dimension of the range (rank) equals the dimension of the domain.
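As a concrete illustration of rank-nullity (a Python/NumPy sketch with a made-up matrix):

```python
import numpy as np

# A 2x3 matrix whose rows are proportional, so its rank is 1.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

rank = np.linalg.matrix_rank(A)   # dim of the range: 1
nullity = A.shape[1] - rank       # dim of the kernel: 2

# Rank-nullity: rank + nullity equals the dimension of the domain (3).
assert rank + nullity == A.shape[1]
print(rank, nullity)  # -> 1 2
```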

2. Why is understanding proof for this theorem important?

Understanding the proof for this theorem is important because it provides a deeper understanding of linear transformations and their properties. It also helps in solving problems related to vector spaces and linear algebra.

3. What are the key concepts involved in the proof for this theorem?

The key concepts involved in the proof for this theorem are: linear transformations, vector spaces, dimension, basis, and subspace. The proof uses these concepts to show how the dimensions of the kernel and range are related.

4. How does the proof for this theorem relate to real-world applications?

The proof for this theorem has many real-world applications, especially in fields such as engineering, physics, and computer science. It can be used to solve problems involving systems of linear equations, data compression, and signal processing.

5. What are some common mistakes made when trying to understand the proof for this theorem?

Some common mistakes when trying to understand the proof for this theorem include: confusing the dimensions of the kernel and range, not understanding the concept of linear transformations, and not being familiar with the properties of vector spaces. It is important to have a solid understanding of these concepts before attempting to understand the proof.
