# Dimension statement about (finite-dimensional) subspaces

#### JD_PM

Homework Statement
True or false? (Prove or give a counterexample.)

Let ##V## be a finite-dimensional real vector space and ##U_1, U_2## and ##U_3## be subspaces of ##V## with ##U_1 \cap U_2 = \{0\}##. Then the following statement holds:

$$\dim (U_1 \cap U_3) + \dim (U_2 \cap U_3) \leq \dim (U_3)$$
Relevant Equations
N/A
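
As a quick numerical sanity check (not part of any proof), the statement can be tested on concrete subspaces. The sketch below, in Python with NumPy, represents each subspace by a matrix of spanning columns and uses the identity ##\dim(U\cap W)=\dim U+\dim W-\dim(U+W)##; the specific subspaces are hypothetical examples chosen for illustration.

```python
import numpy as np

def dim_span(*mats):
    # dimension of the span of all columns supplied
    return np.linalg.matrix_rank(np.hstack(mats))

def dim_cap(A, B):
    # dim(U ∩ W) = dim U + dim W - dim(U + W)
    return dim_span(A) + dim_span(B) - dim_span(A, B)

# Hypothetical subspaces of R^3, given by spanning columns:
U1 = np.array([[1.0, 0.0, 0.0]]).T        # the x-axis
U2 = np.array([[0.0, 1.0, 0.0]]).T        # the y-axis, so U1 ∩ U2 = {0}
U3 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]]).T        # the xy-plane

lhs = dim_cap(U1, U3) + dim_cap(U2, U3)
print(lhs, "<=", dim_span(U3))            # prints: 2 <= 2
```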
My intuition tells me this is a true statement, so let's try to prove it.

The dimension is defined as the number of elements of a basis, so we can work in terms of bases to prove the statement.

Given that ##U_3## appears on both sides of the inequality, let's get a basis for it. How? Suppose that ##\{u_1, \dots, u_m \}## is a basis for ##U_1 \cap U_3##. By definition of intersection, ##u_1, \dots, u_m \in U_3##. Two well-known theorems in linear algebra: any subspace of a finite-dimensional vector space is itself finite dimensional, and any linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of that space. Hence, a basis for ##U_3## is

$$\beta_{U_3} = \{u_1, \dots, u_m, w_1, \dots, w_j \}$$

But I do not really see how we can conclude the proof. I guess we still need to argue why ##\{w_1, \dots, w_j \}## is a basis for ##U_2 \cap U_3##.

Your guidance is appreciated, thanks!

I would choose a basis for ##U_1\cap U_3## and a basis for ##U_2\cap U_3## and show that they are linearly independent.

This way you can see where ##U_1\cap U_2 =\{0\}## is needed, and the conclusion then is trivial.

If you start with a basis of ##U_3## you have no control over the subspaces.

That's true.

I am a bit stuck, let me share.

Let ##\{u_1, \dots, u_m \}## be a basis for ##U_1 \cap U_3## and ##\{w_1, \dots, w_j \}## a basis for ##U_2 \cap U_3##. We want to prove that ##\{u_1, \dots, u_m, w_1, \dots, w_j \}## is linearly independent.

Suppose that

$$a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_j w_j = 0$$

Rearranging, we see that ##b_1 w_1 + \dots + b_j w_j = -a_1 u_1 - \dots -a_m u_m##, so ##b_1 w_1 + \dots + b_j w_j \in U_1\cap U_3##. Besides, ##\{w_1, \dots, w_j \}## is a basis for ##U_2 \cap U_3## and ##U_1\cap U_2 =\{0\}##, so all we conclude from this reasoning is that ##w_1, \dots, w_j \in U_3##.

Analogously, we can rearrange as ##a_1 u_1 + \dots +a_m u_m = -b_1 w_1 - \dots - b_j w_j## to see that ##u_1, \dots, u_m \in U_3##.

But I do not see how this will lead us to conclude that ##a_i = 0## and ##b_j = 0##.

Forget about ##U_3##. You have shown that there is a vector ##v##
$$U_1 \supseteq U_1\cap U_3 \ni v:=a_1 u_1 + \dots +a_m u_m = -b_1 w_1 - \dots - b_j w_j \in U_2\cap U_3\subseteq U_2$$
So ##v\in U_1\cap U_2=\{0\}.## What does this mean for the ##a_i## and ##b_j##? Here we need that the ##u_i## and the ##w_j## are linearly independent.
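
The step above can be checked in a concrete case: with ##U_1\cap U_2 = \{0\}##, the only solution of ##a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_j w_j = 0## is the trivial one, which is the same as saying the matrix whose columns are the combined basis vectors has full column rank. A minimal Python/NumPy sketch with hypothetical subspaces of ##\mathbb{R}^4##:

```python
import numpy as np

# Hypothetical concrete case in R^4:
# U1 ∩ U3 = span{u1, u2}, U2 ∩ U3 = span{w1}, and U1 ∩ U2 = {0}.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])              # columns u1, u2
W = np.array([[0.0],
              [0.0],
              [1.0],
              [0.0]])                   # column w1

M = np.hstack([U, W])
# a1*u1 + a2*u2 + b1*w1 = 0 has only the trivial solution
# exactly when M has full column rank (trivial null space).
print(np.linalg.matrix_rank(M) == M.shape[1])   # prints: True
```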

Oh, so we have shown that ##v \in U_1\cap U_2##, and we are given that ##U_1\cap U_2 = \{0\}##, so ##v = 0##. Since the ##u_i## and the ##w_j## are linearly independent, ##v = 0## forces ##a_i = 0## and ##b_j = 0##, and we are done (?).

Almost. Now you know that there are ##m+j## linearly independent vectors, the ##u_i## and the ##w_j##. But everything takes place in ##U_3##, and there can be at most ##\dim U_3## of them.

Alternatively, a proof by contradiction should be fairly simple. If the sum of the dimensions of two subspaces is greater than the dimension of the space, then the subspaces must have a non-zero vector in common.

So from this argument we see that ##\dim (U_1 \cap U_3) + \dim (U_2 \cap U_3) = m+j \leq \dim U_3##.
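
For what it's worth, the bound can be tight. Here is a hypothetical Python/NumPy example in ##\mathbb{R}^4## (my own, not from the thread) where ##m+j## equals ##\dim U_3##:

```python
import numpy as np

def dim_span(*mats):
    # dimension of the span of all columns supplied
    return np.linalg.matrix_rank(np.hstack(mats))

e = np.eye(4)
U1 = e[:, [0]]         # span{e1}
U2 = e[:, [1, 2]]      # span{e2, e3}; note U1 ∩ U2 = {0}
U3 = e[:, [0, 1, 2]]   # span{e1, e2, e3}

m = dim_span(U1)       # U1 ⊆ U3, so m = dim(U1 ∩ U3) = 1
j = dim_span(U2)       # U2 ⊆ U3, so j = dim(U2 ∩ U3) = 2
print(m + j, "<=", dim_span(U3))   # prints: 3 <= 3
```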

Could you please elaborate further, so that I can see whether it is simpler (to me at least)?

The outline of the argument is that if you combine the subspace bases you have too many vectors for linear independence. And linear dependence leads to a common non-zero vector.

More generally, you need to become familiar with the relationship between a direct proof and a proof of the same thing by contradiction. This is back to the issue of basic mathematical techniques.

This would be a good example to practice. The arguments are all similar. Start with the contrary statement:
$$\dim (U_1\cap U_3) + \dim (U_2\cap U_3) > \dim U_3$$

Both intersections on the left-hand side are subspaces of ##U_3##. Now if the sum of the dimensions is greater than the dimension of the surrounding space, what does that mean? Don't jump to conclusions; take one step after the other.

By "everything takes place in ##U_3##" did you mean that the linearly independent vectors ##u_i## and ##w_j## all lie in ##U_3##?

Alright, let's go step by step.

To me, this means that the sum is no longer ##\subseteq U_3##.

No. It means that something had to be counted at least twice.

We have ##(U_1\cap U_3) + (U_2\cap U_3) \subseteq U_3##, which is why I said "everything takes place in ##U_3##".

If the sum of dimensions on the left is bigger than the maximal number of linearly independent vectors available in ##U_3##, then ##\{u_i\}\cup \{w_k\}## cannot be linearly independent. Hence we have a linear combination such that ...

(I'm not sure whether this is what @PeroK had in mind, but I try to run through it step by step.)

Hence, by the linear dependence lemma, some ##x \in \{u_i\}\cup \{w_k\}## can be written as a linear combination of the preceding elements of ##\{u_i\}\cup \{w_k\}##.

However, I do not see where you are going with this...

I don't know either. I simply took the contrary condition, and now I'm looking at where we end up.

We have a non-trivial linear combination of zero, i.e. ##0=\sum_{i=1}^m a_iu_i +\sum_{k=1}^j b_kw_k##. Now at least one of the coefficients ##a_i## or ##b_k## is nonzero, say ##a_1\neq 0##, which we may assume since otherwise we only relabel (or swap the roles of the ##u_i## and ##w_k##). Thus
$$0\neq u_1+\sum_{i=2}^m a_1^{-1}a_iu_i = - \sum_{k=1}^j b_ka_1^{-1}w_k \in U_1\cap U_2 =\{0\}$$

As I said, these are the same arguments written differently.
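
As a closing numerical aside (my own hypothetical example, not from the discussion above): if the hypothesis ##U_1\cap U_2=\{0\}## is dropped, the inequality can fail, and the failure produces exactly the kind of common non-zero vector used in the contradiction argument. A Python/NumPy sketch:

```python
import numpy as np

def dim_span(*mats):
    # dimension of the span of all columns supplied
    return np.linalg.matrix_rank(np.hstack(mats))

def dim_cap(A, B):
    # dim(U ∩ W) = dim U + dim W - dim(U + W)
    return dim_span(A) + dim_span(B) - dim_span(A, B)

# Hypothetical example in R^3 where U1 ∩ U2 ≠ {0}:
P = np.eye(3)[:, :2]          # columns e1, e2 span the xy-plane
U1 = U2 = U3 = P              # all three subspaces equal the xy-plane

lhs = dim_cap(U1, U3) + dim_cap(U2, U3)   # 2 + 2 = 4
print(lhs, ">", dim_span(U3))             # prints: 4 > 2

# A common non-zero vector of U1 and U2 comes from the null space of [U1 | -U2]:
M = np.hstack([U1, -U2])
_, _, vt = np.linalg.svd(M)
c = vt[-1]                    # a unit null vector of M (M has rank 2 < 4 columns)
v = U1 @ c[:2]                # equals U2 @ c[2:], so v lies in U1 ∩ U2
print(np.allclose(M @ c, 0), not np.allclose(v, 0))   # prints: True True
```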