Linear Algebra with Proof by Contradiction

Devil Moo
This is a linear algebra question about which I am confused.

1. Homework Statement


Prove that "if the union of two subspaces of ##V## is a subspace of ##V##, then one of the subspaces is contained in the other".

The Attempt at a Solution



Suppose ##U##, ##W## are subspaces of ##V##, and that ##U \cup W## is a subspace of ##V##. (statement A)

Suppose ##U## is not contained in ##W## and ##W## is not contained in ##U##. (statement B)

Let ##u \in U## with ##u \notin W##, and let ##w \in W## with ##w \notin U##.

##u \in U \cup W## and ##w \in U \cup W##.
Since ##U \cup W## is a subspace, it is closed under addition, so ##u + w \in U \cup W##.
By the definition of union, ##u + w \in U## or ##u + w \in W##.

Suppose ##u + w \in U## (statement C)

Since ##U## is a subspace, ##u + w + (-u) \in U##,
so ##w \in U##.
This contradicts ##w \notin U##.

Which of these does the argument prove: (if A is true, then B is false) or (if A and B are true, then C is false)?
 
It's the second one: (if A and B are true, then C is false)

Since you have proven C false, you can conclude that ##u + w \in W##. By similar steps, you can then show that this is also false.

You can then, with a few more steps, conclude that at least one of A or B must be false.

Arguments of this kind are much easier to understand and control if you grasp the notion of Conditional Proof or sub-proof. Every time you make a new assumption, you are opening a new sub-proof. When you reach a contradiction, you close that sub-proof, concluding the opposite of the assumption that you used to open it. It is not unusual to have several nested levels of proof. In your case there are three levels.
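For this particular problem, a minimal sketch of that three-level structure might look like this (A, B, C are the statements labelled in your attempt):

Level 1. Assume A: ##U \cup W## is a subspace of ##V##.
Level 2. Assume B: ##U \not\subseteq W## and ##W \not\subseteq U##. Pick ##u \in U \setminus W## and ##w \in W \setminus U##; closure of ##U \cup W## gives ##u + w \in U \cup W##, so ##u + w \in U## or ##u + w \in W##.
Level 3. Assume C: ##u + w \in U##. Then ##w = (u + w) - u \in U##, contradicting ##w \notin U##. Close level 3: ##\neg##C, so ##u + w \in W##.
Back at level 2: ##u = (u + w) - w \in W##, contradicting ##u \notin W##. Close level 2: ##\neg##B, i.e. ##U \subseteq W## or ##W \subseteq U##.
Back at level 1: if A holds, then one of the subspaces is contained in the other, which is the statement to be proved.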
 
In the proof by contradiction that ##\sqrt 2## is an irrational number, we conclude that the statement is true once we find the contradiction.
In this case, why can't I conclude that ##u + w \in W## is true?

Is ##\neg (u + w \in U) \Rightarrow u + w \in W## wrong?
 
Devil Moo said:
In this case, why can't I conclude that ##u + w \in W## is true?
You can. Read the second line of my post again.

Then you go on to prove that it is also false. That is a second contradiction, and it tells you that at least one of A or B must be false.
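Written out, that second contradiction might run as follows: since C is false, ##u + w \in W##. Because ##W## is a subspace, ##-w \in W##, so ##u = (u + w) + (-w) \in W##, contradicting the choice ##u \notin W##. With both contradictions in hand, at least one of A and B must be false.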
 