Linear Algebra: Subspace proof

lockedup
1. Homework Statement:

Prove: A set ##U \subset V = (V, \oplus, \odot)## is a vector subspace of ##V## if and only if ##(\forall u_1, u_2 \in U)\left(\tfrac{1}{2} \odot (u_1 \oplus u_2) \in U\right)## and ##(\forall u \in U)(\forall t \in \mathbb{R})(t \odot u \in U)##.

3. The Attempt at a Solution:

I don't have the first clue. It seems to me that information is missing: I know that for a subspace, it is sufficient to prove closure under addition and closure under scalar multiplication. Maybe he's defining a different sort of addition? The whole proof thing is still fairly new to me; I only started writing simple proofs last semester, in Discrete Mathematics...
 
Maybe he's defining a different sort of addition?

He's using ##\oplus## and ##\odot## simply as alternative symbols for addition and scalar multiplication, respectively. In this proof you're working with an arbitrary vector space, so these operations may not be defined in the sense you're used to. In fact, they are defined arbitrarily: we can define addition and scalar multiplication however we want, as long as they satisfy the axioms of a vector space. However, even though the vector space operations are unknown, we do know how multiplication works in ##\mathbb{R}##.
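To illustrate the point about arbitrary operations, here is a classic example (my own addition, not from the thread): the positive reals with ##x \oplus y := xy## and ##t \odot x := x^t## form a vector space over ##\mathbb{R}##, even though ##\oplus## is not ordinary addition. A quick numerical sanity check of two of the axioms:

```python
# Sketch: nonstandard vector-space operations on the positive reals.
# "Addition" is ordinary multiplication; "scalar multiplication" is exponentiation.
def oplus(x, y):
    return x * y       # x (+) y := x * y

def odot(t, x):
    return x ** t      # t (.) x := x^t

x, y, t = 2.0, 5.0, 3.0

# Scalar multiplication distributes over (+): t (.) (x (+) y) == (t (.) x) (+) (t (.) y)
assert abs(odot(t, oplus(x, y)) - oplus(odot(t, x), odot(t, y))) < 1e-9

# The "zero vector" of this space is 1, since x (+) 1 == x for every x > 0.
assert oplus(x, 1.0) == x
```

The point is that the symbols ##\oplus## and ##\odot## commit you only to the axioms, not to any particular formula.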

In any case, here is what you must prove: for each ##t \in \mathbb{R}## and each ##u \in U##, ##t \odot u \in U##; and for all ##u_1, u_2 \in U##, ##u_1 \oplus u_2 \in U##. The first statement has already been given to you, so all you need to prove is closure under addition. I'll give you a hint, and then you're going to have to think a little bit about what to do. The hint is: ##2 \in \mathbb{R}##. Now use the vector space axioms together with the first given condition to figure out why closure under addition holds.
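For readers following the thread later, the hint unpacks as a short chain of equalities. Take arbitrary ##u_1, u_2 \in U##; then, using the unit axiom and the compatibility of scalar multiplication:

```latex
\begin{aligned}
u_1 \oplus u_2
  &= 1 \odot (u_1 \oplus u_2)
     && \text{(unit axiom)} \\
  &= \left(2 \cdot \tfrac{1}{2}\right) \odot (u_1 \oplus u_2) \\
  &= 2 \odot \left(\tfrac{1}{2} \odot (u_1 \oplus u_2)\right)
     && \text{(compatibility: } (st) \odot v = s \odot (t \odot v)\text{)}
\end{aligned}
```

By the first hypothesis, ##\tfrac{1}{2} \odot (u_1 \oplus u_2) \in U##, and by the second, ##U## is closed under ##\odot##, so ##u_1 \oplus u_2 = 2 \odot \left(\tfrac{1}{2} \odot (u_1 \oplus u_2)\right) \in U##.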
 
Thank you. Yeah, the next problem defines ##u \oplus v## (for ##u, v \in \mathbb{R}^3##) as ##v \times u##. That one is simple: ##v \times u## is not commutative, so ##\mathbb{R}^3## with this definition of addition is not a vector space.
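That non-commutativity is easy to witness numerically on the standard basis vectors (a quick sketch; the helper functions here are mine, not from the thread):

```python
# Sketch: the problem's "addition" on R^3 is u (+) v := v x u (cross product).
# Cross product of 3-vectors, written out componentwise.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def oplus(u, v):
    return cross(v, u)  # the problem's definition: u (+) v = v x u

e1, e2 = (1, 0, 0), (0, 1, 0)
print(oplus(e1, e2))  # (0, 0, -1)
print(oplus(e2, e1))  # (0, 0, 1)
```

Since ##e_1 \oplus e_2 \ne e_2 \oplus e_1##, the commutativity axiom fails, which is enough to rule out a vector space.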
 