MHB Universal Mapping Property of a Direct Sum - Knapp Pages 60-61

I am reading Chapter 2: Vector Spaces over $$\mathbb{Q}, \mathbb{R} \text{ and } \mathbb{C}$$ of Anthony W. Knapp's book, Basic Algebra.

I need some help with some issues regarding the Universal Mapping Property of direct sums of vector spaces, as dealt with by Knapp on pages 60-61. I am not quite sure what Knapp is "getting at", or what he means, in introducing the idea of the Universal Mapping Property (UMP) ... ...

Writing on the UMP for direct sums of vector spaces (pages 60-61), Knapp writes:

https://www.physicsforums.com/attachments/2926
View attachment 2927

In the above text on the UMP, Knapp defines $$U, V_1 \text{ and } V_2$$ as vector spaces over $$\mathbb{F}$$, with $$V = V_1 \oplus V_2$$, and lets $$L_1$$ and $$L_2$$ be linear maps as follows:

$$L_1 : \ U \to V_1$$ and $$L_2 : \ U \to V_2$$

He then says that we can define a map $$L : \ U \to V$$ as follows:

$$L(u) = (i_1L_1 + i_2L_2)(u) = (L_1(u), L_2(u))$$

He then says that "we can recover $$L_1$$ and $$L_2$$ from $$L_1 = p_1L$$ and $$L_2 = p_2L$$"My question is how exactly does this "recovery" work and then (more importantly) what has this got to do with any universal mapping property of direct sums?I suspect that maybe (?) the "recovery" of $$L_1$$ works something like this ...

$$L(u) = v = v_1 + v_2$$ where $$v_1 \in V_1$$ and $$v_2 \in V_2$$

Then it would follow that ...

$$L_1(u) = p_1L(u) = p_1(v) = p_1(v_1 + v_2) = p_1(v_1, v_2) = v_1$$

But ... firstly ... is this what is meant by "recovering" $$L_1$$ from $$L$$? It doesn't seem so ... so what is meant by it?

Secondly, in the above, how would one justify writing $$p_1(v_1 + v_2) = p_1(v_1, v_2)$$?

Mind you, I am somewhat confused and would appreciate help generally on the topic of the UMP for direct sums of vector spaces ...

Peter
 
The formal statement of the Universal Mapping Property is this:

If $U$ is ANY other vector space (over the same field, of course) with ANY OTHER pair of linear maps:

$L_1:U \to V_1$
$L_2:U \to V_2$

then there is a UNIQUE linear map $L: U \to V_1 \oplus V_2$ such that:

$p_1\circ L = L_1$
$p_2\circ L = L_2$.

This map is often written as $L_1 + L_2$, or $L_1 \oplus L_2$ or even as $L_1 \times L_2$.

********

To see that $V_1 \oplus V_2$ actually possesses this property, suppose that $U,L_1,L_2$ are given.

Define $L(u) = (L_1(u),L_2(u))$. It is straightforward to verify that this $L$ is linear, and:

$(p_1 \circ L)(u) = p_1(L_1(u),L_2(u)) = L_1(u)$, for all $u \in U$. A similar statement holds for $p_2$.

Moreover, if $L'$ is any other linear map $U \to V_1 \oplus V_2$ satisfying the UMP, we have:

$p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$.

Hence $L(u) - L'(u) \in \text{ker }p_1$, so $L(u) - L'(u) \in \{0\}\oplus V_2$.

Similarly, $L(u) - L'(u) \in V_1 \oplus \{0\}$.

Thus $L(u) - L'(u) \in (\{0\} \oplus V_2) \cap (V_1 \oplus \{0\}) = \{0\} \oplus \{0\} = \{0_{V_1 \oplus V_2}\}$, for ALL $u \in U$, that is to say:

$L - L'$ is the 0-map. Hence $L = L'$ (we are leveraging the fact that the linear transformations between two fixed spaces themselves form a vector space).

So we found such a map exists, and is unique.
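
As a concrete sanity check on this existence-and-uniqueness argument, here is a minimal Python sketch. The sample maps and the names `L1`, `L2`, `p1`, `p2` are my own illustrative choices (with $U = \Bbb R^2$, $V_1 = \Bbb R$, $V_2 = \Bbb R^2$), not notation from Knapp:

```python
# Numeric check that L(u) = (L1(u), L2(u)) satisfies p1∘L = L1 and p2∘L = L2.

def L1(u):            # a sample linear map L1 : R^2 -> R
    x, y = u
    return 2*x - y

def L2(u):            # a sample linear map L2 : R^2 -> R^2
    x, y = u
    return (x + y, 3*y)

def L(u):             # the induced map L : U -> V1 (+) V2
    return (L1(u), L2(u))

def p1(v):            # projection onto the first summand
    return v[0]

def p2(v):            # projection onto the second summand
    return v[1]

for u in [(1.0, 0.0), (0.0, 1.0), (2.5, -4.0)]:
    assert p1(L(u)) == L1(u)    # recovery: L1 = p1 ∘ L
    assert p2(L(u)) == L2(u)    # recovery: L2 = p2 ∘ L
print("p1∘L = L1 and p2∘L = L2 on all sample vectors")
```

Any other map $L'$ passing these same checks must agree with $L$ in both coordinates at every $u$, which is exactly the uniqueness argument above.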

********

There is a "dual" to the UMP, which basically "reverses the directions of the mapping arrows":

For any other vector space $U$, and any pair of maps:

$L_1:V_1 \to U$
$L_2:V_2 \to U$

there is a unique map $L:V_1\oplus V_2 \to U$ with:

$L \circ i_1 = L_1$
$L \circ i_2 = L_2$

This map is given explicitly by:

$L(v_1,v_2) = L_1(v_1) + L_2(v_2)$.
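
Under the same illustrative conventions (the particular maps are invented for the example, with $V_1 = V_2 = \Bbb R$ and $U = \Bbb R^2$), here is a matching sketch of the dual property, checking $L \circ i_1 = L_1$ and $L \circ i_2 = L_2$:

```python
# Numeric check of the coproduct map L(v1, v2) = L1(v1) + L2(v2).

def i1(v1):            # injection V1 -> V1 (+) V2
    return (v1, 0.0)

def i2(v2):            # injection V2 -> V1 (+) V2
    return (0.0, v2)

def L1(v1):            # a sample linear map V1 -> U
    return (v1, 2*v1)

def L2(v2):            # a sample linear map V2 -> U
    return (-v2, v2)

def add(u, w):         # vector addition in U = R^2
    return (u[0] + w[0], u[1] + w[1])

def L(v):              # the induced map V1 (+) V2 -> U
    v1, v2 = v
    return add(L1(v1), L2(v2))

for t in [1.0, -3.0, 0.5]:
    assert L(i1(t)) == L1(t)    # L ∘ i1 = L1
    assert L(i2(t)) == L2(t)    # L ∘ i2 = L2
```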

********

Now, to be fair, one doesn't "need" this UMP to discuss the direct sum of two vector spaces; an "element-wise" definition is sufficient for many "practical" applications. But there is something subtle going on here: we've shifted our focus away from "vectors", and are focusing instead on "linear maps". In other words, we're not concerned with "calculation", but with "behavior". This is a more abstract point of view, and it generalizes better to other structures with "different axioms". In particular, this characterization in terms of a UMP is CATEGORICAL, and in fact holds in any category in which the (binary) product and co-product coincide.

********

It often helps to see how this works "in practice". This should be a familiar example:

The real numbers $\Bbb R$ form a field, which is to say, a vector space of dimension 1 over themselves. We can visualize this field as a "number line". Suppose we have two "separate" number lines, and we wish to create a vector space from the pair. We want the individual lines to keep their "vector-space-ness" as subspaces of this new space unimpaired, so we want an injective linear map from each one into our new space.

We also want them to be "independent", so that changes in one line do not affect values in the other. We accomplish this by "pairing", and stipulate that when we add pairs, it's only "each to each":

$(x,y) + (x',y') = (x+x',y+y')$.

The injections we have in mind are THESE ones:

$x \mapsto (x,0)$
$y \mapsto (0,y)$.

It should be clear, then, that what we now call $\Bbb R \oplus \Bbb R$ is simply $\Bbb R^2$, the real plane.

Our two "number lines" have been embedded in the plane as the $x$ and $y$ axis.

Note that merely taking the UNION of said lines doesn't work, because if both "coordinates" are non-zero, then $(x,y)$ isn't on either axis.

Now we can "reverse" this point of view, and START with the plane, in which case we recover our original lines by PROJECTING on to one axis, or the other. This, in effect, "zeros out" one of the coordinates. At this point, the 0-coordinate is just excess baggage, and we think of the relevant (non-zero coordinate) axis as "a line unto itself".

The key linkage between the "pair" view, and the "sum" view is THIS identity:

$(x,y) = (x,0) + (0,y)$ <---think about this, for a bit.

This only defines the "abelian group structure"; can you see a natural way to define "scalar multiplication"?
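
For what it's worth, here is the same picture as a small Python sketch; the componentwise action in `scale` is one natural answer to the question just posed (my suggestion, not a quote from the text):

```python
# R (+) R as pairs: addition "each to each", injections onto the axes,
# and the key identity (x, y) = (x, 0) + (0, y).

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def scale(c, p):               # componentwise scalar multiplication
    return (c*p[0], c*p[1])

def i1(x): return (x, 0.0)     # first number line -> x-axis
def i2(y): return (0.0, y)     # second number line -> y-axis

x, y = 3.0, -2.0
assert add(i1(x), i2(y)) == (x, y)          # (x,y) = (x,0) + (0,y)
assert scale(5.0, (x, y)) == (5.0*x, 5.0*y)
```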

********

For an idea of how this concept plays out in groups (where it is similar), take a look at:

http://mathhelpboards.com/math-notes-49/universal-property-direct-product-groups-11546.html

You may also want to look at this:

http://mathhelpboards.com/potw-graduate-students-45/problem-week-113-july-28th-2014-a-11541.html

The latter uses this property of the direct sum of vector spaces in a fairly sophisticated way.
 
Deveno said:
... ...

Thank you for this post, Deveno ... it is a very important post for me, since I have struggled to get a real sense of, and indeed a full understanding of, the UMP for direct sums ...

So ... I am now working through your post very carefully ... thanks again,

Peter
 
Deveno said:
... ...

Hi Deveno,

I just had a quick look at the Problem of the Week you mentioned. In Opalg's answer he writes:

" ... ... The key fact here is that if [FONT=MathJax_Math-italic-Web]Y is a [FONT=MathJax_Math-italic-Web]K -vector subspace of [FONT=MathJax_Math-italic-Web]X then there exists a [FONT=MathJax_Math-italic-Web]K -vector subspace [FONT=MathJax_Math-italic-Web]Z of [FONT=MathJax_Math-italic-Web]X such that [FONT=MathJax_Math-italic-Web]X is canonically isomorphic to the direct sum[FONT=MathJax_Math-italic-Web] Y[FONT=MathJax_Main-Web]⊕[FONT=MathJax_Math-italic-Web]Z ... ... etc "

Can you please explain what is meant by the term "canonically isomorphic"?

Peter
 
Deveno said:
... ...
Hi Deveno,

Thanks for the extensive help ... but ... just a clarification:

You write:

" ... ... Moreover, if $L'$ is any other linear map $U \to V_1 \oplus V_2$ satisfying the UMP, we have:

$p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$. ... ... ... "

I cannot follow exactly why $$p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u)$$ ...

Could you please explain why this follows?

Peter

***EDIT***

I have done some more reflecting on $p_1(L(u) - L'(u)) = p_1(L(u)) - p_1(L'(u)) = L_1(u) - L_1(u) = 0$ ... ... and think I see why this is the case ...

Here is my thinking ... ...

We have ...

$$L(u) = ( L_1(u), L_2(u) ) = (v_1, v_2) $$

where $$u \in U, v_1 \in V_1$$ and $$v_2 \in V_2$$

and, also, we have ...

$$L'(u) = ( L_1(u), L_2(u) ) = (v_1, v_2) $$ and that is, to be clear, the same point $$(v_1, v_2)$$ as for $$L(u)$$ ...

so that $$L(u) = L'(u) = (v_1, v_2) $$ since the functional values of $$L, L'$$ depend on the values of the same co-ordinate functions $$L_1, L_2$$

Therefore, $$p_1(L(u)) = p_1(L'(u)) = v_1$$

and so $$p_1(L(u)) - p_1(L'(u)) = 0$$ as you say.

Can you confirm that my thinking is correct?

Peter
 
Yes, if $p_1(L(u)) = L_1(u)$ then the "first coordinate" of $L(u)$ is $L_1(u)$.

Similarly the "second coordinate" of $L(u)$ is $L_2(u)$.

But that completely specifies $L$!

This is one of those things that "seems hard" until you realize what is being said:

If we know the first coordinate function and the second coordinate function of a map whose values are pairs, we know the map, because all we are doing with the direct sum is "pairing coordinates".

So basically, we're making a bigger (algebraic thing) by putting two (algebraic things) "side by side" (in parallel, so to speak).

The two "$p$" functions just peel off one side or the other. We do this "externally" by "chopping off", and "internally" by "zeroing out". The two ways of doing this have a one-to-one correspondence (we have some isomorphism, which isn't very hard to find).

The condition $V_1 \cap V_2 = \{0\}$ ensures we can "cut cleanly".

Let's look at a space where $V = V_1 + V_2$ but not $V_1 \oplus V_2$.

Consider $V = \Bbb R^3$, where $V_1 = \{(x,y,0): x,y \in \Bbb R\}$ and $V_2 = \{(x,0,z): x,z \in \Bbb R\}$, that is, the $xy$-plane, and the $xz$-plane.

We see that $V = V_1 + V_2$, since for any $(a,b,c) \in V$, we have:

$(a,b,c) = (2a,b,0) + (-a,0,c)$ and $(2a,b,0) \in V_1$, and $(-a,0,c) \in V_2$.

Now $V_1 \cap V_2 = \{(x,0,0): x \in \Bbb R\}$, the $x$-axis. We CAN'T "chop cleanly" here, because if we try to separate $V_1$ from $V_1 + V_2$, we "cripple" $V_2$, since we remove part of it when we remove $V_1$.

We could still form the quotient, $V/V_1$, but the resulting space is a line (whose "points" are parallel planes), and we can't form a (linear) bijection from a line to a plane, so it's definitely NOT isomorphic to $V_2$.

This is sort of like the difference between dividing 4*3 by 4 and dividing 2*6 by 4. In the first case, dividing by 4 leaves 3 untouched. In the second case, neither factor escapes unscathed.
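
To see the failure concretely, here is a small sketch (the sample values are my own) exhibiting two DIFFERENT decompositions of the same vector of $\Bbb R^3$ into an $xy$-plane part plus an $xz$-plane part; the non-uniqueness is precisely what the nontrivial intersection causes:

```python
# V1 = xy-plane, V2 = xz-plane in R^3: V1 + V2 = R^3, but the
# decomposition of a vector is not unique, because V1 ∩ V2 = x-axis.

def add3(u, v):
    return tuple(a + b for a, b in zip(u, v))

a, b, c = 1.0, 2.0, 3.0
target = (a, b, c)

decompositions = [
    ((2*a, b, 0.0), (-a, 0.0, c)),   # the decomposition from the text
    ((0.0, b, 0.0), ( a, 0.0, c)),   # a second, different one
]
for v1, v2 in decompositions:
    assert v1[2] == 0.0              # v1 lies in the xy-plane
    assert v2[1] == 0.0              # v2 lies in the xz-plane
    assert add3(v1, v2) == target
# The two v1's differ by (2a, 0, 0), which lies in V1 ∩ V2 (the x-axis);
# a direct sum would force the decomposition to be unique.
```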

So the utility of the direct sum in vector spaces is this: If we do something to "the parent space" (which may be large, and complicated), we can do the same thing to the "baby spaces" (the factors in the direct sum) and then just "add the results".

In fact, this is just what we do with a BASIS. If we have a basis $B = \{b_1,\dots,b_n\}$ for a vector space $V$, we have:

$V = \langle b_1\rangle \oplus \cdots \oplus \langle b_n\rangle$ which allows us to represent an element of $V$ as:

$v = (\alpha_1,\dots,\alpha_n)$ IN THAT BASIS.

Each of the subspaces $\langle b_j\rangle$ is very simple: it's just a field! (one has to be a "tiny" bit careful, here, the field multiplication in $\langle b_j\rangle$ may differ from another multiplication we have defined in $V$).

This means we can discover everything about $V$ (as a vector space; it may have additional structure) just by looking at a set of independent one-dimensional subspaces that sum to $V$. This makes everything "easy".

To make things "even easier", we often choose the element of $\langle b_j \rangle$ that corresponds to the IDENTITY of the field. Such animals are called UNIT vectors. This often has the effect of "making the basis invisible", for example, in the basis:

$\{(1,0,0),(0,1,0),(0,0,1)\}$

the vector $(x,y,z) \in \Bbb R^3$ has representation: $(x,y,z)$.

Or, in the basis $\{1,x,\dots,x^n\}$ the vector $c_0 + c_1x + \cdots + c_nx^n \in P_n$ has representation:

$(c_0,c_1,\dots,c_n)$.

The fact that vector spaces admit a direct sum decomposition like this makes them very nice to work with. We can, if we choose, forget about "typical vectors" and focus just on "basis vectors". One sees this a LOT in multivariate calculus, where one reduces a problem involving several variables to several problems each involving ONE variable, where we understand the situation much more clearly.
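
As a small illustration of coordinates "making the basis invisible", here is a sketch for $P_2$ in the basis $\{1, x, x^2\}$ (the helper names are my own); addition happens in each one-dimensional summand $\langle b_j \rangle$ separately:

```python
# A polynomial c0 + c1*x + c2*x^2 is stored AS its coordinate tuple.

def poly_eval(coeffs, t):
    """Evaluate c0 + c1*t + ... + cn*t^n from its coordinate tuple."""
    return sum(c * t**k for k, c in enumerate(coeffs))

p = (1.0, -2.0, 3.0)    # the vector 1 - 2x + 3x^2, in coordinates
q = (0.0, 5.0, 0.0)     # the vector 5x, in coordinates

# Vector addition is coordinate-by-coordinate, i.e. it takes place
# inside each one-dimensional summand separately:
s = tuple(pc + qc for pc, qc in zip(p, q))
assert poly_eval(s, 2.0) == poly_eval(p, 2.0) + poly_eval(q, 2.0)
```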
 