Vector Spaces and Their Quotient Spaces - Simple Clarification Requested

Math Amateur
I am revising vector spaces and am looking at their quotient spaces in particular ...

I am looking at the theory and examples in Schaum's "Linear Algebra" (Fourth Edition) - pages 331-332.

Section 10.10 (pages 331-332) defines the cosets of a subspace as follows:
View attachment 2898
View attachment 2899
Following the above definition, we find Example 10.7, which reads as follows:
View attachment 2900
In the above example, the line $$v + W$$ should (according to the definition of a coset above) include all vectors $$v + w$$ with $$w \in W$$ ...

BUT ... consider a particular vector $$w$$ lying along $$W$$ (drawn as a directed segment in $$W$$), and then add $$v$$ ... we then have the situation depicted in Figure 1 below.
View attachment 2901

Clearly $$v + w$$ does not belong to $$v + W$$?

But according to the definition of $$v + W$$ given above, it should ... shouldn't it?

Can someone please clarify this issue for me ...?

Peter
 
Hi Peter,

I think you're misunderstanding what's going on in Example 10.7, although they could have done a better job with the wording. To obtain the coset $v + W$ in that example, they take each point $w\in W$ and draw the vector from $w$ to $v + w$. The endpoint of each vector lies in $v + W$. Moreover, the vectors are all parallel, since they are all in the direction of $\vec{v}$ (note: I'm using $\vec{v}$ to denote the graphical representation of $v$ as a vector from the origin).

In your picture, you took a vector $\vec{w}$ that lies in $W$ and added $\vec{v}$ to it to get the vector $\vec{v} + \vec{w}$. It is not the case that $\vec{v} + \vec{w}$ lies in $W$. That's how you were interpreting it, right?
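
If it helps to see this concretely, here is a quick numerical sketch (the numbers and the little script are my own, just for illustration): take $W$ to be the line $y = 2x$ and $v = (1,3)$; every translate $v + w$ then lands on the parallel line $y = 2x + 1$, which is exactly the coset $v + W$.

[CODE=Python]
# Illustrative sketch (my own numbers, not from the book): W is the
# line y = 2x and v = (1, 3). Every translate v + w should land on
# the parallel line y = 2x + 1, i.e. on the coset v + W.
v = (1.0, 3.0)

for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    w = (x, 2.0 * x)                  # a point of W
    p = (v[0] + w[0], v[1] + w[1])    # the translate v + w
    # check that p lies on y = 2x + 1, the line v + W:
    assert abs(p[1] - (2.0 * p[0] + 1.0)) < 1e-12
    print(f"w = {w} -> v + w = {p}")
[/CODE]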
 
You are confusing two similar definitions of vectors: the geometric definition, and the algebraic definition.

Let's say we have a point $P = (x,y)$ and a point $Q = (x',y')$.

The geometric vector $\vec{PQ}$ corresponds to the algebraic vector $(x'-x,y'-y)$. That is because we regard the geometric vector induced by an algebraic vector (a coordinate pair, or point) as having its tail at the origin $O$, and clearly $\vec{PQ}$ does not: its tail is at $P$. That is:

$(0,0) = O$
$(x,y) = \vec{OP}$
$(x',y') = \vec{OQ}$ and:

$\vec{OQ} = \vec{OP} + \vec{PQ}$ <---this is often called "the triangle rule for addition". This agrees with the algebraic rule:

$(x',y') = (x,y) + (x'-x,y'-y)$.
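
For a concrete instance (numbers of my own choosing, just to illustrate the rule): with $P = (1,2)$ and $Q = (4,6)$, the geometric vector $\vec{PQ}$ is the algebraic vector $(4-1,6-2) = (3,4)$, and indeed $\vec{OQ} = \vec{OP} + \vec{PQ}$ becomes $(4,6) = (1,2) + (3,4)$.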

So let's take another look at your $v,w$ and $W$.

Suppose $P,Q$ lie on the line $W = \{(x,y) \in \Bbb R^2: y = mx\}$. We will suppose that $w = \vec{PQ}$.

Thus $P = (x_1,mx_1)$ and $Q = (x_2,mx_2)$. So:

$w = (x_2-x_1,m(x_2-x_1))$ <---"this" $w$ has its tail at the origin.

Suppose $v = \vec{PR}$, where $R = (a,b)$. Geometrically we can't even add $\vec{PR}$ and $\vec{PQ}$ directly: the "heads and tails don't match". Note that the algebraic vector corresponding to $v$ is: $(a-x_1,b-mx_1)$.

Algebraically, we have:

$v + w = (a-x_1,b-mx_1) + (x_2-x_1,m(x_2-x_1)) = (a + x_2 - 2x_1, b + m(x_2 - 2x_1))$. Let's call this point $S$.

Consider the point $T = (x_1,mx_1) + (a+x_2-2x_1,b+m(x_2-2x_1)) = (a+x_2-x_1,b+m(x_2-x_1))$.

Then $\vec{PT} = \vec{OS}$.

What we are thinking of as the "translate" of $w$ is the (geometric) vector $\vec{RT}$. This corresponds to the algebraic vector:

$(a+x_2-x_1 - a,b+m(x_2-x_1) - b) = (x_2-x_1,m(x_2-x_1)) = \vec{PQ}$.

The reason for your confusion is that you are trying to take the "algebraic" sum of "geometric" vectors. In effect, you're trying to add a "point" to a "vector". If you're going to add "points", you need to work strictly algebraically; if you want to add:

$\vec{PQ} + \vec{PR}$, you need to do this:

$\vec{PQ} + \vec{PR} = \vec{PQ} + \vec{QT} = \vec{PT}$

$\vec{PT}$ does not lie on the coset $v + W$; it's the vector $\vec{RT}$ that does.
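
To sanity-check the algebra above with concrete numbers (chosen arbitrarily: $m = 1$, $x_1 = 1$, $x_2 = 3$, $R = (a,b) = (4,2)$), here is a short script of my own:

[CODE=Python]
# Sanity check of the P, Q, R, S, T computation above, with made-up
# numbers: W is the line y = m*x with m = 1, and R = (a, b) = (4, 2).
m, x1, x2, a, b = 1.0, 1.0, 3.0, 4.0, 2.0

P = (x1, m * x1)                  # (1, 1), on W
Q = (x2, m * x2)                  # (3, 3), on W
R = (a, b)                        # (4, 2)

w = (x2 - x1, m * (x2 - x1))      # algebraic vector of PQ
v = (a - x1, b - m * x1)          # algebraic vector of PR

S = (v[0] + w[0], v[1] + w[1])    # v + w as an algebraic vector
T = (P[0] + S[0], P[1] + S[1])    # the point T = P + S

PT = (T[0] - P[0], T[1] - P[1])
RT = (T[0] - R[0], T[1] - R[1])
assert PT == S                    # PT = OS
assert RT == w                    # the translate RT equals PQ
print("S =", S, " T =", T, " RT =", RT)
[/CODE]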

**********************

Let's work this out another way. Suppose we want the line parallel to $W$ passing through the point $(a,b)$. This means this line has the same SLOPE as $W$, so its equation is:

$y = mx + c$, for some real number $c$. Since $(a,b)$ lies on this line:

$b = ma + c$, that is:

$c = b - ma$. So our parallel line (let's call it $L$) has equation:

$y = mx + b - ma = m(x - a) + b$. A typical point on this line is:

$(x_0,m(x_0 - a) + b)$.

Now imagine any point on $W$, say, $(x_1,mx_1)$.

I claim that $(x_1,mx_1) + (a,b) = (x_1+a,mx_1+b)$ lies on $L$. To prove this, we need to show that:

$mx_1 + b = m((x_1+a) - a) + b$, which is clearly true.

Conversely, suppose a point $(x,y)$ lies on $L$. I claim there is some point $(r,s)$ on $W$ such that:

$(r,s) + (a,b) = (x,y)$. To see this, let:

$r = x-a, s = y-b$.

We need to show $s = mr$. Since $(x,y)$ lies on $L$, we know $y = m(x-a) + b$.

Therefore:

$s = y - b = [m(x-a) + b] - b = m(x - a) = mr$.

This means that $L = \{(x,y) \in \Bbb R^2: (x,y) = (a,b) + (x_0,y_0),\text{ for } (x_0,y_0) \in W\}$

$= (a,b) + W$.
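
Here is the same two-way argument as a quick numerical check (illustrative values of $m$, $a$, $b$ of my own choosing):

[CODE=Python]
# Quick check (illustrative numbers only) that the parallel line L
# through (a, b) is exactly the coset (a, b) + W, where W: y = m*x.
m, a, b = 2.0, 1.0, 5.0   # L has equation y = m*(x - a) + b

for x1 in [-3.0, 0.0, 2.0, 7.0]:
    # forward direction: a point of W, shifted by (a, b), lands on L
    p = (x1 + a, m * x1 + b)
    assert abs(p[1] - (m * (p[0] - a) + b)) < 1e-12

for x in [-1.0, 0.0, 4.0]:
    # converse: a point of L, minus (a, b), lands back on W
    y = m * (x - a) + b
    r, s = x - a, y - b
    assert abs(s - m * r) < 1e-12

print("L = (a, b) + W checks out for the sample points.")
[/CODE]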

I'm not a big fan of "drawing vectors". I find it easier to see the plane $\Bbb R^2$ as a direct sum of the two abelian groups:

$\Bbb R \times \{0\}$ and $\{0\} \times \Bbb R$ (often called the $x$-axis, and the $y$-axis).<--I write it this way solely to call attention to the fact that our two copies of $\Bbb R$ are not "the same one".

The addition is the usual direct-sum addition we get on the underlying set of the Cartesian product of $\Bbb R$ with itself.

Since $\Bbb R$ is a field, we easily get that it is an $\Bbb R$-module (over itself), and we can use this $\Bbb R$-action (which is ordinary real-number multiplication) to induce an $\Bbb R$-action on the direct sum (of abelian groups) in the usual way:

$r\cdot(x,y) = (r\cdot x, r\cdot y) = (rx,ry)$.

We could, if we really wanted to obscure what was happening here, write this as:

$r\cdot [(x,0)\oplus (0,y)] = (r\cdot(x,0))\oplus(r\cdot(0,y))$

But it is common to draw no distinction between vector addition and scalar addition (the + symbol is thus "overloaded"). Be careful of this--make sure you are always adding "two like things".
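
To make the overloading explicit, here is a toy sketch of my own (not from any text) in which the two additions are literally different operations, and the distributive law $(r+s)\cdot u = r\cdot u + s\cdot u$ uses both on a single line:

[CODE=Python]
# Toy sketch making the overloaded "+" explicit: scalar addition is
# real-number addition, vector addition is componentwise.
def vec_add(u, v):                 # "+" on R^2 (direct-sum addition)
    return (u[0] + v[0], u[1] + v[1])

def scalar_mul(r, u):              # the induced R-action, componentwise
    return (r * u[0], r * u[1])

u, r, s = (1.0, 2.0), 2.0, 5.0
# (r + s)u = ru + su: scalar "+" on the left, vector "+" on the right.
assert scalar_mul(r + s, u) == vec_add(scalar_mul(r, u), scalar_mul(s, u))
print(scalar_mul(r + s, u))
[/CODE]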

******************************

Quotient objects are difficult to wrap one's mind around in almost any setting. Here are some visual analogies that may, or may not, help.

Let's deal with 3-space, as higher dimensions are harder for most people to imagine. If we are going to form a quotient space, there are 4 types of subspaces we can quotient by:

1. The origin. The quotient space is thus the original space. Boring!

2. The entire space. We get just a single coset, the entire space, which may as well be $0 + V$, the origin of our quotient space. We, in effect, shrink everything to a point. This is also boring.

3. A line. This is interesting. Here, the cosets are a "bundle of parallel lines" (I like to think of a stack of spaghetti, but infinitely long and infinitely thin ... maybe angel hair pasta). To specify where we are in this "spaghetti space", we need to know "which strand we're on". Thus this space is inherently 2-dimensional: we can locate which spaghetti strand we're on by imagining a plane that slices through our spaghetti stack, and specifying a point on that plane. This is essentially the same as shrinking the strand that goes through the origin (and all the other strands "in parallel") to a point; we "collapse" one dimension, leaving 2 (see the sketch after this list).

4. A plane. Now our cosets are like "cards in a deck" (or different plies in some laminated material, like plywood): parallel planes, stacked like pancakes. We now only need to specify "which sheet we're on", which we can do via a line that passes through the "entire deck". So our coset space is inherently 1-dimensional (even though the "points" in our space are two-dimensional planes). This is essentially the same as shrinking the "home plane" (the sheet that goes through the origin) to a single point, collapsing two dimensions, leaving only one.
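
To make case 3 concrete, here is a small sketch of my own: quotient $\Bbb R^3$ by the line $W$ spanned by a direction $d$, and label each coset (each "spaghetti strand") by the orthogonal projection of any of its points onto the plane perpendicular to $d$. Two points differing by a multiple of $d$ then get the same label:

[CODE=Python]
# Illustrative sketch (not from the thread): R^3 modulo the line W
# spanned by d. Each coset p + W is labelled by projecting p onto the
# plane orthogonal to d; points on the same "strand" share a label.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def coset_label(p, d):
    t = dot(p, d) / dot(d, d)           # component of p along d
    return tuple(pi - t * di for pi, di in zip(p, d))

d = (1.0, 1.0, 0.0)                     # direction spanning W
p = (2.0, 0.0, 5.0)
q = (5.0, 3.0, 5.0)                     # q = p + 3d, so q + W = p + W

assert coset_label(p, d) == coset_label(q, d)
print(coset_label(p, d))                # the "strand" both points lie on
[/CODE]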

The above reveals a startling fact: we can have a vector space whose "points" are themselves $n$-dimensional "objects"! The usual example is the set of $m \times n$ matrices, which we can think of as an $n$-dimensional space of $m$-dimensional vectors, giving us an $mn$-dimensional vector space. This example is actually very important: it turns out that vector space homomorphisms (linear transformations) are actually these "vectors of vectors".

This is a very "happy event". With groups, for example, it is NOT the case that the set of all group homomorphisms $f:G \to G'$ itself forms a group (the trivial map has no inverse under composition, usually). This means that most of what we learn about "vectors" can also be transferred straight across to linear transformations. It's like a 2-for-1 special.
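
As a small illustration of that "2-for-1 special" (my own example, using matrices to represent linear maps): linear maps add and scale exactly like vectors, and the combined map agrees with combining outputs:

[CODE=Python]
import numpy as np

# Linear maps (here, 2x3 matrices) themselves add and scale like
# vectors, and (2A + 3B)x = 2(Ax) + 3(Bx).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
B = np.array([[0.0, 1.0, 1.0],
              [2.0, 0.0, 3.0]])
x = np.array([1.0, 2.0, 3.0])

lhs = (2 * A + 3 * B) @ x               # the combined map applied to x
rhs = 2 * (A @ x) + 3 * (B @ x)         # combining the outputs instead
assert np.allclose(lhs, rhs)
print(lhs)
[/CODE]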
 
Amazing explanation, Deveno!
 
Euge said:
I think you're misunderstanding what's going on in Example 10.7 ... That's how you were interpreting it, right?

Hi Euge ... Yes, that is how I was interpreting it ...

Thanks for the help ... appreciate it!

Peter
 
Deveno said:
You are confusing two similar definitions of vectors: the geometric definition, and the algebraic definition. ...
Exceptionally helpful post ... Thanks Deveno

I am just now working through it carefully ... but just a skim read of the contents shows me that you are really addressing the issues that were confusing me ... Really helpful!

Thanks again ...

Peter
 