MHB Two Versions of the Correspondence Theorem for Vector Spaces

Math Amateur
Cooperstein (in Advanced Linear Algebra) and Roman (also in a book called Advanced Linear Algebra) give versions of the Correspondence Theorem for Vector Spaces ... but these 'versions' do not look like the same theorem ... can someone please explain how/why these two versions are actually the same theorem ...

Cooperstein's version of the Correspondence Theorem for Vector Spaces reads as follows:

View attachment 5177

Roman's version of the Correspondence Theorem for Vector Spaces reads as follows:

https://www.physicsforums.com/attachments/5178

Can someone explain how/why these two articulations of the Correspondence Theorem for Vector Spaces are actually the same theorem ... indeed maybe they are not exactly the same theorem ... ?

Hope someone can clarify this issue ...

Peter
 
Well, Roman's version is a bit closer to the full depth of the correspondence theorem: it is a LATTICE isomorphism.

Before I go further, let's look at a particular instance of this.

Suppose $V = \Bbb R^3$, and we have the vector-space homomorphism (linear transformation):

$T: \Bbb R^3 \to \Bbb R^2$ given by $T((x,y,z)) = (x,y)$.

It is easy to see that $U = \text{ker }T = \{(0,0,z): z \in \Bbb R\} = \langle(0,0,1)\rangle$ (this is the $z$-axis).
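
For anyone who likes to double-check such things numerically, here is a minimal numpy sketch (my own, not from either textbook); the only assumption is the standard matrix of $T$:

```python
import numpy as np

# Matrix of T(x, y, z) = (x, y) with respect to the standard bases.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Null space via SVD: right-singular vectors belonging to (near-)zero singular values.
_, s, Vt = np.linalg.svd(A)
singulars = np.r_[s, np.zeros(A.shape[1] - len(s))]
kernel_basis = Vt[singulars < 1e-12]

print(kernel_basis)                   # one row, proportional to (0, 0, 1): the z-axis
print(A @ np.array([0.0, 0.0, 5.0]))  # [0. 0.]: vectors on the z-axis are killed by T
```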

So let's look at the lattice of subspaces of $V$ that contain $U$. Since there are infinitely many of these, we will resign ourselves to merely classifying these by type.

There is only one $3$-dimensional subspace of $V$, namely $\Bbb R^3$ itself. This gets mapped via $T$ to $\Bbb R^2$ ($T$ is clearly onto).

The $2$-dimensional subspaces are more interesting: since these are planes, and they include $U$, we can specify them by specifying a vector $(x,y,0) \neq (0,0,0)$ (note that $\{(x,y,0),(0,0,1)\}$ will be linearly independent, so the span of these two vectors will determine a plane that contains the $z$-axis).

$T$ maps this plane to the line generated by $(x,y) \neq (0,0)$. It's not hard to see that every line in $\Bbb R^2$ is the image under $T$ of a unique plane in $\Bbb R^3$ (extra credit: prove $\text{span}(\{(x,y,z),(0,0,1)\}) = \text{span}(\{(x,y,0),(0,0,1)\})$).
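
Here is a quick rank-based check of that span equality for one sample choice of $(x,y,z)$ (a numerical sketch of mine, not a proof): two spans coincide exactly when stacking either spanning set onto the other does not raise the rank.

```python
import numpy as np

x, y, z = 2.0, -3.0, 7.0                      # any sample values with (x, y) != (0, 0)
B1 = np.array([[x, y, z], [0.0, 0.0, 1.0]])   # spans the first plane
B2 = np.array([[x, y, 0.0], [0.0, 0.0, 1.0]]) # spans the second plane

r1 = np.linalg.matrix_rank(B1)
r2 = np.linalg.matrix_rank(B2)
r_both = np.linalg.matrix_rank(np.vstack([B1, B2]))

print(r1, r2, r_both)                         # 2 2 2  ->  the two spans are equal
```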

The one-dimensional subspaces of $V$ containing $U$ are, of course, limited to $U$ itself, which gets sent via $T$ to the $0$-subspace of $\Bbb R^2$.

So we see the 1-1 correspondence here, explicitly:

$\Bbb R^3 \leftrightarrow \Bbb R^2$
$\langle (x,y,z),(0,0,1)\rangle \leftrightarrow \langle (x,y) \rangle$
$U \leftrightarrow \{(0,0)\}$.
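
To make the middle row of this correspondence concrete, here is a small numerical sketch (sample values of my own choosing) showing that $T$ carries the plane $\langle (x,y,0),(0,0,1)\rangle$ onto the line $\langle (x,y)\rangle$:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                 # matrix of T
x, y = 2.0, 5.0                                 # sample plane parameter, (x, y) != (0, 0)
W_basis = np.array([[x, y, 0.0],
                    [0.0, 0.0, 1.0]])           # basis of a plane W containing U

image_vectors = (A @ W_basis.T).T               # images under T of the basis of W
print(image_vectors)                            # rows (x, y) and (0, 0)
print(np.linalg.matrix_rank(image_vectors))     # 1  ->  T(W) is the line <(x, y)>
```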

The only thing missing is to identify each subspace of $\text{im }T$ with a (unique) subspace of $V/U$.

Note that since $(x,y,z) - (x,y,0) = (0,0,z) \in U$, we have:

$(x,y,z) + U = (x,y,0) + U$.

So the (linear) mapping $(x,y,z) + U \mapsto (x,y)$ (which maps from $V/U$ to $\Bbb R^2$) is onto: one pre-image of any $(x,y) \in \Bbb R^2$ is the coset $(x,y,0) + U$. Furthermore, the only pre-image of $(0,0)$ is the coset $U$ (the identity of $V/U$), so the kernel is trivial and our mapping is bijective (and thus a vector space isomorphism). This same correspondence works for any subspace of $\Bbb R^3$ containing $U$. So our "final" correspondence is:

$V \leftrightarrow V/U$
$W \leftrightarrow W/U$ (where $W$ is a plane containing $U$; we can think of $W/U$ as the family of vertical lines going "straight up" from some line $L$ in the $xy$-plane).
$U \leftrightarrow U/U = \{0_{V/U}\}$.

Essentially, what we achieve by "modding out $U$" is to project any vector in $3$-space to its $2$-dimensional "shadow" on the $xy$-plane.
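
Here is a tiny Python sketch of that "shadow" picture (the helper names `shadow` and `same_coset` are mine, purely for illustration): two representatives of the same coset differ by an element of $U$, and the induced map sends them to the same point of $\Bbb R^2$.

```python
import numpy as np

def shadow(v):
    """Send a representative v of the coset v + U to its xy-shadow, i.e. T(v)."""
    return np.array([v[0], v[1]])

def same_coset(v, w):
    """v + U equals w + U exactly when v - w lies in U = span{(0,0,1)}."""
    d = np.asarray(v, dtype=float) - np.asarray(w, dtype=float)
    return np.allclose(d[:2], 0.0)

v = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0, -8.0])   # differs from v by (0, 0, -11), an element of U

print(same_coset(v, w))          # True: same coset in V/U
print(shadow(v), shadow(w))      # both [1. 2.]: the induced map is well defined
```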

I'll write more later pointing out the "bridge" between the two versions.
 

Thanks so much for the help, Deveno ... particularly appreciate the example ... the examples are extremely helpful to one's understanding ...

I am working through the detail of your post now ... but just a quick question/clarification ...

You write:

" ... ... There is only one $3$-dimensional subspace of $V$, namely $\Bbb R^3$ itself. This gets mapped via $T$ to $\Bbb R^2$ ($T$ is clearly onto).

The $2$-dimensional subspaces are more interesting: since these are planes, and they include $U$, we can specify them by specifying a vector $(x,y,0) \neq (0,0,0)$ (note that $\{(x,y,0),(0,0,1)\}$ will be linearly independent, so the span of these two vectors will determine a plane that contains the $z$-axis). ... ... "


Question 1

You mention in the above text that the span of two linearly independent vectors determines a plane ... BUT ... ...

I am struggling to link the span of two linearly independent vectors with the usual equation of a plane ... ... that is ... $$\mathbf{n} \cdot \mathbf{p} = 0$$ ... where $\mathbf{n}$ is a vector normal to the plane and $\mathbf{p}$ is a vector in the plane joining a particular point in the plane to a general point in the plane ...

... specifically then how does the span of two linearly independent vectors give us a plane ... that is, how does the span of two linearly independent vectors lead to or result in the equation of a plane ... ? ... ...

Basically I am asking how we demonstrate formally and rigorously that the span of two linearly independent vectors gives us a plane ... ...

... ... ...
Question 2

Along the same lines ... you write:

" ... ... $T$ maps this plane to the line generated by $(x,y) \neq (0,0)$. It's not hard to see that that every line in $\Bbb R^2$ is mapped by $T$ from a unique plane in $\Bbb R^3$ ... ..."

My second question then is as follows:

If a line is generated by $$(x, y)$$ then we ought to be able to show that $$c(x, y)$$, where $$c$$ ranges over $$\mathbb{R}$$, gives us the equation of a line ... (this is right, isn't it?) ... how do we do this? ... or if this is not the right approach, then how do we demonstrate formally and rigorously that the subspace generated by $$(x, y)$$ is a line ... ...

... ... ...

Question 3

In your text that I have quoted in Question 2 you write:

" ... ... every line in $\Bbb R^2$ is mapped by $T$ from a unique plane in $\Bbb R^3$ ... "

How do we formally and rigorously demonstrate this ...?

Can you help ...?
The basic reference I am using for the vector equation of a plane is Susan Colley's "Vector Calculus" (Second Edition) ... Colley introduces the vector equation of a plane in $$\mathbb{R}^3$$ as follows:

View attachment 5182

Hope you can help ...

Peter
 
Well, two (linearly independent) vectors determine a plane ... but the kind of plane they determine is special because, being a vector space, it has to pass through the $0$-vector.

So, for example, we can take $P_0 = (0,0,0)$ for any planar subspace, which means we can write the plane as:

$Ax + By + Cz = 0$ (since $P_0 = (x_0,y_0,z_0) = (0,0,0)$).

Since we have $U$ contained in our plane, we have that any vector of the form $(0,0,z)$ is in our plane, so this gives us (using $z = 1$):

$A0 + B0 + C = 0$ which forces $C = 0$.

So our plane has the equation:

$Ax + By = 0$, which is the equation of a line $L$ in the $xy$-plane, and our plane is of the form:

$\{(x,y,z) \in \Bbb R^3: (x,y) \in L, z \in \Bbb R\}$.

If our plane is non-degenerate (that is, $A$ and $B$ are not BOTH zero), then we can write either:

$y = -(A/B)x$ (if $B \neq 0$), so the points of our plane are of the form:

$(x,-(A/B)x,z)$, which is spanned by the vectors $\{(1,-A/B,0),(0,0,1)\}$, or:

$x = -(B/A)y$ (if $A \neq 0$), so the points of our plane have the form $(-(B/A)y,y,z)$, which is spanned by $\{(-B/A,1,0),(0,0,1)\}$.
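
As a numerical sanity check (sample coefficients of my own choosing, assuming $B \neq 0$), the spanning set $\{(1,-A/B,0),(0,0,1)\}$ really does lie in the plane $Ax + By = 0$:

```python
import numpy as np

A_, B_ = 3.0, 4.0                        # sample coefficients with B_ != 0
v1 = np.array([1.0, -A_ / B_, 0.0])
v2 = np.array([0.0, 0.0, 1.0])

def in_plane(p):
    """Does p = (x, y, z) satisfy A_*x + B_*y = 0?"""
    return np.isclose(A_ * p[0] + B_ * p[1], 0.0)

# every linear combination of v1 and v2 stays in the plane
for s, t in [(1.0, 0.0), (0.0, 1.0), (2.5, -7.0)]:
    print(in_plane(s * v1 + t * v2))     # True, True, True
```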

Now let's go the other way and find two LI vectors $\vec{P}$ satisfying the normal-vector equation $\mathbf{n}\cdot \vec{P} = 0$.

Since $(0,0,1)$ is in our plane, if $\mathbf{n} = (n_1,n_2,n_3)$, the condition:

$\mathbf{n}\cdot \vec{P} = 0$ gives $n_3 = 0$.

I claim we can pick $(x,y,z)$ such that $(x,y,z)\cdot (n_1,n_2,0) = 0$ and $\{(x,y,z),(0,0,1)\}$ are LI. Furthermore, you can see by inspection that it makes no difference what $z$ is (since $n_3 = 0$) so we'll make it easy on ourselves by setting $z = 0$.

We will assume $\mathbf{n} \neq (0,0,0)$ (usually $\mathbf{n}$ is chosen to be a unit vector, which rules out this troublesome possibility; if $\mathbf{n}$ were the zero vector, every vector would satisfy $\mathbf{n}\cdot\vec{P} = 0$).

Let's suppose $n_2 \neq 0$. (I leave the case $n_1 \neq 0$ to you.) Choose $y = -\dfrac{n_1}{n_2}x$ (with, say, $x = 1$). This satisfies our requirements. It is not hard to see that:

$\left\{\left(1,-\dfrac{n_1}{n_2},0\right),(0,0,1)\right\}$ are LI, and we are done.
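
A quick numerical check of this construction, for a sample normal vector of my own choosing with $n_2 \neq 0$:

```python
import numpy as np

n = np.array([2.0, 5.0, 0.0])                        # sample n with n_2 != 0 and n_3 = 0
p1 = np.array([1.0, -n[0] / n[1], 0.0])              # the vector (1, -n_1/n_2, 0)
p2 = np.array([0.0, 0.0, 1.0])

print(np.isclose(n @ p1, 0.0), np.isclose(n @ p2, 0.0))  # True True: both lie in the plane
print(np.linalg.matrix_rank(np.vstack([p1, p2])))        # 2: the pair is linearly independent
```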

Conversely, it is clear that $\{(x,y,0),(0,0,1)\}$ is LI, provided not both $x$ and $y$ are $0$. Set:

$W = \text{span}(\{(x,y,0),(0,0,1)\})$. Let's suppose $x \neq 0$ (you can do the $y \neq 0$ case).

Set $\mathbf{u} = \left(\dfrac{y}{x},-1,0\right)$ and take $\mathbf{n} = \dfrac{\mathbf{u}}{\|\mathbf{u}\|}$.

If $\mathbf{w} \in W$, say $\mathbf{w} = a(x,y,0) + b(0,0,1) = (ax,ay,b)$, we have:

$\mathbf{w}\cdot\mathbf{n} = \dfrac{1}{\sqrt{\dfrac{y^2}{x^2}+1}}(ax,ay,b)\cdot\left(\dfrac{y}{x},-1,0\right)$

$= \dfrac{1}{\sqrt{\dfrac{y^2}{x^2}+1}}(ay - ay + 0) = 0$.

On the other hand if $\mathbf{v}\cdot\mathbf{n} = 0$, then writing $\mathbf{v} = (a,b,c) = (a,b,0) + (0,0,c)$ leads us to conclude:

$a\dfrac{y}{x} - b = 0$ (since $\mathbf{v}\cdot\mathbf{n} = 0 \implies \mathbf{v}\cdot\mathbf{u} = 0$), that is:

$b = a\dfrac{y}{x}$, so that $(a,b,0) = \left(a,a\dfrac{y}{x},0\right) = \dfrac{a}{x}(x,y,0)$, thus:

$\mathbf{v} = (a,b,c) = \dfrac{a}{x}(x,y,0) + c(0,0,1) \in W$.
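
Here is a rank-based sketch (my own, for sample $x, y$ with $x \neq 0$) confirming the two containments numerically: $W$ sits inside the plane $\{\mathbf{v} : \mathbf{v}\cdot\mathbf{n} = 0\}$, and since $W$ is $2$-dimensional and that plane is also $2$-dimensional, the two must coincide.

```python
import numpy as np

x, y = 2.0, 3.0                                 # sample values with x != 0
W = np.array([[x, y, 0.0],
              [0.0, 0.0, 1.0]])                 # basis of W (as rows)
u = np.array([y / x, -1.0, 0.0])
n = u / np.linalg.norm(u)                       # unit normal built as in the post

print(np.allclose(W @ n, 0.0))                  # True: every basis vector of W satisfies v.n = 0
print(np.linalg.matrix_rank(W))                 # 2: W is 2-dimensional, so W = {v : v.n = 0}
```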

I'll address your question about uniqueness in another post.
 
Thanks Deveno ... Most helpful ...

Just working through your post in detail ...

Thanks again ...

Peter
 