MHB Tensor Product - Knapp, Chapter VI, Section 6

Math Amateur
I am reading Anthony W. Knapp's book: Basic Algebra in order to understand tensor products ... ...

I need some help with an aspect of Theorem 6.10 in Section 6 of Chapter VI: Multilinear Algebra ...

The text of Theorem 6.10 reads as follows:

https://www.physicsforums.com/attachments/5391
https://www.physicsforums.com/attachments/5392

About midway in the above text, just at the start of "PROOF OF EXISTENCE", Knapp writes the following:

" ... ... Let $$V_1 = \bigoplus_{ (e,f) } \mathbb{K} (e, f)$$, the direct sum being taken over all ordered pairs $$(e,f)$$ with $$e \in E$$ and $$f \in F$$. ... ... "


I do not understand Knapp's notation for the direct sum ... what exactly does he mean by $$\bigoplus_{ (e,f) } \mathbb{K} (e, f)$$ ... ... ? What does he mean by the $$\mathbb{K} (e, f)$$ after the $$\bigoplus_{ (e,f) }$$ sign ... ? If others also find his notation perplexing, then maybe those readers who have a good understanding of tensor products can interpret what he means from the flow of the proof ...

Note that in his section on direct products Knapp uses standard notation, and there is nothing in his earlier sections that I know of that gives a clue to the notation I am querying here ... if any readers request me to provide some of Knapp's text on the definition of direct products, I will provide it ...

Hope someone can help ...

Peter

*** NOTE ***

To give readers an idea of Knapp's approach and notation regarding tensor products, I am providing Knapp's introduction to Chapter VI, Section 6: Tensor Product of Two Vector Spaces ... ... ... as follows ... ... ... :

https://www.physicsforums.com/attachments/5393
https://www.physicsforums.com/attachments/5394
 
He means the same thing as Cooperstein and Winitzki, the free vector space over the field $\Bbb K$ generated by the SET $E \times F$.

As you may well appreciate, this vector space is HUGE: each pair $(e,f)$ is a basis element. As such, to get it to a "manageable size", we're going to take a QUOTIENT SPACE.

Specifically, we're going to quotient out (identify with 0) exactly the elements needed to FORCE the map:

$(e,f) \mapsto e\otimes f$

to be bilinear.
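Concretely (this is the standard list of relations; Knapp's version is equivalent), $V_0$ is the subspace of $V_1$ spanned by all elements of the following four types:

$(e_1 + e_2, f) - (e_1, f) - (e_2, f)$

$(e, f_1 + f_2) - (e, f_1) - (e, f_2)$

$(ke, f) - k(e, f)$

$(e, kf) - k(e, f)$

for $e, e_1, e_2 \in E$, $f, f_1, f_2 \in F$, and $k \in \Bbb K$; these are precisely the relations that bilinearity demands. The tensor product is then $E \otimes F = V_1/V_0$.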

In other words, $e\otimes f$ is a COSET, the coset $(e,f) + V_0$.

Since, by definition, we have (for example):

$(e_1 + e_2,f) - (e_1,f) - (e_2,f) \in V_0$,

we have $(e_1 + e_2,f) + V_0 = (e_1,f) + (e_2,f) + V_0$, that is:

$(e_1+e_2)\otimes f = e_1\otimes f + e_2\otimes f$.

In all fairness, Knapp is the "most correct": if you have infinitely many vector spaces, and you want to make a large vector space out of all of them, the correct thing to do is to use the DIRECT SUM, not the DIRECT PRODUCT. Cooperstein and Winitzki side-step this issue by only considering a finite number of spaces, and finite linear combinations. In that case, the direct sum and the direct product are isomorphic.

Knapp writes $\Bbb K(e,f)$ because he is considering each "factor" as the "line" consisting of:

$\{k(e,f) : k \in \Bbb K\}$, the formal scalar multiples of the single basis element $(e,f)$, where $e$ and $f$ are FIXED elements of $E$ and $F$, respectively. (Careful: in $V_1$, $k(e,f)$ is NOT the basis element $(ke,kf)$; indeed, in the quotient, $k(e\otimes f) = (ke)\otimes f = e\otimes (kf)$, whereas $(ke)\otimes(kf) = k^2(e\otimes f)$.) We are thus taking the direct sum of a "whole lotta lines". If $\Bbb K = E = F = \Bbb R$, the free vector space over $\Bbb R$ generated by $\Bbb R \times \Bbb R$ has an UNCOUNTABLE basis (every point of the plane generates a line that lies in its "own dimension"). As I indicated before, this space is "so big" it's hard to actually imagine, whereas the tensor product $\Bbb R \otimes \Bbb R$ only has dimension 1 (and the tensor product $a\otimes b$ for real numbers $a,b$ is just ordinary multiplication).
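That dimension count is a quick exercise in the bilinearity we just forced: for any real numbers $a$ and $b$,

$a \otimes b = (a \cdot 1) \otimes b = a(1 \otimes b) = a(1 \otimes (b \cdot 1)) = ab(1 \otimes 1)$

so every elementary tensor (and hence every tensor) in $\Bbb R \otimes \Bbb R$ is a scalar multiple of the single vector $1 \otimes 1$.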
 
Deveno said:
He means the same thing as Cooperstein and Winitzki, the free vector space over the field $\Bbb K$ generated by the SET $E \times F$. ...
Thank you Deveno ... That post was very clear and instructive ... EXTREMELY helpful ...

Your support has been critical to my achieving some basic understanding of the notion of tensor products ...

Still reflecting on all your posts on the topic ...

Thanks again,

Peter
 
Deveno said:
He means the same thing as Cooperstein and Winitzki, the free vector space over the field $\Bbb K$ generated by the SET $E \times F$. ...

Hi Deveno,

Thanks again for the help ... just reflecting on your post ... and have a basic question ... ...

You write:


" ... ... Specifically, we're going to quotient out (identify with 0) all the elements we need to to FORCE the map:

$(e,f) \mapsto e\otimes f$

to be bilinear. ... ... ... "


My question is as follows:

Why, exactly, do we wish the map $(e,f) \mapsto e\otimes f$ to be bilinear ... ... ?


(My apologies in advance if you have answered this question somewhere before ... ... )

Peter
 
"Universal objects" (that is, objects which are defined by a universal mapping property), represent, in some sense, the "most general way to do something".

For example, a quotient object in a category that has homomorphisms is "the most general way to annihilate something".

Similarly, a free object is "the most general way to create more structure from less".

The guiding THEME in these kinds of constructions is that any SPECIFIC quality we wish to test for is related in some canonical way to the "universal object".

With tensor products, the quality we are seeking to "capture" is *multilinearity*. The tensor product converts a multilinear map into a linear map (in the most general setting, an $R$-module homomorphism).

So, in a sense, the map $\otimes: V \times W \to V\otimes W$ is "the grandfather of all bilinear maps", just as the quotient group is the grandfather of all "subgroup contracting maps", and the free group is the "grandfather of all group presentations" (generators and relations).

For more than two factors, replace "bilinear" by "multilinear".
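Spelled out for two factors, the universal mapping property says: for every vector space $U$ and every bilinear map $b: V \times W \to U$, there is a UNIQUE linear map $\tilde{b}: V \otimes W \to U$ with

$b(v, w) = \tilde{b}(v \otimes w)$ for all $v \in V$, $w \in W$.

So the one bilinear map $\otimes$ "knows about" every bilinear map out of $V \times W$; that is the precise sense in which it is the most general one.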

One of the defining features of linear algebra is that, given a vector space $V$, we can create, for any other vector space $W$, a third vector space:

$\mathcal{L}(V,W)$, linear maps from $V$ to $W$.

This defines a "super-mapping":

$W \mapsto \mathcal{L}(V,W)$

(note our variables here are ENTIRE VECTOR SPACES).

Let's call this "super-mapping" $F$.

Now suppose we have a linear transformation: $T: W \to U$.

We can, evidently, define a mapping $F(T)$:

$\mathcal{L}(V,W) \to \mathcal{L}(V,U)$, by

for each $A \in \mathcal{L}(V,W)$, $[F(T)](A) = TA$.

Now here is where it gets interesting:

If we have two linear transformations:

$T:W \to U$ and $S:U \to X$

then $F(ST) = F(S)F(T)$.
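This is just associativity of composition: for any $A \in \mathcal{L}(V,W)$,

$[F(ST)](A) = (ST)A = S(TA) = [F(S)]([F(T)](A))$.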

So $F$ acts like a "super-homomorphism" on linear transformations between vector spaces.

It turns out that the tensor product is a sort of "inverse" (the correct term is adjoint) to this "super-homomorphism" $F$. And so, questions about vector spaces of linear transformations can be turned into questions about tensor products, and vice versa. It's sort of "similar" to how multiplication and factoring are "inverses" of each other: one leads us to make bigger and better things out of smaller things, and one lets us "break down" bigger things into more manageable "bites".
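In symbols (stated for vector spaces, and suppressing the naturality conditions), the adjunction says:

$\mathcal{L}(U \otimes V, W) \cong \mathcal{L}(U, \mathcal{L}(V, W))$

A linear map out of the tensor product on the left corresponds to its "curried" version on the right, $\tilde{b} \mapsto (u \mapsto (v \mapsto \tilde{b}(u \otimes v)))$, and both sides are just bilinear maps $U \times V \to W$ in disguise.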

Now this is speaking rather fast-and-loose, but it's sort of the "why" of things.

****************

Short, but totally mystifying explanation: by definition, the tensor product is a multilinear map that satisfies its UMP.
 
Deveno said:
"Universal objects" (that is, objects which are defined by a universal mapping property), represent, in some sense, the "most general way to do something".

For example, a quotient object in a category that has homomorphisms is "the most general way to annihilate something".

Similarly, a free object is "the most general way to create more structure from less".

The guiding THEME in these kinds of constructions, is that any SPECIFIC quality we wish for test for, is related in some canonical way to the "universal object".

With tensor products, the quality we are seeking to "capture" is *multi-linearity*. The tensor product converts multi-linear maps to a linear map (in the most general sense, an $R$-module homomorphism).

So, in a sense, the map $\otimes: V \times W \to V\otimes W$ is "the grandfather of all bilinear maps", just as the quotient group is the grandfather of all "subgroup contracting maps", and the free group is the "grandfather of all group representations" (generators and relations).

For more than two factors, replace "bilinear" by "multilinear".

One of the defining feature of linear algebra, is that given a vector space $V$, we can create, for any other vector space $W$, a third vector space:

$\mathcal{L}(V,W)$, linear maps from $V$ to $W$.

This defines a "super-mapping":

$W \mapsto \mathcal{L,W}$

(note our variables here are ENTIRE VECTOR SPACES).

Let's call this "super-mapping" $F$.

Now suppose we have a linear transformation: $T: W \to U$.

We can, evidently, define a mapping $F(T)$:

$\mathcal{L}(V,W) \to \mathcal{L}(V,U)$, by

for each $A \in \mathcal{L}(V,W)$, $[F(T)](A) = TA$.

Now here is where it gets interesting:

If we have two linear transformations:

$T:W \to U$ and $S:U \to X$

then $F(ST) = F(S)F(T)$.

So $F$ acts like a "super-homomorphism" on linear transformations between vector spaces.

It turns out that the tensor product is a sort of "inverse" (the correct term is adjoint) to this "super-homomorphism" $F$. And so, questions about vector spaces of linear transformations can be turned into questions of tensor products, and vice versa. It's sort of "similar" to how multiplication and factoring are "inverses" of each other; one leads us to make bigger and better things out of smaller things, and one let's us "break down" bigger things into more manageable "bites".

Now this is speaking rather fast-and-loose, but it's sort of the "why" of things.

****************

Short, but totally mystifying explanation: by definition, the tensor product is a multilinear map that satisfies its UMP.
Thanks Deveno ... That was helpful ... and a bit challenging ...

Still reflecting on what you have said ...

Peter
 