# Tensor Algebras - Cooperstein Theorem 10.8

Gold Member
I am reading Bruce N. Cooperstein's book: Advanced Linear Algebra (Second Edition) ... ...

I am focused on Section 10.3 The Tensor Algebra ... ...

I need help in order to get a basic understanding of Theorem 10.8, which is a theorem concerning the direct sum of a family of subspaces as the solution to a universal mapping property (UMP) ... the theorem is preliminary to tensor algebras ...

I am struggling to understand how the function ##G## as defined in the proof actually gives us ##G \circ \epsilon_i = g_i## ... ... if I see the explicit mechanics of this I may understand the functions involved better ... and hence the whole theorem better ...

Theorem 10.8 (plus some necessary definitions and explanations) reads as follows:

In the above we read the following:

" ... ... Then define

##G(f) = \sum_{j = 1}^t g_{i_j} (f(i_j)) ##

We leave it to the reader to show that this is a linear transformation and if ##G## exists then it must be defined in this way, that is, it is unique. ... ... "

Can someone please help me to:

(1) demonstrate explicitly, clearly and in detail that ##G(f) = \sum_{j = 1}^t g_{i_j} (f(i_j))## satisfies ##G \circ \epsilon_i = g_i## (if I understand the detail of this then I may well understand the functions involved better, and in turn, understand the theorem better ...)

(2) show that ##G## is a linear transformation and, further, that if ##G## exists then it must be defined in this way, that is, it is unique.

Hope that someone can help ...

Peter

#### Attachments

• 95.1 KB

andrewkirk
Homework Helper
Gold Member
We are given that ##spt(f)=\{i_1,...,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:

$$G\circ\epsilon_{i}(v_{i})\equiv G(\epsilon_{i}(v_{i}))\equiv\sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))$$

In your earlier thread ##\epsilon_k## was defined as the function from ##V_k## to the direct product ##V## that maps ##v_k## to
##(0,0,\dots,0,v_k,0,\dots,0)## where ##v_k## is in the ##k##th position. That only covers finite direct sums. However it looks from the above as though Cooperstein has - between the two sections you quoted - moved on to defining and allowing infinite direct sums (because of his use of an index set ##I## of unspecified cardinality, rather than just labelling the component spaces as ##V_1## to ##V_n##). That means that ##\epsilon_k## needs an appropriately modified definition. I'm guessing the definition he's using is something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##.

Applying that to the above equation, we have

$$G\circ\epsilon_{i}(v_{i})\equiv \sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j)) =g_{i}(v_i)$$
as required.
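The mechanics can be seen concretely in a toy model. In the sketch below (my own illustration, not Cooperstein's setup), every ##V_i## and ##W## is just ##\mathbb{R}##, an element ##f## of the direct sum is a dict listing only its nonzero coordinates (so the dict keys are exactly ##spt(f)##), and each ##g_i## is multiplication by a hypothetical constant ##c_i##:

```python
# Toy model of the construction (my illustration, not Cooperstein's notation):
# every V_i and W is just R, an element f of the direct sum V is a dict
# {i: f(i)} listing only its nonzero coordinates (so dict keys = spt(f)),
# and each g_i : V_i -> W is multiplication by a constant c_i.

c = {1: 2.0, 2: -1.0, 3: 5.0}

def g(i, v):
    """The given linear maps g_i : V_i -> W."""
    return c[i] * v

def epsilon(i, v):
    """Canonical injection: the element of V that is v at index i and 0 elsewhere."""
    return {i: v} if v != 0 else {}

def G(f):
    """G(f) = sum over spt(f) of g_i(f(i))."""
    return sum(g(i, v) for i, v in f.items())

# G o epsilon_i = g_i: the support of epsilon_i(v_i) is {i}, so the sum
# defining G collapses to the single term g_i(v_i).
assert G(epsilon(2, 7.0)) == g(2, 7.0)
assert G(epsilon(3, -4.0)) == g(3, -4.0)
```

The point of the computation above is visible in the last two lines: because ##\epsilon_i(v_i)## has support ##\{i\}##, the sum defining ##G## has only one surviving term, namely ##g_i(v_i)##.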

Math Amateur
Thanks so much, Andrew ...

Just working through your post and reflecting on what you have said ...

Thanks again for your help ...

Peter

andrewkirk
And the linearity of ##G## follows from the linearity of the ##g_k##. Say we have ##f_1=\sum_{k=1}^{t_1}\epsilon_{a_{1k}}(v_{1k})## and ##f_2=\sum_{k=1}^{t_2}\epsilon_{a_{2k}}(v_{2k})##, where ##spt(f_1)=\{a_{11},...,a_{1t_1}\}## and ##spt(f_2)=\{a_{21},...,a_{2t_2}\}##,
so that ##spt(f_1+f_2)\subseteq spt(f_1)\cup spt(f_2)=\{i_1,...,i_t\}## (with ##t\leq t_1+t_2##). We can then write
$$f_1=\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k}),\qquad f_2=\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})$$
where, for ##r\in\{1,2\}##, ##u_{rk}=v_{rj}## for the ##j## such that ##a_{rj}=i_k## if ##i_k\in spt(f_r)##, and otherwise ##u_{rk}=0##.

Then we have
$$G(f_1+f_2)= G\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})+\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})\right) =\sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})(i_j)+\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})(i_j)\right)$$

$$=\sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})(i_j)\right)+ \sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})(i_j)\right)$$

$$=\sum_{j=1}^t g_{i_j}\left(f_1(i_j)\right)+ \sum_{j=1}^t g_{i_j}\left(f_2(i_j)\right) =G(f_1)+G(f_2)$$

Proving that ##G(\lambda f)=\lambda G(f)## follows the general pattern of this proof but is much easier.
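The additivity argument can be checked numerically in the same toy dict model (again my own illustration, with hypothetical constants ##c_i## playing the role of the ##g_i##). Note that a coordinate can cancel in ##f_1+f_2##, which is exactly why the proof only uses ##spt(f_1+f_2)\subseteq spt(f_1)\cup spt(f_2)## rather than equality:

```python
# Additivity of G in a toy dict model (my illustration): an element of the
# direct sum is a dict of its nonzero coordinates, and each g_i is
# multiplication by a constant c_i.

c = {1: 2.0, 2: -1.0, 3: 5.0}

def G(f):
    return sum(c[i] * v for i, v in f.items())

def add(f1, f2):
    """Pointwise sum in the direct sum; cancelled coordinates leave the support."""
    out = {}
    for i in set(f1) | set(f2):
        s = f1.get(i, 0) + f2.get(i, 0)
        if s != 0:
            out[i] = s
    return out

f1 = {1: 3.0, 2: 4.0}    # spt(f1) = {1, 2}
f2 = {2: -4.0, 3: 1.0}   # spt(f2) = {2, 3}

# spt(f1 + f2) = {1, 3} is a proper subset of spt(f1) u spt(f2) here,
# and G(f1 + f2) = G(f1) + G(f2) still holds.
assert G(add(f1, f2)) == G(f1) + G(f2)
```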

andrewkirk
Finally, uniqueness. We use the fact that ##\{\epsilon_i(v_i)\ :\ i\in I\wedge v_i\in V_i\}## is a spanning set for ##V##. I used that in the first line of the previous post but omitted to point that out. The proof is straightforward, based on the fact that all elements of ##V## have finite support.

Say we have another linear map ##G':V\to W## such that ##\forall i\in I:\ G'\circ \epsilon_i =g_i##. Then, for any ##f\in V##, writing ##f=\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})## (with ##\{i_1,...,i_t\}=spt(f)## and ##v_{i_k}=f(i_k)##), we have

$$G'(f)=G'\left(\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})\right) =\sum_{k=1}^t G'\circ \epsilon_{i_k}(v_{i_k}) =\sum_{k=1}^t g_{i_k}(v_{i_k}) =\sum_{k=1}^t G\circ \epsilon_{i_k}(v_{i_k}) =G\left(\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})\right)=G(f)$$

So ##G'=G##.
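The forcing argument can be mirrored in the toy dict model (my illustration, hypothetical constants ##c_i## as before): a competing ##G'## about which we only "know" linearity and its values on the injected elements is computed by decomposing ##f## over its support, and so necessarily lands on the same value as ##G##:

```python
# Uniqueness in a toy dict model (my illustration): all we "know" about a
# competing linear map G' is that G'(epsilon_i(v_i)) = g_i(v_i). Decomposing
# f over its (finite) support and using linearity leaves no freedom in G'(f).

c = {1: 2.0, 2: -1.0, 3: 5.0}

def g(i, v):
    return c[i] * v

def G(f):
    return sum(g(i, v) for i, v in f.items())

def G_prime(f):
    # The proof's chain: G'(f) = sum_k G'(epsilon_{i_k}(v_{i_k})) = sum_k g_{i_k}(v_{i_k}).
    total = 0.0
    for i_k, v_k in f.items():
        total += g(i_k, v_k)   # value on each injected piece is pinned to g_{i_k}
    return total

f = {1: 1.0, 2: 2.0, 3: 3.0}
assert G_prime(f) == G(f)
```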

Math Amateur

Hi Andrew,

Thanks again for your help ...

Just a couple of clarifications, though ...

1. I know that ##f## has finite support, which means ##f## is non-zero at only finitely many ##i \in I## ... but ... I cannot follow how/why we have

$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Could you please explain (perhaps, if you would, slowly and in detail ...) why/how this is true ... and maybe what it means ...

2. You write:

" ... ... something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##. ... ... "

##\epsilon_i## has domain ##V_i## and so I understand an expression like ##\epsilon_i (v_i)## ... but your expression ##\epsilon_i(v_i)(j)## has two arguments, namely ##v_i## and ##j## ... ??? ... can you explain what is going on ...

Sorry to be slow and perhaps over-careful ... but I am trying to ensure that I fully understand the material ...

Peter

andrewkirk
1. I know that ##f## has finite support, which means ##f## is non-zero at only finitely many ##i \in I## ... but ... I cannot follow how/why we have

$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Could you please explain (perhaps, if you would, slowly and in detail ...) why/how this is true ... and maybe what it means ...
It comes from the adaptation of the projection functions ##\pi_i:V\to V_i##, defined in your earlier thread, to the case of an infinite index set. A little reflection shows that the natural adaptation (which may perhaps be in the intervening passages of Cooperstein) is to define ##\pi_i## by ##\pi_i(f)\equiv f(i)##.

There is a little work to be done to re-prove (a) and (b) from your External Direct Sum thread for the case of an infinite index set (although I note that Cooperstein didn't even bother proving them in the finite case. I think he's a bit slack.), but it should be pretty straightforward.

Taking that as read, we proceed as follows:

##f:I\to \bigcup_{i\in I}V_i## has finite support, so let the support be ##\{i_1,...,i_t\}## and let ##v_{i_k}\equiv f(i_k)##.

Next, we use (b), where ##I_V## denotes the identity map on ##V##:

$$\sum_{i\in I}\epsilon_i\circ \pi_i=I_V$$

to get

$$f=I_Vf=\sum_{i\in I}\epsilon_i\circ \pi_i(f) =\sum_{i\in I}\epsilon_i\left(\pi_i(f)\right) =\sum_{i\in I}\epsilon_i\left(f(i)\right)$$
Note that, by the linearity of ##\epsilon_i##, the elements of this last sum are all zero except when ##i\in\{i_1,...,i_t\}##, so we have

$$f=\sum_{k=1}^t\epsilon_{i_k}\left(f(i_k)\right) =\sum_{k=1}^t\epsilon_{i_k}\left(v_{i_k}\right)$$

as required.
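The reconstruction ##f=\sum_{k=1}^t\epsilon_{i_k}(f(i_k))## can be acted out in a toy dict model (my illustration: each ##V_i## is ##\mathbb{R}## and a dict lists only the nonzero coordinates of ##f##, so its keys are exactly the support):

```python
# Acting out f = sum_k epsilon_{i_k}(f(i_k)) in a toy dict model (my
# illustration, not the book's notation).

def epsilon(i, v):
    """Canonical injection: v at index i, zero elsewhere."""
    return {i: v} if v != 0 else {}

def add(f1, f2):
    """Pointwise sum; coordinates that cancel leave the support."""
    out = dict(f1)
    for i, v in f2.items():
        s = out.get(i, 0) + v
        if s != 0:
            out[i] = s
        elif i in out:
            del out[i]
    return out

f = {1: 3.0, 4: -2.0, 7: 5.0}   # spt(f) = {1, 4, 7}

rebuilt = {}
for i in f:                     # the sum runs over the finite support only
    rebuilt = add(rebuilt, epsilon(i, f[i]))

assert rebuilt == f             # summing the injected pieces rebuilds f
```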
2. You write:

" ... ... something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##. ... ... "

##\epsilon_i## has domain ##V_i## and so I understand an expression like ##\epsilon_i (v_i)## ... but your expression ##\epsilon_i(v_i)(j)## has two arguments, namely ##v_i## and ##j## ... ??? ... can you explain what is going on ...
##\epsilon_i(v_i)## is an element of the direct sum ##V##. Recall that the elements of the direct sum are functions from the index set ##I## to ##\bigcup_{i\in I}V_i##. So ##\epsilon_i(v_i)## is such a function, and can thus be applied to an element ##j## of ##I##. When we do this, we write it as ##\epsilon_i(v_i)(j)##. It can help avoid confusion to write this as ##\left(\epsilon_i(v_i)\right)(j)##.
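The "two arguments" are really iterated function application, as in a curried function. A small Python closure (illustrative only, with ##0## standing in for each zero vector) makes the two stages explicit:

```python
# "epsilon_i(v_i)(j)" is iterated application: epsilon_i(v_i) is itself a
# function on the index set I, which we then apply to j (illustrative model).

def epsilon(i):
    def inject(v_i):
        def element(j):
            # This inner function is the element of V: a map I -> union of V_j.
            return v_i if j == i else 0
        return element
    return inject

elem = epsilon(5)(9.0)   # first apply epsilon_5 to v_5 = 9.0 ...
assert elem(5) == 9.0    # ... then apply the resulting element to an index
assert elem(3) == 0      # zero everywhere outside the support {5}
```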

Math Amateur

Hi Andrew ... thanks again for the help ...

But ... just a clarification ... ...

You write:

"... ...
We are given that ##spt(f)=\{i_1,....,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... "

and you follow this by writing ... :

" ... ... Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:"

I do not follow this ... shouldn't you be replacing f by $$\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... ... ???

Can you explain ...

Peter

andrewkirk
you follow this by writing ... :

" ... ... Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:"

I do not follow this ... shouldn't you be replacing f by $$\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... ... ???
In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. Perhaps I should have added that ##v_i## is an arbitrary element of ##V_i##. Note that the support of ##\epsilon_i(v_i):I\to \bigcup_{i\in I}V_i## is the singleton ##\{i\}##, so the sum in the RHS of the definition of ##G## only has one term when the argument is ##\epsilon_i(v_i)##, so we can discard the summation symbol.

I think Cooperstein has confused the issue by defining ##f## before he defines ##G##, and thereby implying that ##G## somehow depends on ##f##, which it doesn't! It would be better if he had written the following instead:

Define the function ##G:V\to W## such that, ##\forall f\in V##, ##G(f)\equiv \sum_{j\in spt(f)} g_j(f(j))##.

Applying that to ##\epsilon_i(v_i)## then gives

$$G(\epsilon_i(v_i))\equiv \sum_{j\in spt(\epsilon_i(v_i))} g_j((\epsilon_i(v_i))(j)) =\sum_{j\in \{i\}} g_j((\epsilon_i(v_i))(j)) =g_i((\epsilon_i(v_i))(i)) =g_i(v_i)$$
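The collapse of the support-indexed sum is immediate in a toy dict model (my own sketch, with each ##g_j## multiplication by a hypothetical constant ##c_j##): the dict's keys are exactly ##spt(f)##, so the sum automatically runs over the support only.

```python
# The support-indexed definition of G in a toy dict model (my sketch, not
# Cooperstein's notation): dict keys are exactly spt(f).

c = {1: 2.0, 2: -1.0, 3: 5.0}

def epsilon(i, v):
    return {i: v} if v != 0 else {}

def G(f):
    return sum(c[j] * f[j] for j in f)   # j ranges over spt(f) only

# spt(epsilon_i(v_i)) = {i}: the sum has exactly one term, g_i(v_i).
assert G(epsilon(3, 2.0)) == c[3] * 2.0
```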

Math Amateur

Hi Andrew,

Thanks to your posts I now have a much better understanding of what is going on ...

But just one further (minor) issue ...

You write:

" ... ... In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. ... ... "

But ... is ##\epsilon_i(v_i)## really an arbitrary element? ... ... it has the special form of being an element with support equal to a one-element set ... shouldn't we be taking a general element - that is, an element with support equal to an ##n##-element set where ##n## is any positive integer ... ...

Can you help clarify this issue?

Peter

andrewkirk
You write:

" ... ... In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. ... ... "

But ... is ##\epsilon_i(v_i)## really an arbitrary element? ... ... it has the special form of being an element with support equal to a one-element set ... shouldn't we be taking a general element - that is, an element with support equal to an ##n##-element set where ##n## is any positive integer ... ...
##f## stands for the arbitrary element of ##V##, not ##\epsilon_i(v_i)##. The latter is a specific element.

What we are doing is substituting a specific element of ##V## for the arbitrary element ##f##, in the first-order logical formula:

$$\forall f\in V:\ G(f)= \sum_{j\in spt(f)} g_j(f(j))$$

This is the type of operation enabled by the axiom schema of substitution (aka instantiation), which appears as Q5 in one standard axiomatisation of first-order logic. The universal quantifier ##\forall## is what enables this substitution.

It's the same as if we take the formula ##\forall x\in\mathbb{R}:\ x^2\geq 0## and substitute the specific element -2 for the arbitrary element ##x##. That gives us the valid formula ##(-2)^2\geq 0##.
