Tensor Algebras - Cooperstein Theorem 10.8

  • #1
Math Amateur
I am reading Bruce N. Cooperstein's book: Advanced Linear Algebra (Second Edition) ... ...

I am focused on Section 10.3 The Tensor Algebra ... ...

I need help in order to get a basic understanding of Theorem 10.8, which is a theorem concerning the direct sum of a family of subspaces as a solution to a universal mapping problem (UMP) ... the theorem is preliminary to tensor algebras ...

I am struggling to understand how the function ##G## as defined in the proof actually gives us ##G \circ \epsilon_i = g_i## ... ... if I see the explicit mechanics of this I may understand the functions involved better ... and hence the whole theorem better ...

Theorem 10.8 (plus some necessary definitions and explanations) reads as follows:
[Image: Cooperstein, Section 10.3, page 364 - statement and proof of Theorem 10.8]

In the above we read the following:

" ... ... Then define

$$G(f) = \sum_{j = 1}^t g_{i_j} (f(i_j))$$

We leave it to the reader to show that this is a linear transformation and if ##G## exists then it must be defined in this way, that is, it is unique. ... ... "

Can someone please help me to ...

(1) demonstrate explicitly, clearly and in detail that ##G(f) = \sum_{j = 1}^t g_{i_j} (f(i_j))## satisfies ##G \circ \epsilon_i = g_i## (if I understand the detail of this then I may well understand the functions involved better, and in turn, understand the theorem better ...)

(2) show that ##G## is a linear transformation and, further, that if ##G## exists then it must be defined in this way, that is, it is unique.
Hope that someone can help ...

Peter
 

  • #2
We are given that ##spt(f)=\{i_1,...,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:

$$G\circ\epsilon_{i}(v_{i})\equiv G(\epsilon_{i}(v_{i}))\equiv\sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))$$

In your earlier thread ##\epsilon_k## was defined as the function from ##V_k## to the direct product ##V## that maps ##v_k## to
##(0,0,...0,v_k,0,...,0)## where ##v_k## is in the ##k##th position. That only covers finite direct sums. However it looks from the above as though Cooperstein has - between the two sections you quoted - moved on to defining and allowing infinite direct sums (because of his use of an index set ##I## of unspecified cardinality, rather than just labelling the component spaces as ##V_1## to ##V_n##). That means that ##\epsilon_k## needs an appropriately modified definition. I'm guessing the definition he's using is something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##.

Applying that to the above equation, we have

$$G\circ\epsilon_{i}(v_{i})\equiv \sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))
=g_{i}(v_i)$$
as required.
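To see the mechanics concretely, here is a small numeric sketch in Python (a toy model with my own names, not Cooperstein's notation): elements of the direct sum ##V## are modelled as dicts with finite support, ##\epsilon_i## inserts a vector at index ##i##, and ##G## sums the ##g_j## over the support.

```python
# Toy model of the direct sum: an element f of V is a dict {index: vector}
# whose keys are its (finite) support. Here every V_i = R^2 and W = R^2.
# All names (eps, G, g, ...) are illustrative, not from the book.

def eps(i, v):
    """The insertion map eps_i : V_i -> V; nonzero only at index i."""
    return {i: v} if any(v) else {}

def G(f, g):
    """G(f) = sum over j in spt(f) of g_j(f(j)); g maps indices to linear maps."""
    total = (0.0, 0.0)
    for j, vj in f.items():
        wj = g[j](vj)
        total = (total[0] + wj[0], total[1] + wj[1])
    return total

# Two arbitrary linear maps g_i : R^2 -> R^2.
g = {
    "a": lambda v: (2 * v[0], 2 * v[1]),   # scaling by 2
    "b": lambda v: (v[1], v[0]),           # swap coordinates
}

v = (3.0, 4.0)
assert G(eps("a", v), g) == g["a"](v)      # (G o eps_a)(v) = g_a(v)
assert G(eps("b", v), g) == g["b"](v)      # (G o eps_b)(v) = g_b(v)
```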
 
  • #3
Thanks so much, Andrew ...

Just working through your post and reflecting on what you have said ...

Thanks again for your help ...

Peter
 
  • #4
And the linearity of ##G## follows from the linearity of the ##g_k##. Say we have ##f_1=\sum_{k=1}^{t_1}\epsilon_{a_{1k}}(v_{1k})## and ##f_2=\sum_{k=1}^{t_2}\epsilon_{a_{2k}}(v_{2k})## where ##spt(f_1)=\{a_{11},...,a_{1t_1}\}## and ##spt(f_2)=\{a_{21},...,a_{2t_2}\}##,
so that ##spt(f_1+f_2)\subseteq spt(f_1)\cup spt(f_2)=\{i_1,...,i_t\}## (with ##t\leq t_1+t_2##), so that we can write
$$f_1=\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k}),\qquad f_2=\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})$$
where, for ##r\in\{1,2\}##, ##u_{rk}=v_{rj}## for the ##j## such that ##a_{rj}=i_k## if ##i_k\in spt(f_r)##, and otherwise ##u_{rk}=0##.

Then we have
$$G(f_1+f_2)=
G\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})+\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})\right)
=\sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})(i_j)+\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})(i_j)\right)$$

$$=\sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{1k})(i_j)\right)+
\sum_{j=1}^t g_{i_j}\left(\sum_{k=1}^{t}\epsilon_{i_k}(u_{2k})(i_j)\right)
$$

$$=\sum_{j=1}^t g_{i_j}\left(f_1(i_j)\right)+
\sum_{j=1}^t g_{i_j}\left(f_2(i_j)\right)
=G(f_1)+G(f_2)$$

Proving that ##G(\lambda f)=\lambda G(f)## follows the general pattern of this proof but is much easier.
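Continuing the toy Python model from post #2 above (reusing `eps` and `G` from that sketch), the linearity argument can be checked numerically:

```python
# Pointwise operations in the direct sum (reusing eps, G and g from the
# sketch in post #2). Indices whose sum cancels to zero leave the support.

def add_f(f1, f2):
    out = {}
    for i in set(f1) | set(f2):
        a = f1.get(i, (0.0, 0.0))
        b = f2.get(i, (0.0, 0.0))
        s = (a[0] + b[0], a[1] + b[1])
        if any(s):
            out[i] = s
    return out

def scale_f(lam, f):
    return {i: (lam * v[0], lam * v[1]) for i, v in f.items()} if lam else {}

f1 = add_f(eps("a", (1.0, 0.0)), eps("b", (0.0, 2.0)))   # spt(f1) = {a, b}
f2 = eps("b", (5.0, -2.0))                               # spt(f2) = {b}

lhs = G(add_f(f1, f2), g)
rhs = tuple(x + y for x, y in zip(G(f1, g), G(f2, g)))
assert lhs == rhs                                        # G(f1+f2) = G(f1)+G(f2)
assert G(scale_f(3.0, f1), g) == tuple(3.0 * x for x in G(f1, g))  # homogeneity
```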
 
  • #5
Finally, uniqueness. We use the fact that ##\{\epsilon_i(v_i)\ :\ i\in I\wedge v_i\in V_i\}## is a spanning set for ##V##. I used that in the first line of the previous post but omitted to point that out. The proof is straightforward, based on the fact that all elements of ##V## have finite support.

Say we have another linear map ##G':V\to W## such that ##\forall i\in I:\ G'\circ \epsilon_i =g_i##. Then, for any ##f\in V##, writing ##f=\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})## where ##spt(f)=\{i_1,...,i_t\}## and ##v_{i_k}=f(i_k)##, we have

$$G'(f)=G'\left(\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})\right)
=\sum_{k=1}^t G'\circ \epsilon_{i_k}(v_{i_k})
=\sum_{k=1}^t g_{i_k}(v_{i_k})
=\sum_{k=1}^t G\circ \epsilon_{i_k}(v_{i_k})
=G\left(\sum_{k=1}^t \epsilon_{i_k}(v_{i_k})\right)=G(f)
$$

So ##G'=G##.
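In the same toy Python model, the uniqueness argument amounts to this: once a linear map agrees with the ##g_i## on the images of the ##\epsilon_i##, its value on every ##f## is pinned down, because ##f## is a finite sum of such images. A quick check (reusing `eps`, `G`, `g` and `add_f` from the earlier sketches):

```python
# f decomposes into its eps_i(f(i)) pieces; linearity then forces any G'
# with G' o eps_i = g_i to take the value sum_k g_{i_k}(v_{i_k}) on f,
# which is exactly G(f).

f = add_f(eps("a", (1.0, 2.0)), eps("b", (3.0, 4.0)))

forced = (0.0, 0.0)
for i, v in f.items():                 # the pieces eps_i(v) with v = f(i)
    w = g[i](v)
    forced = (forced[0] + w[0], forced[1] + w[1])

assert forced == G(f, g)               # G'(f) is forced to equal G(f)
```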
 
  • #6
andrewkirk said:
We are given that ##spt(f)=\{i_1,...,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:

$$G\circ\epsilon_{i}(v_{i})\equiv G(\epsilon_{i}(v_{i}))\equiv\sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))$$

In your earlier thread ##\epsilon_k## was defined as the function from ##V_k## to the direct product ##V## that maps ##v_k## to
##(0,0,...0,v_k,0,...,0)## where ##v_k## is in the ##k##th position. That only covers finite direct sums. However it looks from the above as though Cooperstein has - between the two sections you quoted - moved on to defining and allowing infinite direct sums (because of his use of an index set ##I## of unspecified cardinality, rather than just labelling the component spaces as ##V_1## to ##V_n##). That means that ##\epsilon_k## needs an appropriately modified definition. I'm guessing the definition he's using is something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##.

Applying that to the above equation, we have

$$G\circ\epsilon_{i}(v_{i})\equiv \sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))
=g_{i}(v_i)$$
as required.
Hi Andrew,

Thanks again for your help ...

Just a couple of clarifications, though ...

1. I know that ##f## has finite support, which means ##f## is non-zero at only finitely many ##i \in I## ... but ... I cannot follow how/why we have

$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Could you please explain (perhaps, if you would, slowly and in detail ...) why/how this is true ... and maybe what it means ..
2. You write:

" ... ... something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##. ... ... "##\epsilon_i## has domain ##V_i## and so I understand an expression like ##\epsilon_i (v_i)## ... but your expression ##\epsilon_i(v_i)(j)## has two arguments, namely ##v_i## and ##j## ... ? ... can you explain what is going on ...Sorry to be slow and perhaps over-careful ... but I am trying to ensure that I fully understand the material ...

Peter
 
  • #7
Math Amateur said:
1. I know that ##f## has finite support, which means ##f## is non-zero at only finitely many ##i \in I## ... but ... I cannot follow how/why we have

$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Could you please explain (perhaps, if you would, slowly and in detail ...) why/how this is true ... and maybe what it means ..
It comes from the adaptation of the projection functions ##\pi_i:V\to V_i##, defined in your earlier thread, to the infinite-dimensional case. A little reflection shows that the natural adaptation (which may perhaps be in the intervening passages of Cooperstein) is to define ##\pi_i## by ##\pi_i(f)\equiv f(i)##.

There is a little work to be done to re-prove (a) and (b) from your External Direct Sum thread for the infinite-dimensional case (although I note that Cooperstein didn't even bother proving them in the finite-dimensional case. I think he's a bit slack.), but it should be pretty straightforward.

Taking that as read, we proceed as follows:

##f:I\to \bigcup_{i\in I}V_i## has finite support, so let the support be ##i_1,...,i_t## and let ##v_{i_k}\equiv f(i_k)##.

Next, we use (b)

$$\sum_{i\in I}\epsilon_i\circ \pi_i=I_V$$

to get

$$f=I_Vf=\sum_{i\in I}\epsilon_i\circ \pi_i(f)
=\sum_{i\in I}\epsilon_i\left(\pi_i(f)\right)
=\sum_{i\in I}\epsilon_i\left(f(i)\right)$$
Note that, since each ##\epsilon_i## is linear (so ##\epsilon_i(0)=0##), the terms of this last sum are all zero except when ##i\in\{i_1,...,i_t\}##, so we have

$$f=\sum_{k=1}^t\epsilon_{i_k}\left(f(i_k)\right)
=\sum_{k=1}^t\epsilon_{i_k}\left(v_{i_k}\right)$$

as required.
2. You write:

" ... ... something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##. ... ... "

##\epsilon_i## has domain ##V_i## and so I understand an expression like ##\epsilon_i (v_i)## ... but your expression ##\epsilon_i(v_i)(j)## has two arguments, namely ##v_i## and ##j## ... ? ... can you explain what is going on ...
##\epsilon_i(v_i)## is an element of the direct sum ##V##. Recall that the elements of the direct sum are functions from the index set ##I## to ##\bigcup_{i\in I}V_i##. So ##\epsilon_i(v_i)## is such a function, and can thus be applied to an element ##j## of ##I##. When we do this, we write it as ##\epsilon_i(v_i)(j)##. It can help avoid confusion to write this as ##\left(\epsilon_i(v_i)\right)(j)##.
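In the toy Python model from the earlier posts, this "an element of ##V## is itself a function" point, and the reconstruction ##f=\sum_k\epsilon_{i_k}(f(i_k))##, look like this (reusing `eps` and `add_f` from the sketches above):

```python
# An element of the direct sum is a function I -> union of the V_i; off its
# support it returns the zero vector. So eps_i(v) can be evaluated at any j.

def value_at(f, j):
    return f.get(j, (0.0, 0.0))

e = eps("a", (3.0, 4.0))
assert value_at(e, "a") == (3.0, 4.0)   # (eps_a(v))(a) = v
assert value_at(e, "b") == (0.0, 0.0)   # (eps_a(v))(j) = 0 for j != a

# Reconstruction f = sum over the support of eps_{i_k}(f(i_k)):
f = {"a": (1.0, 2.0), "b": (3.0, 4.0)}
rebuilt = {}
for i in f:
    rebuilt = add_f(rebuilt, eps(i, value_at(f, i)))
assert rebuilt == f
```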
 
  • #8
andrewkirk said:
We are given that ##spt(f)=\{i_1,...,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$

Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:

$$G\circ\epsilon_{i}(v_{i})\equiv G(\epsilon_{i}(v_{i}))\equiv\sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))$$

In your earlier thread ##\epsilon_k## was defined as the function from ##V_k## to the direct product ##V## that maps ##v_k## to
##(0,0,...0,v_k,0,...,0)## where ##v_k## is in the ##k##th position. That only covers finite direct sums. However it looks from the above as though Cooperstein has - between the two sections you quoted - moved on to defining and allowing infinite direct sums (because of his use of an index set ##I## of unspecified cardinality, rather than just labelling the component spaces as ##V_1## to ##V_n##). That means that ##\epsilon_k## needs an appropriately modified definition. I'm guessing the definition he's using is something like that ##\epsilon_i:V_i\to V## such that ##\epsilon_i(v_i)(j)## is zero for all ##j\in I## except ##i##, for which it gives ##v_i##.

Applying that to the above equation, we have

$$G\circ\epsilon_{i}(v_{i})\equiv \sum_{j=1}^t g_{i_j}(\epsilon_{i}(v_{i})(i_j))
=g_{i}(v_i)$$
as required.
Hi Andrew ... thanks again for the help ...

But ... just a clarification ... ...

You write:

"... ...
We are given that ##spt(f)=\{i_1,...,i_t\}##. That means that ##\exists v_{i_1}\in V_{i_1},...,v_{i_t}\in V_{i_t}## such that
$$f=\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... "

and you follow this by writing ... :

" ... ... Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:"I do not follow this ... shouldn't you be replacing f by $$\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... ... ?

Can you explain ...

Peter
 
  • #9
Math Amateur said:
you follow this by writing ... :

" ... ... Then for ##k\in \{1,...,t\}## we have, by replacing ##f## by ##\epsilon_{i}(v_{i})## in the definition of ##G(f)##:"I do not follow this ... shouldn't you be replacing f by $$\epsilon_{i_1}(v_{i_1})+...+\epsilon_{i_t}(v_{i_t})$$ ... ... ... ?
In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. Perhaps I should have added that ##v_i## is an arbitrary element of ##V_i##. Note that the support of ##\epsilon_i(v_i):I\to \bigcup_{i\in I}V_i## is the singleton ##\{i\}##, so the sum in the RHS of the definition of ##G## only has one element when the argument is ##\epsilon_i(v_i)##, so we can discard the summation symbol.

I think Cooperstein has confused the issue by defining ##f## before he defines ##G##, and thereby implying that ##G## somehow depends on ##f##, which it doesn't! It would be better if he had written the following instead:

Define function ##G:V\to W## such that, ##\forall f\in V##, ##G(f)\equiv \sum_{j\in spt(f)} g_j(f(j))##.

Applying that to ##\epsilon_i(v_i)## then gives

$$G(\epsilon_i(v_i))\equiv \sum_{j\in spt(\epsilon_i(v_i))} g_j((\epsilon_i(v_i))(j))
=\sum_{j\in \{i\}} g_j((\epsilon_i(v_i))(j))
=g_i((\epsilon_i(v_i))(i))
=g_i(v_i)
$$
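This cleaner definition is exactly what the `G` in the toy Python sketch from post #2 computes; for ##f=\epsilon_i(v_i)## the support is a singleton, so the sum collapses to one term (reusing `eps`, `G` and `g` from that sketch):

```python
# The support of eps_b(v) is the singleton {"b"}, so G's sum over the
# support has exactly one term.
e = eps("b", (7.0, 1.0))
assert set(e) == {"b"}                    # spt(eps_b(v)) = {b}
assert G(e, g) == g["b"]((7.0, 1.0))      # G(eps_b(v)) = g_b(v)
```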
 
  • #10
andrewkirk said:
In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. Perhaps I should have added that ##v_i## is an arbitrary element of ##V_i##. Note that the support of ##\epsilon_i(v_i):I\to \bigcup_{i\in I}V_i## is the singleton ##\{i\}##, so the sum in the RHS of the definition of ##G## only has one element when the argument is ##\epsilon_i(v_i)##, so we can discard the summation symbol.

I think Cooperstein has confused the issue by defining ##f## before he defines ##G##, and thereby implying that ##G## somehow depends on ##f##, which it doesn't! It would be better if he had written the following instead:

Define function ##G:V\to W## such that, ##\forall f\in V##, ##G(f)\equiv \sum_{j\in spt(f)} g_j(f(j))##.

Applying that to ##\epsilon_i(v_i)## then gives

$$G(\epsilon_i(v_i))\equiv \sum_{j\in spt(\epsilon_i(v_i))} g_j((\epsilon_i(v_i))(j))
=\sum_{j\in \{i\}} g_j((\epsilon_i(v_i))(j))
=g_i((\epsilon_i(v_i))(i))
=g_i(v_i)
$$
Hi Andrew,

Thanks to your posts I now have a much better understanding of what is going on ...

But just one further (minor) issue ...

You write:

" ... ... In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. ... ... "

But ... is ##\epsilon_i(v_i)## really an arbitrary element? ... ... it has the special form of being an element with support equal to a one-element set ... shouldn't we be taking a general element - that is, an element with support equal to an ##n##-element set, where ##n## is any positive integer ... ...

Can you help clarify this issue?

Peter
 
  • #11
Math Amateur said:
You write:

" ... ... In the definition of ##G##, the symbol ##f## stands for any arbitrary element of ##V##. Now ##\epsilon_i(v_i)## is such an element and thus can be slotted in as the argument to ##G## in that definition. ... ... "

But ... is ##\epsilon_i(v_i)## really an arbitrary element? ... ... it has the special form of being an element with support equal to a one-element set ... shouldn't we be taking a general element - that is, an element with support equal to an ##n##-element set, where ##n## is any positive integer ... ...
##f## stands for the arbitrary element of ##V##, not ##\epsilon_i(v_i)##. The latter is a specific element.

What we are doing is substituting a specific element of ##V## for the arbitrary element ##f##, in the first-order logical formula:

$$\forall f\in V:\ G(f)= \sum_{j\in spt(f)} g_j(f(j))$$

This is the type of operation enabled by the axiom schema of substitution (also known as universal instantiation) in first-order logic. The universal quantifier ##\forall## is what enables this substitution.

It's the same as if we take the formula ##\forall x\in\mathbb{R}:\ x^2\geq 0## and substitute the specific element -2 for the arbitrary element ##x##. That gives us the valid formula ##(-2)^2\geq 0##.
 

1. What is a tensor algebra?

A tensor algebra is an algebraic structure built from a vector space and used to describe and manipulate tensors. Given a vector space ##V## over a field ##F##, the tensor algebra ##T(V)## collects all tensor powers of ##V## into a single algebra, with multiplication given by the tensor product. Tensors in turn can represent multilinear physical quantities, such as stresses, velocities, and electromagnetic fields.
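For reference, the standard textbook construction (general background, not a quotation from Cooperstein's book) is

$$T(V)=\bigoplus_{k=0}^{\infty}V^{\otimes k}=F\oplus V\oplus (V\otimes V)\oplus (V\otimes V\otimes V)\oplus\cdots$$

with the product of ##x\in V^{\otimes m}## and ##y\in V^{\otimes n}## given by ##x\otimes y\in V^{\otimes (m+n)}##.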

2. What is Cooperstein's Theorem 10.8?

Theorem 10.8 in Cooperstein's Advanced Linear Algebra states that the external direct sum ##V=\bigoplus_{i\in I}V_i## of a family of vector spaces, together with the maps ##\epsilon_i:V_i\to V##, is a solution to a universal mapping problem: for any vector space ##W## and any family of linear maps ##g_i:V_i\to W##, there is a unique linear transformation ##G:V\to W## with ##G\circ\epsilon_i=g_i## for all ##i\in I##. As discussed in the thread above, it is a preliminary result used in constructing the tensor algebra.

3. How is Cooperstein's Theorem 10.8 useful?

The universal mapping property characterizes the direct sum up to isomorphism and gives a uniform way to build linear maps out of it: to define ##G## on all of ##V##, it suffices to prescribe its restrictions ##g_i## on the component spaces. Constructions of this kind underpin tensor products and tensor algebras, which have important applications in fields such as quantum mechanics and differential geometry.

4. What are some applications of tensor algebras?

Tensor algebras have many applications in physics, engineering, and computer science. They are used to study physical phenomena, such as fluid dynamics and electromagnetism, and in the design of algorithms for data analysis and machine learning. They also play a crucial role in general relativity and quantum field theory.

5. Are there any limitations to Cooperstein's Theorem 10.8?

The theorem is a statement about direct sums, not about tensor products themselves: it guarantees the existence and uniqueness of the linear map ##G##, but it does not by itself construct the tensor algebra. It determines the direct sum only up to isomorphism, and further machinery (tensor products and multilinear maps) is needed before the tensor algebra and its multiplication can be defined.
