Basis of a Tensor Product - Cooperstein - Theorem 10.2

  • #1
Math Amateur
I am reading Bruce N. Cooperstein's book: Advanced Linear Algebra (Second Edition) ...

I am focused on Section 10.1 Introduction to Tensor Products ... ...

I need help with an aspect of Theorem 10.2 regarding the basis of a tensor product ...

Theorem 10.2 reads as follows:
[Image: Cooperstein, Theorem 10.2]

I do not follow the proof of this Theorem: it appears to me that [itex]Z'[/itex], the space spanned by [itex]X'[/itex], is actually equal to [itex]Z[/itex] ... but the Theorem implies it is a proper subset ... otherwise why mention [itex]Z'[/itex] at all?

I will explain my confusion with a simple example involving two vector spaces [itex]V[/itex] and [itex]W[/itex], where [itex]V[/itex] has basis [itex]B_1 = \{ v_1, v_2, v_3 \}[/itex] and [itex]W[/itex] has basis [itex]B_2 = \{ w_1, w_2 \}[/itex].

Following the proof, we set [itex]B = \{ v \otimes w \ | \ v \in B_1, w \in B_2 \}[/itex].

The elements of [itex]B[/itex] are then [itex]v_1 \otimes w_1, \ v_1 \otimes w_2, \ v_2 \otimes w_1, \ v_2 \otimes w_2, \ v_3 \otimes w_1, \ v_3 \otimes w_2[/itex], and we have

[itex]X' = B_1 \times B_2 = \{ (v_1, w_1) , \ (v_1, w_2) , \ (v_2, w_1) , \ (v_2, w_2) , \ (v_3, w_1) , \ (v_3, w_2) \} [/itex]

Now [itex]Z'[/itex] is the subspace of [itex]Z[/itex] spanned by [itex]X'[/itex]. But [itex]B_1[/itex] spans [itex]V[/itex] and [itex]B_2[/itex] spans [itex]W[/itex], so surely [itex]B_1 \times B_2[/itex] spans [itex]V \times W[/itex] ...

... BUT ... this does not seem to be what the Theorem implies (although it is possible under the proof).

Can someone please clarify the above for me and critique my example: how is [itex]Z'[/itex] a proper subset of [itex]Z[/itex]?

Hope someone can help ...

Peter
 

  • #2
Math Amateur said:
But [itex]B_1[/itex] spans [itex]V[/itex] and [itex]B_2[/itex] spans [itex]W[/itex] ... ... so surely [itex]B_1 \times B_2[/itex] spans [itex]V \times W[/itex] ... ...
You didn't say what ##Z## is. See my post about 30 mins ago in your other thread for the formal definition of what I think Cooperstein means it to be. Then ##Z## is not the same as ##V\times W##, so if ##\mathscr{B}_1\times\mathscr{B}_2## did span ##V\times W##, that would not imply anything about spanning ##Z##. But in fact ##V\times W## is not even a vector space, so we cannot meaningfully talk about any set spanning it.

The simplest way to see this is that, in your example, ##\langle 2v_1,0\rangle## and ##2\langle v_1,0\rangle## are different elements of ##Z## because ##\langle 2v_1,0\rangle## and ##\langle v_1,0\rangle## are separate elements of the (uncountably infinite) basis.
EDIT: the above para makes an identification without stating it, that may be misleading. See post 6 below for a correction.

It feels very natural to believe that we can take the 2 outside the angle brackets because, after all, we're dealing with vectors, and can't you always do that with vectors because, you know, linearity? But if we look for a rule that allows us to take the 2 outside the brackets in that case, we don't find one. So inside it has to stay, and the two items remain different.

Also, note that ##Z## is infinite-dimensional, so it cannot be spanned by a finite collection of vectors, which is what the image of ##\mathscr{B}_1\times\mathscr{B}_2## is.

It is only when the quotient is taken, over the subspace of ##Z## generated by all formal sums that we would 'like' to be zero, that we obtain a finite-dimensional space again.
 
  • #3
Andrew,

Thanks so much for your help ... will be working through your post shortly

... sorry I did not present a definition of Z ...

Z is defined in Cooperstein's introduction to Section 10.1 and is used in Theorem 10.1 ... so I am providing relevant text now ...
[Images: Cooperstein, Section 10.1, Parts 1 to 4 — definition and properties of Z]
Hope that helps to give a good idea of the definition and characteristics of Z ...

Again, apologies it was not provided ...

Peter
 

  • #4
andrewkirk said:
You didn't say what ##Z## is. See my post about 30 mins ago in your other thread for the formal definition of what I think Cooperstein means it to be. Then ##Z## is not the same as ##V\times W##, so if ##\mathscr{B}_1\times\mathscr{B}_2## did span ##V\times W##, that would not imply anything about spanning ##Z##. But in fact ##V\times W## is not even a vector space, so we cannot meaningfully talk about any set spanning it.

The simplest way to see this is that, in your example, ##\langle 2v_1,0\rangle## and ##2\langle v_1,0\rangle## are different elements of ##Z## because ##\langle 2v_1,0\rangle## and ##\langle v_1,0\rangle## are separate elements of the (uncountably infinite) basis.
It feels very natural to believe that we can take the 2 outside the angle brackets because, after all, we're dealing with vectors, and can't you always do that with vectors because, you know, linearity? But if we look for a rule that allows us to take the 2 outside the brackets in that case, we don't find one. So inside it has to stay, and the two items remain different.

Also, note that ##Z## is infinite-dimensional, so it cannot be spanned by a finite collection of vectors, which is what the image of ##\mathscr{B}_1\times\mathscr{B}_2## is.

It is only when the quotient is taken, over the subspace of ##Z## generated by all formal sums that we would 'like' to be zero, that we obtain a finite-dimensional space again.
Hi Andrew,

Thanks again for your help ...

Still reflecting on your post ... but just a preliminary clarification ...

You write:

" ... ... in fact ##V\times W## is not even a vector space, so we cannot meaningfully talk about any set spanning it... ... "

But surely we can easily define a vector space on ##V \times W## ... ...

... by defining scalar multiplication by ## c(v,w) = (cv, cw) ## [ ... ... or should it be ## c(v,w) = (cv, w) = (v, cw) ## ]

... ... and addition by ##(v_1, w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)## ... ...

For example when ##V = W = \mathbb{R}## we get the vector space ##\mathbb{R} \times \mathbb{R}##

Can you please comment/explain regarding the above issue ...
Further ... ... I believe ##Z## is a vector space based on ##V \times W## (in this example) ... ... is that correct?

So ... can you elaborate on the character and nature of the elements of ##Z## and on how we may perform arithmetic on them and with them ...

... ... and indeed why/how ##Z## is infinite dimensional ...

I am confused and perplexed regarding the nature of the vector space ##Z## ...

Hope you can help ...

Peter
 
  • #5
The fact that those products of basis vectors form a basis is seen as follows. It should be clear that they span. To see they are independent, it suffices to know the dimension of the tensor product. For this it seems easier to use the fact that linear functions on the tensor product correspond to multilinear functions on the product space, then count the dimension of that space of maps. Since you already have spanning, hence an upper bound on the dimension, it suffices to show the dimension of the space of multilinear functions is at least the product of the dimensions. For that, check that if you have bases of linear functions on each space, then multiplying those together gives a basis of multilinear maps. E.g. the product of two (dual) basis functions ##f_j \times f_k## vanishes on every pair ##\langle u_s, u_t \rangle## of basis vectors except the pair ##\langle u_j, u_k \rangle##.
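mathwonk's closing remark can be checked concretely. The following Python sketch (my own illustrative code, not from Cooperstein; I write the dual basis functionals on the two factors as `f` and `g`) verifies that the product of two dual basis functionals vanishes on every basis pair except the matching one, which is what makes the ##mn## products linearly independent multilinear maps:

```python
# Check: the bilinear map (f_j x g_k)(v, w) = f_j(v) * g_k(w) vanishes
# on every basis pair (u_s, u_t) except the pair (u_j, u_k).

m, n = 3, 2  # dim V = m, dim W = n

def f(j, v):
    """j-th dual basis functional on V: picks coordinate j."""
    return v[j]

def g(k, w):
    """k-th dual basis functional on W: picks coordinate k."""
    return w[k]

def e(i, dim):
    """i-th standard basis vector of F^dim (here F is the rationals/reals)."""
    return [1 if t == i else 0 for t in range(dim)]

for j in range(m):
    for k in range(n):
        for s in range(m):
            for t in range(n):
                value = f(j, e(s, m)) * g(k, e(t, n))
                assert value == (1 if (s, t) == (j, k) else 0)
# Hence the m*n products f_j x g_k are linearly independent bilinear maps.
```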
 
  • #6
@Math Amateur just a quick comment. I committed the crime of unacknowledged identification in my post above, identifying elements of ##Z## with corresponding elements of its base set. When I wrote:
andrewkirk said:
in your example, ##\langle 2v_1,0\rangle## and ##2\langle v_1,0\rangle## are different elements of ##Z## because ##\langle 2v_1,0\rangle## and ##\langle v_1,0\rangle## are separate elements of the (uncountably infinite) basis.
I should have written
andrewkirk said:
in your example, ##\chi_{2v_1}## and ##2\chi_{v_1}## are different elements of ##Z## because ##\chi_{2v_1}## and ##\chi_{v_1}## are separate elements of the (uncountably infinite) basis.

##Z## is uncountably infinite because every vector in the Cartesian product of the ##m## spaces ##V_k## is a separate basis element, and there are uncountably many such vectors. ##\vec v## is a different basis vector from ##2\vec v##, and ##u+v## is independent of both ##u## and ##v##. This is because the sums we are making are formal sums, not vector sums.

Gotta run now. Will look at your other questions later.
 
  • #7
Math Amateur said:
But surely we can easily define a vector space on ##V \times W## ... ...

... by defining scalar multiplication by ## c(v,w) = (cv, cw) ## [ ... ... or should it be ## c(v,w) = (cv, w) = (v, cw) ## ]

... ... and addition by ##(v_1, w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)## ... ...

For example when ##V = W = \mathbb{R}## we get the vector space ##\mathbb{R} \times \mathbb{R}##

Can you please comment/explain regarding the above issue ...
We can make that definition but, until it is made - and Cooperstein doesn't make it - there is no vector space there.
The reason he doesn't make it is that it takes us in the wrong direction - gives a sort of vector space we don't want.
The vector space it will give us if we do things like you indicate is the direct sum, which is written ##V\oplus W##. That is not the sort of vector space we want. It is too small.
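To put rough numbers on "too small" (my own toy illustration, not from the thread): for finite-dimensional spaces, the direct sum has dimension ##m + n##, while the tensor product we are after has dimension ##mn##:

```python
# Dimensions for V = F^m and W = F^n:
#   dim(V (+) W) = m + n   (pairs (v, w) with componentwise operations)
#   dim(V (x) W) = m * n   (basis: all products v_i (x) w_j)
m, n = 3, 2
dim_direct_sum = m + n
dim_tensor = m * n
assert dim_direct_sum == 5
assert dim_tensor == 6
# A bilinear map on V x W needs m*n numbers to specify, which a space
# of dimension m + n cannot accommodate once m and n grow.
```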
 
  • #8
mathwonk said:
The fact that those products of basis vectors form a basis is seen as follows. It should be clear that they span. To see they are independent, it suffices to know the dimension of the tensor product. For this it seems easier to use the fact that linear functions on the tensor product correspond to multilinear functions on the product space, then count the dimension of that space of maps. Since you already have spanning, hence an upper bound on the dimension, it suffices to show the dimension of the space of multilinear functions is at least the product of the dimensions. For that, check that if you have bases of linear functions on each space, then multiplying those together gives a basis of multilinear maps. E.g. the product of two (dual) basis functions ##f_j \times f_k## vanishes on every pair ##\langle u_s, u_t \rangle## of basis vectors except the pair ##\langle u_j, u_k \rangle##.

Thanks for the help mathwonk ...

Reading and re-reading your post ... but having a lot of trouble following it because of the level of abstraction ...

Are you able to tackle my questions about the nature of ##V \times W## ... ?

Sorry to be a bit slow ...

Peter
 
  • #9
andrewkirk said:
We can make that definition but, until it is made - and Cooperstein doesn't make it - there is no vector space there.
The reason he doesn't make it is that it takes us in the wrong direction - gives a sort of vector space we don't want.
The vector space it will give us if we do things like you indicate is the direct sum, which is written ##V\oplus W##. That is not the sort of vector space we want. It is too small.

Oh OK! ... that makes things clearer ...

Will keep thinking around why that takes us in the wrong direction ... and why, exactly, we need a bigger space ...

Still puzzling over exactly what Cooperstein did ... and especially the nature of Z ...

Peter
 
  • #10
Math Amateur said:
Oh OK! ... that makes things clearer ...

Will keep thinking around why that takes us in the wrong direction ... and why, exactly, we need a bigger space ...

Still puzzling over the nature of Z ... and exactly what was done ...

Peter
 
  • #11
Math Amateur said:
can you elaborate on the character and nature of the elements of ##Z## and on how we may perform arithmetic on them and with them ...

... ... and indeed why/how ##Z## is infinite dimensional ...
##V\times W## is the set of all ordered pairs of the form ##\langle v,w\rangle##. If the field ##F## is infinite then so will ##V## and ##W## be, and hence ##V\times W## will also have infinite cardinality.

Now ##V\times W## is the base set for the vector space ##Z##, which means that there is a bijection between elements ##\langle v,w\rangle## of ##V\times W## and elements ##\chi_{\langle v,w\rangle}## of ##Z##. Hence, since ##V\times W## has infinite cardinality, so does the corresponding basis of ##Z##. So ##Z## is infinite-dimensional.

##Z## is the set of all maps from ##V\times W## to ##F## with finite support. Consider such a map ##K## that is nonzero only on ##x_1,...,x_g\in V\times W## and gives values ##a_1,...,a_g## at those points. Then we have

$$K=\sum_{k=1}^g a_k\chi_{x_k}$$

This expresses the vector ##K## as a finite sum of basis elements of ##Z##.

The sum we are using here has the following meaning. Where ##f,f_1,f_2:S\to F## are maps from a set ##S## to a field ##F## and ##a\in F##:

(i) the map ##af:S\to F## is defined to be the map that takes ##s\in S## to ##af(s)\in F##; and
(ii) the map ##(f_1+f_2):S\to F## is defined to be the map that takes ##s\in S## to ##f_1(s)+f_2(s)\in F##
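The definitions (i) and (ii) above can be sketched in a few lines of Python (a minimal illustration with my own representation choices, not Cooperstein's: an element of ##Z## is stored as a dict mapping points of ##V \times W## to their nonzero coefficients):

```python
# Z as the finite-support maps from V x W into F.
# Points of V x W are modelled as hashable tuples; an element of Z is a
# dict {point: coefficient} listing only its nonzero values.

def chi(x):
    """Basis vector chi_x: the map sending x to 1 and everything else to 0."""
    return {x: 1}

def add(f1, f2):
    """Pointwise sum (ii): (f1 + f2)(s) = f1(s) + f2(s)."""
    out = dict(f1)
    for s, a in f2.items():
        out[s] = out.get(s, 0) + a
        if out[s] == 0:
            del out[s]  # keep the support finite and the form canonical
    return out

def scale(a, f):
    """Pointwise scalar multiple (i): (a f)(s) = a * f(s)."""
    return {s: a * b for s, b in f.items()} if a != 0 else {}

# In Z, chi_{(2v, w)} is NOT 2 * chi_{(v, w)}: (2v, w) and (v, w) are
# different points of V x W, hence different basis vectors.
v, two_v, w = (1.0,), (2.0,), (3.0,)   # toy vectors, as tuples
assert scale(2, chi((v, w))) != chi((two_v, w))

# K = sum_k a_k chi_{x_k} recovers any finite-support map:
K = add(scale(3, chi((v, w))), scale(5, chi((two_v, w))))
assert K == {((1.0,), (3.0,)): 3, ((2.0,), (3.0,)): 5}
```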
 
  • #12
andrewkirk said:
##V\times W## is the set of all ordered pairs of the form ##\langle v,w\rangle##. If the field ##F## is infinite then so will ##V## and ##W## be, and hence ##V\times W## will also have infinite cardinality.

Now ##V\times W## is the base set for the vector space ##Z##, which means that there is a bijection between elements ##\langle v,w\rangle## of ##V\times W## and elements ##\chi_{\langle v,w\rangle}## of ##Z##. Hence, since ##V\times W## has infinite cardinality, so does the corresponding basis of ##Z##. So ##Z## is infinite-dimensional.

##Z## is the set of all maps from ##V\times W## to ##F## with finite support. Consider such a map ##K## that is nonzero only on ##x_1,...,x_g\in V\times W## and gives values ##a_1,...,a_g## at those points. Then we have

$$K=\sum_{k=1}^g a_k\chi_{x_k}$$

This expresses the vector ##K## as a finite sum of basis elements of ##Z##.

The sum we are using here has the following meaning. Where ##f,f_1,f_2:S\to F## are maps from a set ##S## to a field ##F## and ##a\in F##:

(i) the map ##af:S\to F## is defined to be the map that takes ##s\in S## to ##af(s)\in F##; and
(ii) the map ##(f_1+f_2):S\to F## is defined to be the map that takes ##s\in S## to ##f_1(s)+f_2(s)\in F##

Andrew,

Just revising your helpful posts trying to understand all you wrote ...

In the above post you write:

"... ... Hence, since ##V\times W## has infinite cardinality, so does the corresponding basis of ##Z##. So ##Z## is infinite-dimensional. ... ..."

BUT ... ##\mathbb{F}## may be a finite field ... isn't it then possible that ##V \times W## is finite?

Can you clarify?

Peter
 
  • #13
andrewkirk said:
@Math Amateur just a quick comment. I committed the crime of unacknowledged identification in my post above, identifying elements of ##Z## with corresponding elements of its base set. When I wrote:

I should have written ...

##Z## is uncountably infinite because every vector in the Cartesian product of the ##m## spaces ##V_k## is a separate basis element, and there are uncountably many such vectors. ##\vec v## is a different basis vector from ##2\vec v##, and ##u+v## is independent of both ##u## and ##v##. This is because the sums we are making are formal sums, not vector sums.

Gotta run now. Will look at your other questions later.

Andrew,

Still thinking and working on and around your very helpful posts ... ...

Could you help me clarify the issue below ...

The basis vectors for ##Z## are of the form ##\chi_x## where ##x \in V \times W##. Now ##(2v, w)## is different from ##(v, w)##, and so we get two basis vectors of ##Z## from these two elements of ##V \times W## ... but I do not see how ##u + v## becomes a basis element (I am taking this to be what you are saying) ... it does not belong to ##V \times W## ...

Can you clarify/explain ... ...

Peter
 
  • #14
Math Amateur said:
now ##(2v, w)## is different from ##(v, w)##, and so we get two basis vectors of ##Z## from these two elements of ##V \times W## ... but I do not see how ##u + v## becomes a basis element (I am taking this to be what you are saying) ... it does not belong to ##V \times W## ...

Can you clarify/explain ... ...
What I had intended to convey (but did not do so as precisely as I could have) was that if ##u,v\in V## and ##w\in W## then ##(u+v,w), (u,w)## and ##(v,w)## are all distinct elements of ##V\times W##, and hence they correspond to three different basis vectors of ##Z##, namely ##\chi_{(u+v,w)},\chi_{(u,w)}## and ##\chi_{(v,w)}##. In particular, we have

$$\chi_{(u+v,w)}\neq \chi_{(u,w)}+\chi_{(v,w)}$$

just as we have

$$\chi_{(\lambda v,w)}\neq \lambda\ \chi_{(v,w)}$$

in the scalar multiplication case (here ##\lambda\in F##).

I hope that makes more sense than my last post.
 
  • #15
Math Amateur said:
In the above post you write:

"... ... Hence, since ##V\times W## has infinite cardinality, so does the corresponding basis of ##Z##. So ##Z## is infinite-dimensional. ... ..."

BUT ... ##\mathbb{F}## may be a finite field ... isn't it then possible that ##V \times W## is finite?

Can you clarify?
You are correct. I haven't had anything to do with vector spaces over finite fields (like ##\mathbb{Z}_p## for ##p## prime), so I forget about them and need to be reminded. An ##n##-dimensional vector space over a field ##F## has cardinality ##|F|^n##, since it is isomorphic to ##F^n##. Hence a finite-dimensional vector space over a finite field has finite cardinality, and so does the Cartesian product of a finite sequence of such spaces.

The point I was trying to make was that the space ##Z## is, in a certain sense, much bigger than either of the vector spaces that are 'multiplied' to make it, and also much bigger than the tensor space that we end up with when we take the quotient.

Say ##V## and ##W## are respectively ##m## and ##n## dimensional over field ##F##. Then
$$|V|=|F|^m,\ \ |W|=|F|^n,\ \ |V\times W|=|F|^{m+n}$$
which is all very reasonable, but then we have
$$|Z|=|F|^{|V\times W|}=|F|^{\left(|F|^{m+n}\right)}$$
which seems just a little excessive. Fortunately it is brought back to reality when we take the quotient, to get the tensor space having a cardinality of
$$|V\otimes W|=|F|^{mn}$$
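A quick check of this arithmetic for a small finite field (my own toy numbers, not from the thread: ##F = \mathbb{Z}_2##, ##m = 2##, ##n = 3##):

```python
# Cardinalities from the post above, for F = Z_2, dim V = 2, dim W = 3.
q, m, n = 2, 2, 3

V_size = q ** m             # |V| = |F|^m = 4
W_size = q ** n             # |W| = |F|^n = 8
prod_size = q ** (m + n)    # |V x W| = |F|^(m+n) = 32
assert V_size * W_size == prod_size

# |Z| = |F|^(|V x W|): when V x W is finite, every map from it into F
# automatically has finite support.
Z_size = q ** prod_size
assert Z_size == 2 ** 32    # already enormous for this tiny example

# After taking the quotient, |V (x) W| = |F|^(mn).
tensor_size = q ** (m * n)
assert tensor_size == 64
```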
 

1. What is a Tensor Product?

A Tensor Product is a mathematical operation that combines two or more mathematical objects to create a new object with specific properties. In the context of linear algebra, a tensor product is a way of combining two vector spaces to create a new, larger vector space.

2. What is the Basis of a Tensor Product?

The Basis of a Tensor Product is a set of vectors that span the entire tensor product space. These vectors are created by taking the tensor product of individual basis vectors from the original vector spaces. They serve as a basis for all possible combinations of the original basis vectors.

3. How is the Basis of a Tensor Product determined?

The Basis of a Tensor Product can be determined by taking the tensor product of the individual basis vectors from the original vector spaces. The resulting set of vectors will span the entire tensor product space and serve as a basis for all possible combinations.

4. What is the significance of Cooperstein's Theorem 10.2?

Cooperstein's Theorem 10.2 states that for any two finite-dimensional vector spaces V and W, the tensor products of basis vectors of V and W form a basis of the tensor product space; in particular, the dimension of the tensor product space is the product of the individual dimensions. This theorem is important because it lets one compute the dimension of a tensor product space directly from the dimensions of the factors.

5. How is Cooperstein's Theorem 10.2 used in scientific research?

Cooperstein's Theorem 10.2 has various applications in scientific research, particularly in fields such as physics and engineering. It is commonly used to analyze and model complex systems that involve multiple vector spaces, such as quantum mechanics and electromagnetism. The theorem allows researchers to easily determine the dimension of the tensor product space, which can then be used to make predictions and solve equations in these fields.
