Example of an algebra tensor product

In summary: So each polynomial in ##P[X,Y]## can be written as a finite sum of polynomials in ##X## times polynomials in ##Y##, as you wrote. (I was thinking in terms of the dual space, as usual!) And yes, ##P[X]## and ##P[Y]## are subalgebras of ##P[X,Y]##. But the multiplication in ##P[X,Y]## is not the componentwise pairing of a Cartesian product; it is ordinary multiplication of polynomials. So in this case the tensor product is not the Cartesian product: the simple tensor ##p(X)\otimes q(Y)## is identified with the polynomial ##p(X)q(Y)##.
  • #1
Geofleur
Science Advisor
Gold Member
On pages 67 & 68 of Hassani's mathematical physics book, he gives the following definition:

"Let ## \mathcal{A} ## and ## \mathcal{B} ## be algebras. The the vector space tensor product ## \mathcal{A} \otimes \mathcal{B} ## becomes an algebra tensor product if we define the product

## (\mathbf{a}_1 \otimes \mathbf{b}_1)(\mathbf{a}_2\otimes\mathbf{b}_2)=\mathbf{a}_1\mathbf{a}_2\otimes\mathbf{b}_1\mathbf{b}_2 ##

on ## \mathcal{A}\otimes\mathcal{B} ##."

He goes on to say that, because the spaces ##\mathcal{A}\otimes\mathcal{B} ## and ## \mathcal{B}\otimes\mathcal{A} ## are isomorphic, we require that ## \mathbf{a}\otimes\mathbf{b} = \mathbf{b}\otimes\mathbf{a} ## for all ## \mathbf{a}\in\mathcal{A} ## and ## \mathbf{b}\in\mathcal{B}##.

So far, so good. But then he goes on to say that this last requirement is important when an algebra ## \mathcal{A} ## is written as the tensor product of two of its subalgebras ##\mathcal{B} ## and ##\mathcal{C}##; also, that ## \otimes ## in such a case is identified with the multiplication in ## \mathcal{A}##.

I have been trying for a week now to come up with an example to help me make sense of these remarks. Firstly, does anyone know of an example of an algebra that can be written as the tensor product of two of its own subalgebras? Secondly, I have always thought that ## \mathbf{a}\otimes\mathbf{b} ## is just shorthand for an element, ## (|a\rangle, |b\rangle) ##, of the Cartesian product of the underlying vector spaces. So how can ## \otimes ## be the same as the multiplication in ## \mathcal{A} ##?
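As a quick numerical sanity check of the quoted product rule, ⊗ can be modeled by the Kronecker product of matrices (a minimal sketch in Python/numpy; the random matrices and names are my own choices, not from Hassani):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two elements of the algebra A (2x2 matrices) and two of B (3x3 matrices).
A1, A2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
B1, B2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

# Model a ⊗ b by the Kronecker product kron(a, b); then
# (a1 ⊗ b1)(a2 ⊗ b2) = a1 a2 ⊗ b1 b2 is the mixed-product property:
lhs = np.kron(A1, B1) @ np.kron(A2, B2)
rhs = np.kron(A1 @ A2, B1 @ B2)
print(np.allclose(lhs, rhs))  # True
```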
 
  • #2
That's not an easy task. The tensor product is kind of a universal construction. (Please don't urge me to prove it in the strict homological sense; I think it is true even there, but I'm not sure.) To make it useful one usually factors something out. Even your example is strictly speaking a quotient: ##\mathcal{A} ⊗ \mathcal{B} / \langle a⊗b - b⊗a \rangle##. So what comes to my mind to answer your first question is a trivial example. The span of all tensor products of two vectors is the whole matrix algebra, and these products can also be embedded in it. The tensor product of just two vectors can be seen as the matrix product of a column vector with a row vector, or vice versa. (I remember we had an example in a previous thread where the quotient represented Pauli's exclusion principle. I just forgot whether it was a tensor algebra or a Graßmann algebra, to which similar properties with respect to universality hold.)
To your second question: a tensor product is bilinear (like matrix multiplication), whereas the Cartesian product is by definition just a concatenation of components, so, yes, there is a difference.
The tensor product ⊗ can be the multiplication in an algebra, e.g. if there is no other multiplication, either because none is defined or because none is considered. In the case ##\mathcal{A} = \mathcal{B} ⊗ \mathcal{C}## for subalgebras of ##\mathcal{A}##, it looks natural to consider the multiplication defined by the tensor product.
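A small numpy sketch of the first point above, that the span of all tensor (outer) products of two vectors is the whole matrix algebra; the basis and the sample matrix are my own choices:

```python
import numpy as np

n = 3
e = np.eye(n)  # standard basis vectors e_0, ..., e_{n-1}

def tensor(v, w):
    # The tensor product of two vectors, realized as column times row:
    return np.outer(v, w)  # an n x n matrix of rank <= 1

# e_i ⊗ e_j is the matrix unit E_ij; these span all n x n matrices,
# so any matrix is a finite sum of simple tensors:
M = np.arange(float(n * n)).reshape(n, n)
rebuilt = sum(M[i, j] * tensor(e[i], e[j]) for i in range(n) for j in range(n))
print(np.allclose(M, rebuilt))  # True
```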

I'm sure you wanted to hear something more exciting. But excitement comes through quotients or subalgebras.
 
  • Like
Likes Geofleur
  • #3
Let ##P[X,Y]## be the ring of polynomials in the commuting indeterminates ##X## and ##Y## over a commutative ring ##R## (with unit). ##P[X,Y]## is an ##R##-algebra under polynomial multiplication. The ##R##-algebras ##P[X]## and ##P[Y]## are subalgebras.

Let ##H:P[X]⊗P[Y] \rightarrow P[X,Y]## be the ##R##-algebra homomorphism defined by ##H(p(X)⊗q(Y)) = p(X)q(Y)##. ##H## is clearly injective. ##H## is onto since every polynomial in ##P[X,Y]## can be written as a product of the form, ##p(X)q(Y)##.

In the Cartesian product of vector spaces, ##(rV,W)## and ##(V,rW)## are different. But ##(rV)⊗_{R}W ## and ##V⊗_{R}rW ## are the same.
 
Last edited:
  • Like
Likes fresh_42 and Geofleur
  • #4
@lavinia: Let's see if I understand correctly what you said. If we take the commutative ring, ## R ##, to be ## \mathbb{Z} ##, then ## P[X,Y] ## has elements such as ## 1 + 2XY + X^2 + 5Y^3 = 1 + 2YX + X^2 + 5Y^3 ##. If this is right, then it's clear to me that ## P[X] ## and ## P[Y] ## are subalgebras of ## P[X,Y] ##. When you say that every polynomial in ## P[X,Y] ## can be written as a product ## p(X)q(Y) ##, does that mean, e.g., that ## p(X,Y) = X^2 + 2XY + Y^3 ## can be written as a product of a polynomial in ## X ## times one in ## Y ##? Or were you just saying that each individual term, such as ## 2XY ##, can be written in this way?

I am not sure I know what the ## rV ## in ## (rV,W) ## means. If the tensor product in the polynomial example just amounts to multiplication of polynomials in the commuting indeterminates ## X ## and ## Y ##, I think I see how the order in the tensor product would not matter. The homomorphism ## H ## converts the Cartesian products into multiplications, so that reversing the order in the product has no effect on the result.
 
  • #5
lavinia said:
##H## is onto since every polynomial in ##P[X,Y]## can be written as a product of the form, ##p(X)q(Y)##.
I don't understand this either. Maybe it should be "every polynomial in ##P[X,Y]## can be written as a sum of products of the form, ##p(X)q(Y)##"?
 
  • #6
Geofleur said:
@lavinia: Let's see if I understand correctly what you said. If we take the commutative ring, ## R ##, to be ## \mathbb{Z} ##, then ## P[X,Y] ## has elements such as ## 1 + 2XY + X^2 + 5Y^3 = 1 + 2YX + X^2 + 5Y^3 ##. If this is right, then it's clear to me that ## P[X] ## and ## P[Y] ## are subalgebras of ## P[X,Y] ##. When you say that every polynomial in ## P[X,Y] ## can be written as a product ## p(X)q(Y) ##, does that mean, e.g., that ## p(X,Y) = X^2 + 2XY + Y^3 ## can be written as a product of a polynomial in ## X ## times one in ## Y ##? Or were you just saying that each individual term, such as ## 2XY ##, can be written in this way?

You are right. What I said is wrong. What is correct is that every polynomial can be written as a sum, ##p(X,Y) = Σr_{m,n}X^m Y^n##. So ##H(Σr_{m,n}X^m⊗Y^n) = p##.
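A small sympy sketch of this decomposition, using the example polynomial from post #4 (the variable names are my own):

```python
import sympy as sp

X, Y = sp.symbols('X Y')
p = X**2 + 2*X*Y + Y**3

# Collect the coefficients r_{m,n} of the monomials X^m Y^n:
terms = sp.Poly(p, X, Y).terms()   # e.g. [((2, 0), 1), ((1, 1), 2), ((0, 3), 1)]

# Each term r_mn * X^m * Y^n is H applied to the simple tensor r_mn * (X^m ⊗ Y^n);
# summing them recovers p:
rebuilt = sum(r * X**m * Y**n for (m, n), r in terms)
print(sp.expand(rebuilt - p) == 0)  # True
```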
 
  • #7
Samy_A said:
I don't understand this either. Maybe it should be "every polynomial in ##P[X,Y]## can be written as a sum of products of the form, ##p(X)q(Y)##"?
Yes. My error.
 
  • #8
Geofleur said:
@lavinia: I am not sure I know what the ## rV ## in ## (rV,W) ## means. If the tensor product in the polynomial example just amounts to multiplication of polynomials in the commuting indeterminates ## X ## and ## Y ##, I think I see how the order in the tensor product would not matter. The homomorphism ## H ## converts the Cartesian products into multiplications, so that reversing the order in the product has no effect on the result.

## (rV,W) ## and ## (V,rW) ## are not in the tensor product but in the Cartesian product. I was trying to illustrate the difference between the Cartesian product and the tensor product. ##r## here is an element of the ring, ##R##.
 
  • #9
lavinia said:
## (rV,W) ## and ## (V,rW) ## are not in the tensor product but in the Cartesian product. I was trying to illustrate the difference between the Cartesian product and the tensor product. ##r## here is an element of the ring, ##R##.

Ah - I understand!
 
  • #10
lavinia said:
## (rV,W) ## and ## (V,rW) ## are not in the tensor product but in the Cartesian product. I was trying to illustrate the difference between the Cartesian product and the tensor product. ##r## here is an element of the ring, ##R##.
This is especially important when it comes to coordinates. It means that, e.g. for multiples of ##1## (the center of the matrix algebra), you cannot say to which factor this scalar belongs. It can freely move from one factor to the other, or "outside". In a Cartesian product you have to decide.
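A numerical illustration of this point, again modeling ⊗ by the Kronecker product (a sketch; the vectors and the scalar are my own choices):

```python
import numpy as np

r = 5.0
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0, 5.0])

# As elements of the Cartesian product, (r*v, w) and (v, r*w) are different
# pairs, since their first components already differ:
print(np.array_equal(r * v, v))                           # False

# In the tensor product the scalar moves freely between the factors,
# or sits "outside":
print(np.allclose(np.kron(r * v, w), np.kron(v, r * w)))  # True
print(np.allclose(np.kron(r * v, w), r * np.kron(v, w)))  # True
```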
 
  • #11
In algebraic geometry one sets up an equivalence of categories between affine algebraic varieties and certain commutative rings, or between abstract affine schemes and all commutative rings. This correspondence is arrow reversing, i.e. if X, Y are the varieties whose rings of functions are R and S respectively, then a geometric map X-->Y corresponds to a ring map S-->R, by pulling back polynomial functions on Y to polynomial functions on X.

Notice the universal property of a tensor product R⊗S is that a ring map out of R⊗S is determined uniquely by a pair of ring maps, one out of R and one out of S. Reversing the arrows, if Z is the variety corresponding to the tensor product ring, we see the corresponding geometric maps determine a unique map into Z for every pair of maps, one into X and one into Y. Does that sound familiar? That is the mapping property of a geometric product space. Thus Z must be the product space Z ≈ X×Y.

I.e. the tensor product of the coordinate rings of two affine algebraic varieties is the coordinate ring of the product of those varieties. Since the coordinate ring of affine n-space is the polynomial ring in n variables, and the product of the affine spaces k^n and k^m is k^(n+m), a special case of this is the fact that the tensor product of polynomial rings in n and m variables is just the polynomial ring in n+m variables.

So to answer the request for insight into when an algebra A is the tensor product of two subalgebras B and C, this occurs in the commutative case precisely when the affine variety determined by A is the geometric product of two of its affine subvarieties.
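A tiny sympy sketch of this in the simplest case I can think of (my own choice of variety and function, not from the linked notes): the parabola ##C = V(y - x^2)## has coordinate ring ##k[x,y]/(y - x^2) \cong k[x]##, and a function on ##C \times \mathbb{A}^1## decomposes into sums of products of functions on the factors, i.e. the coordinate ring of the product is ##k[C] \otimes k[\mathbb{A}^1]##.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# A function on C x A^1, where C = V(y - x^2) has coordinates (x, y) with y = x^2:
f = y*t + x**3 + 3

# Restrict to the product variety by imposing the relation y = x^2:
f_restricted = sp.expand(f.subs(y, x**2))   # x**3 + x**2*t + 3

# Write it as a sum of (function on C) * (function on A^1),
# i.e. as an element of k[C] ⊗ k[A^1]:
terms = sp.Poly(f_restricted, x, t).terms()
decomposition = [(c * x**m, t**n) for (m, n), c in terms]
print(decomposition)                        # [(x**3, 1), (x**2, t), (3, 1)]
```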

Here is a writeup of a related exercise in Hartshorne:

http://sierra.nmsu.edu/morandi/oldwebpages/Math683Fall2013/Product.pdf

The following notes are more detailed; see especially pages 12-15:

https://www.math.upenn.edu/~siegelch/Notes/ag.pdf

Note further that if A and B are algebras over a third ring C, then the tensor product over C is a "fibered product", i.e. a product in which all the varieties concerned are equipped with a map into the variety of C, and we are taking a product of these maps. In the elementary case we assumed the ring C is a field, and hence its variety is a point, so nothing more is obtained from this additional structure.
 
Last edited:
  • Like
Likes Geofleur and lavinia
  • #12
Your post is forcing me to learn some algebraic geometry, which I view as a good thing! I will see if I can come up with a good concrete example that illustrates what you said :-)

EDIT: I'm going to have to work harder than I initially thought, but I'm determined :mad:
 
Last edited:
  • #13
OK, I ordered a copy of Miles Reid's book. Maybe I'll just start a new thread once I think I've got something - this is going to take a while.
 
  • #14
Miles Reid's Undergraduate Algebraic Geometry is a good book, but will not give you this categorical notion of affine products in relation to tensor products. But the first 15 pages of the second link I gave above will: the notes from Charles Siegel at UPenn, based on the terse little book Algebraic Varieties, by George Kempf.
https://www.amazon.com/dp/0521426138/?tag=pfamazon01-20

I first learned it from Mumford's "red book" of algebraic varieties.

https://www.amazon.com/dp/354063293X/?tag=pfamazon01-20

Here's a cheaper used copy, but maybe without the nice little addition of his lectures on curves from Michigan:

http://www.abebooks.com/servlet/SearchResults?an=david+mumford&sts=t&tn=red+book
 
  • Like
Likes Geofleur
  • #15
mathwonk said:
Miles Reid's Undergraduate Algebraic Geometry is a good book, but will not give you this categorical notion of affine products in relation to tensor products. But the first 15 pages of the second link I gave above will: the notes from Charles Siegel at UPenn, based on the terse little book Algebraic Varieties, by George Kempf.
https://www.amazon.com/Algebraic-Varieties-Mathematical-Society-Lecture/dp/0521426138/ref=sr_1_1?s=books&ie=UTF8&qid=1449594305&sr=1-1&keywords=george+kempf,+algebraic+varieties

I first learned it from Mumford's "red book" of algebraic varieties.

https://www.amazon.com/Red-Book-Varieties-Schemes-Mathematics/dp/354063293X/ref=sr_1_1?s=books&ie=UTF8&qid=1449594259&sr=1-1&keywords=mumford's+red+book

Here's a cheaper used copy, but maybe without the nice little addition of his lectures on curves from Michigan:

http://www.abebooks.com/servlet/SearchResults?an=david+mumford&sts=t&tn=red+book
Every time I have a look in mine I find it rather "categorical" and more like a textbook on local rings. Besides Strassen's work on algorithmic manifolds I've never seen much of an application of it. Can you give me some hints where this theory is of any use?
 
  • #16
Well, what I meant by not categorical was that Miles Reid treats varieties and products as embedded, not in terms of their mapping properties as Mumford does. Miles' book is a mixture of abstract and concrete approaches, as he says in his "woffle", and he does discuss the category of affine varieties: e.g., he discusses the static relationship between subvarieties of affine space and ideals of polynomial rings early on, and on page 69 he gives the dynamic correspondence between maps in the two categories, varieties and algebras. But since he does not assume a knowledge of tensor products on the part of the reader, he cannot give the categorical treatment of products using them.

As to what algebraic geometry is useful for, it depends on who you are and what uses interest you. Reid refers to applications in number theory, computer algebra, string theory, and so on, and I, for instance, just used it to answer the OP's question about the structure of tensor product algebras. When I lectured on Jacobian varieties at the International Centre for Theoretical Physics in 1987, I understood the physicists there intended to use what they learned in string theory. The first use of varieties Reid refers to in his woffle that caught my eye is at the bottom of page 4, line -5, where he says we can have quite a lot of fun with them. I am not familiar with Strassen's use of it in algorithmic manifolds, if that is the area of interest to you. Is this the algorithm for fast matrix multiplication? Or an algorithm for classifying topological manifolds? In my experience, a subject that one is aware of can be found a use for, but not ones of which one is ignorant. I at least have found that to be the case when someone else more sophisticated or better educated than me used something unfamiliar to me to solve a problem I was working on unsuccessfully. I.e. as a general rule, one is advised to learn a bit of topics generally thought to be important even before knowing how one will apply them, at least if one wants to be early in the hunt for new results. But the main use for me personally is to understand nature and enjoy the intellectual process.

At bottom, algebraic geometry teaches one how to apply geometric intuition to algebra (as in the answer to the OP's question), and in the other direction, how to use algebra to make geometric intuition precise.

If you are interested in codes, this paper may interest you:
http://www.win.tue.nl/~ruudp/paper/14.pdf
 
Last edited:
  • #17
mathwonk said:
Is this the algorithm for fast matrix multiplication?
As far as I remember he used it to improve the ω in the runtime estimate ##O(n^ω)##. Instead of searching for single algorithms (as within a closed subset) he considered their generic (open set) behavior. But it's too long ago for me to remember the details. Mainly I remember my astonishment at how he managed to get something very concrete out of a concept which in the beginning looked purely mathematical and abstract.

In my experience, a subject that one is aware of can be found a use for, but not ones of which one is ignorant.
A bracingly optimistic point of view. E.g., I once found a funny invariant on Lie algebras, unfortunately ##\{0\}## for the semisimple ones, but I'm not bright enough to make something out of it. (Please ignore that if I have already breached the rules with that remark.)

I.e. as a general rule, one is advised to learn a bit of topics generally thought to be important even before knowing how one will apply them, at least if one wants to be early in the hunt for new results. But the main use for me personally is to understand nature and enjoy the intellectual process.
I totally agree. My question was driven by the desire to narrow the gap between those coordinate rings and physical coordinate systems. My textbook doesn't supply many examples, mostly ones of the kind ##V(x^2 - y^3)## in ##ℂ^2## or derivations as tangent spaces.
 
  • #18
I find it quite hard to extract much geometry from commutative ring theory myself, and am still not at all expert on that topic, in spite of being a functioning (retired) algebraic geometer. I got most of my mileage out of analytic and purely geometric tools, but kept trying to fathom the abstract algebra. The first book that taught algebraic geometry with a good grip on the geometry was, for me, Shafarevich, Basic Algebraic Geometry, which I still like. It's not all commutative algebra and sheaf theory, but has lots of geometric insight and examples. I recommend taking a look there.

When I teach it I start by giving my class a feel for the topology of a complex plane curve in terms of its algebraic equation. The basic fact is that the genus of the compact surface underlying a complex projective plane curve of degree d equals (1/2)(d-1)(d-2), which of course is the triangular number 1+2+3+...+(d-2).
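A trivial check of this count (the function name is mine):

```python
def genus(d):
    """Genus of a smooth complex projective plane curve of degree d."""
    return (d - 1) * (d - 2) // 2

# The same numbers arise as the triangular numbers 1 + 2 + ... + (d - 2):
for d in range(1, 8):
    assert genus(d) == sum(range(1, d - 1))
    print(d, genus(d))  # genus 0, 0, 1, 3, 6, 10, 15 for d = 1..7
```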

The reason for this is that the genus of a surface is constant under deformation, so we deform our curve into a union of d general lines, which meet in such a way as to form that number of "holes", q.e.d. I.e. each "line" is homeomorphic to a sphere, so we have a certain number of spheres meeting each other, which can be smoothed into a smooth surface of the appropriate genus, and which arose conversely from such a smooth surface by pinching off a certain number of loops in the surface. I have some notes in pdf form I could email if desired, or one could consult the beautiful and extremely careful and detailed book Algebraic Plane Curves, by Brieskorn and Knorrer. Another lovely book is Riemann Surfaces and Algebraic Curves by Rick Miranda. On a special important topic, there are notes on my webpage on the classical, and some more modern, versions of the Riemann-Roch theorem:
http://alpha.math.uga.edu/~roy/rrt.pdf

Another good place to learn to combine polynomial algebra and geometry is the theory of toric varieties, discussed in a nice book by William Fulton, and in a somewhat scarce and now rather expensive book by Oda, Convex Bodies and Algebraic Geometry. (One used book seller in Maine, who I think ought to be ashamed, is asking over $550 for a copy of this book.) So one should probably only consult this last work in a library.
 
Last edited:

1. What is an algebra tensor product?

An algebra tensor product is a construction that combines two algebras into a new algebra. Its underlying vector space is the vector space tensor product of the two algebras, and its multiplication is defined factor by factor. It is used in linear algebra and abstract algebra to build a single structure that describes how two algebras combine.

2. How is an algebra tensor product calculated?

The underlying vector space is spanned by simple tensors ##\mathbf{a}\otimes\mathbf{b}## with ##\mathbf{a}\in\mathcal{A}## and ##\mathbf{b}\in\mathcal{B}##, subject to bilinearity; a general element is a finite sum of simple tensors. The product of two simple tensors is computed factorwise, ##(\mathbf{a}_1\otimes\mathbf{b}_1)(\mathbf{a}_2\otimes\mathbf{b}_2)=\mathbf{a}_1\mathbf{a}_2\otimes\mathbf{b}_1\mathbf{b}_2##, and extended linearly.

3. What are some examples of an algebra tensor product?

One example is the Kronecker product, which realizes the tensor product of matrix algebras: the Kronecker product of an ##m\times m## matrix and an ##n\times n## matrix is an ##mn\times mn## matrix. Another example, discussed in this thread, is the polynomial algebra ##P[X,Y]##, which is the tensor product of its subalgebras ##P[X]## and ##P[Y]##.
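A minimal numpy example of the Kronecker product mentioned above (the matrices are arbitrary choices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# The Kronecker product of two 2x2 matrices is a 4x4 matrix, a concrete
# realization of A ⊗ B inside the tensor product of the matrix algebras:
print(np.kron(A, B))
# [[0 1 0 2]
#  [1 0 2 0]
#  [0 3 0 4]
#  [3 0 4 0]]
```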

4. What makes the algebra tensor product useful in scientific research?

The algebra tensor product is useful in research because it describes composite objects in terms of their parts. For example, the coordinate ring of a product of affine varieties is the tensor product of the coordinate rings of the factors, and in quantum mechanics the state space of a composite system is the tensor product of the state spaces of its subsystems.

5. Are there any limitations to using an algebra tensor product?

Yes, there are limitations. The two algebras must be defined over the same base field or ring, and the construction can be large: the dimension of ##\mathcal{A}\otimes\mathcal{B}## is the product of the dimensions of ##\mathcal{A}## and ##\mathcal{B}##, so explicit computations (for instance with Kronecker products of large matrices) quickly become expensive.
