Understanding tensor product and direct sum

In summary, the conversation discusses the concepts of tensor product and direct sum and how they apply to vectors and matrices. An example of forming a vector from basis elements is given, and the expression for total angular momentum in quantum mechanics in terms of tensor products is discussed. The conversation also addresses the two-particle case and how it relates to the tensor product of Hilbert spaces. Further explanations and resources are suggested for a better understanding of these concepts.
  • #1
dwd40physics
TL;DR Summary
Understanding tensor product and direct sum of vector spaces in general and applied to total angular momentum.
Hi, I'm struggling to understand the ideas of tensor product and direct sum beyond the very basics. I know that the direct sum of 2 vectors basically stacks one on top of the other, but I don't understand more than this. For the tensor product, I know that for 2 matrices A and B it essentially means each element of A gets multiplied by the whole matrix B.

What I don't understand, in more formal terms, is what the direct sum and tensor product do for: 1. vectors, 2. matrices.

For example, the direct sum of e1 = (1,0) and f2 = (0,1,0) would be (1,0|0,1,0).
How would we form (1,0,0,0,0) from the basis elements e1 and f2? How does this make sense?

In QM we covered the idea of total angular momentum, where we saw that the total angular momentum of two angular momenta ##J_1## and ##J_2## is ##J_T = 1\otimes J_2 + J_1\otimes 1##, where ##\otimes## denotes the tensor product.

How do I make sense of this relation?
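As a concrete check of the "each element of A multiplies the whole matrix B" picture, here is a minimal sketch using NumPy's Kronecker product (the matrices are made up purely for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# Kronecker (tensor) product: each entry A[i, j] is replaced by the block A[i, j] * B
print(np.kron(A, B))   # a 4x4 matrix built from the 2x2 blocks 1*B, 2*B, 3*B, 4*B
```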
 
  • #2
dwd40physics said:
For example, the direct sum of e1 = (1,0) and f2 = (0,1,0) would be (1,0|0,1,0).
How would we form (1,0,0,0,0) from the basis elements e1 and f2? How does this make sense?
$$(1,0,0,0,0)= 1\times e_1\oplus 0 \times f_2= 1\times (1,0)\oplus 0 \times(0,1,0)$$
where ##\oplus## is the direct sum operator and we use ##\times## here to denote left-multiplication of the vector by a scalar.
dwd40physics said:
In QM we covered the idea of total angular momentum, where we saw that the total angular momentum of two angular momenta ##J_1## and ##J_2## is ##J_T = 1\otimes J_2 + J_1\otimes 1##, where ##\otimes## denotes the tensor product.

How do I make sense of this relation?
For this one you'll need to provide more context and set out the formulas more precisely - perhaps with a picture from your text. We can't work out from this what was intended.
 
  • #3
Do you understand why going from ##\phi(x)## for a single particle to ##\phi(x_1,x_2)## for two particles (with ##x,x_1,x_2\in\mathbb R^3##) translates to a tensor product of the Hilbert space with itself (for two particles)? If yes, then a simple way to translate this understanding to spin is to replace ##x,x_1,x_2\in\mathbb R^3## by ##\sigma,\sigma_1,\sigma_2\in\{\uparrow,\downarrow\}##. So ##\phi(\sigma)## are simply the two complex numbers ##\phi(\uparrow)## and ##\phi(\downarrow)##, and ##\phi(\sigma_1,\sigma_2)## then naturally gives you four complex numbers. And if you have ##n##-spin qubits, then by the same logic you get ##2^n## complex numbers.
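A minimal numerical sketch of this counting, assuming NumPy is available (the particular amplitudes are made up for illustration):

```python
import numpy as np

# Two single-spin states: phi(up), phi(down) stored as 2-component complex vectors
phi1 = np.array([1.0, 0.0], dtype=complex)                     # particle 1: spin up
phi2 = np.array([1/np.sqrt(2), 1/np.sqrt(2)], dtype=complex)   # particle 2: superposition

# Product state phi(sigma1, sigma2) lives in the tensor-product space:
# 2 x 2 = 4 complex amplitudes, ordered (up,up), (up,down), (down,up), (down,down)
phi12 = np.kron(phi1, phi2)
print(phi12.shape)   # (4,)

# For n spins the amplitude count grows as 2**n
n = 5
state = phi1
for _ in range(n - 1):
    state = np.kron(state, phi1)
print(state.shape)   # (32,) = 2**5
```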
 
  • #4
I do not understand the two-particle case. Do you have some recommended reading where I can go to find out more? Thanks
 
  • #5
dwd40physics said:
I do not understand the two-particle case. Do you have some recommended reading where I can go to find out more? Thanks
Have you tried these:

https://en.wikipedia.org/wiki/Tensor_product

https://www.math3ma.com/blog/the-tensor-product-demystified

In terms of the physics: if we have a system of two spin ##\frac 1 2## particles, then we find (experimentally, say) that the following measurement outcomes are possible:

##s = 0, \ S_z = 0##
##s = 1, \ S_z = 0 \ \text{or} \ \pm 1##
where ##s## is the total spin quantum number (so the ##S^2## eigenvalues are ##s(s+1)\hbar^2##, i.e. ##0## and ##2\hbar^2##).

And this is what you get if you use the tensor product of two spin 1/2 Hilbert spaces and calculate the eigenvalues etc.
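As a minimal sketch of that calculation (assuming NumPy, with ##\hbar = 1##): build the two single-particle spin operators on the product space with Kronecker products, then diagonalize ##S^2## and ##S_z##.

```python
import numpy as np

# Spin-1/2 operators S_x, S_y, S_z (hbar = 1): half the Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

# Operators for particle 1 and particle 2 acting on the 4-dimensional product space
S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

# Total spin components and S^2 = S.S
S = [a + b for a, b in zip(S1, S2)]
S_squared = sum(s @ s for s in S)

print(np.round(np.linalg.eigvalsh(S_squared), 10))  # [0, 2, 2, 2] = s(s+1) for s = 0, 1
print(np.round(np.linalg.eigvalsh(S[2]), 10))       # [-1, 0, 0, 1] = allowed S_z values
```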
 
  • #6
dwd40physics said:
I do not understand the two-particle case. Do you have some recommended reading where I can go to find out more? Thanks
If you tell me what exactly you don't understand about the two particle case, then maybe I could suggest something specific.

Are you familiar with the (quantum) 1D harmonic oscillator? Can you see how a 2D harmonic oscillator corresponds to two 1D harmonic oscillators? (And a 3D one to three 1D ones...?)

Or suppose you have two events, whose individual probabilities would be given by ##p_1(e_1)## and ##p_2(e_2)##. Do you understand why their joint probability is given by a function ##p_{12}(e_1,e_2)##? (Of course, if they are independent, then you have simply ##p_{12}(e_1,e_2)=p_1(e_1)p_2(e_2)##. But in general ##p_1(e_1)=\sum_{e_2}p_{12}(e_1,e_2)## and ##p_2(e_2)=\sum_{e_1}p_{12}(e_1,e_2)## is all you can say.)
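A small numerical illustration of that marginalization, assuming NumPy (the joint probabilities below are made up):

```python
import numpy as np

# Joint distribution p12(e1, e2) for two binary events (rows index e1, columns index e2)
p12 = np.array([[0.10, 0.30],
                [0.40, 0.20]])

# Marginals obtained by summing out the other event, as in the formulas above
p1 = p12.sum(axis=1)   # p1(e1) = sum over e2
p2 = p12.sum(axis=0)   # p2(e2) = sum over e1
print(p1, p2)

# If the events were independent, p12 would equal the outer product of the marginals
print(np.allclose(p12, np.outer(p1, p2)))  # False here, so these events are correlated
```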
 
  • #7
andrewkirk said:
$$(1,0,0,0,0)= 1\times e_2\oplus 0 \times f_2= 1\times (1,0)\oplus 0 \times(0,1,0)$$
where ##\oplus## is the direct sum operator and we use ##\times## here to denote left-multiplication of the vector by a scalar.

For this one you'll need to provide more context and set out the formulas more precisely - perhaps with a picture from your text. We can't work out from this what was intended.
Here's a screenshot - I don't understand going from (3.322) to (3.323). (Sakurai)
[Attached screenshot: Sakurai, equations (3.322)–(3.323)]
 
  • #8
gentzen said:
If you tell me what exactly you don't understand about the two particle case, then maybe I could suggest something specific.

Are you familiar with the (quantum) 1D harmonic oscillator? Can you see how a 2D harmonic oscillator corresponds to two 1D harmonic oscillators? (And a 3D one to three 1D ones...?)

Or suppose you have two events, whose individual probabilities would be given by ##p_1(e_1)## and ##p_2(e_2)##. Do you understand why their joint probability is given by a function ##p_{12}(e_1,e_2)##? (Of course, if they are independent, then you have simply ##p_{12}(e_1,e_2)=p_1(e_1)p_2(e_2)##. But in general ##p_1(e_1)=\sum_{e_2}p_{12}(e_1,e_2)## and ##p_2(e_2)=\sum_{e_1}p_{12}(e_1,e_2)## is all you can say.)
I have come across the basics of two-particle systems, e.g. the parity operator exchanging particles 1 and 2, but not in much more depth than that, and not the tensor product.
 
  • #9
andrewkirk said:
$$(1,0,0,0,0)= 1\times e_1\oplus 0 \times f_2= 1\times (1,0)\oplus 0 \times(0,1,0)$$
where ##\oplus## is the direct sum operator and we use ##\times## here to denote left-multiplication of the vector by a scalar.
I don't understand how you get this. What happens to the bases of R^2 and R^3 when you take the direct sum? How do we construct the basis of R^5 from R^3 and R^2?
 
  • #10
dwd40physics said:
Here's a screenshot - I don't understand going from (3.322) to (3.323). (Sakurai)
View attachment 321987
That's another example of what I tried to communicate above. We know that there is some sort of "addition" of angular momentum, but the experimental results (if nothing else) lead us to use the tensor product of Hilbert spaces, kets and operators, rather than the simple direct sum.

Sakurai doesn't give a justification in that particular extract you quote, but he may do elsewhere.
 
  • #11
dwd40physics said:
Here's a screenshot - I don't understand going from (3.322) to (3.323). (Sakurai)
With my suggestion, you would read ##\mathbf{J = L + S}## as ##\mathbf{J}(\sigma_1,\sigma_2,\sigma'_1,\sigma'_2) = (\mathbf{L + S})(\sigma_1,\sigma_2,\sigma'_1,\sigma'_2)## and then translate ##\mathbf{J} = \mathbf{L}\otimes 1 + 1 \otimes \mathbf{S}## as ##\mathbf{J}(\sigma_1,\sigma_2,\sigma'_1,\sigma'_2) = \mathbf{L}(\sigma_1,\sigma'_1)\cdot \delta_{\sigma_2,\sigma'_2} +\delta_{\sigma_1,\sigma'_1} \cdot \mathbf{S}(\sigma_2,\sigma'_2)##. (##\delta## here is a simple Kronecker delta.) If you want, you can even "define" ##1(\sigma_1,\sigma'_1):=\delta_{\sigma_1,\sigma'_1}## and ##1(\sigma_2,\sigma'_2):=\delta_{\sigma_2,\sigma'_2}##, and then write it as ##\mathbf{J}(\sigma_1,\sigma_2,\sigma'_1,\sigma'_2) = \mathbf{L}(\sigma_1,\sigma'_1)\cdot 1(\sigma_2,\sigma'_2) + 1(\sigma_1,\sigma'_1) \cdot \mathbf{S}(\sigma_2,\sigma'_2)##.

Of course, you also have to specify how such an operator (~"matrix") transforms a wavefunction (~"vector"). Say for example as ##(\mathbf{J}\phi)(\sigma_1,\sigma_2):=\sum_{\sigma'_1,\sigma'_2}\mathbf{J}(\sigma_1,\sigma_2,\sigma'_1,\sigma'_2)\phi(\sigma'_1,\sigma'_2)##.
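A minimal sketch (assuming NumPy) that the Kronecker-product form ##\mathbf{L}\otimes 1 + 1\otimes\mathbf{S}## and the index form with Kronecker deltas are the same bookkeeping; the particular choice of ##\mathbf{L}## and ##\mathbf{S}## below is just for illustration:

```python
import numpy as np

# Illustrative 2x2 operators (here both chosen as S_z-type matrices, hbar = 1);
# any 2x2 matrices would do for checking the index bookkeeping.
L = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
S = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Operator form: J = L (x) 1 + 1 (x) S, a 4x4 matrix on the product space
J = np.kron(L, I2) + np.kron(I2, S)

# Index form: J(s1, s2, s1', s2') = L(s1, s1') d(s2, s2') + d(s1, s1') S(s2, s2')
J_idx = np.zeros((2, 2, 2, 2), dtype=complex)
for s1 in range(2):
    for s2 in range(2):
        for t1 in range(2):
            for t2 in range(2):
                J_idx[s1, s2, t1, t2] = L[s1, t1] * (s2 == t2) + (s1 == t1) * S[s2, t2]

# The 4-index array is just the 4x4 matrix with the pair indices flattened
print(np.allclose(J, J_idx.reshape(4, 4)))  # True
```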
 
  • #12
dwd40physics said:
I don't understand how you get this. What happens to the bases of R^2 and R^3 when you take the direct sum? How do we construct the basis of R^5 from R^3 and R^2?
For any two vector spaces ##V_a, V_b## with corresponding bases ##B_a,B_b## and zero vectors ##\vec 0_a, \vec 0_b##, the set of vectors
$$\{\vec u \oplus \vec 0_b\ |\ \vec u\in B_a\} \cup
\{\vec 0_a \oplus \vec v \ |\ \vec v\in B_b\} $$forms a basis for ##V_a\oplus V_b##.

In our case, with ##V_a=\mathbb R^3, V_b=\mathbb R^2##, we choose bases ##B_a = \{(1,0,0), (0,1,0), (0,0,1)\}, B_b=\{(1,0), (0,1)\}##.
Then a basis for ##\mathbb R^5 = \mathbb R^3\oplus\mathbb R^2## is
$$\{b\oplus (0,0)\ |\ b\in B_a\}\cup
\{(0,0,0)\oplus b \ |\ b\in B_b\}$$which has elements
$$(1,0,0,0,0), (0,1,0,0,0), (0,0,1,0,0), (0,0,0,1,0), (0,0,0,0,1)$$with the first three coming from the first of the two sets in the union and the last two coming from the second of the two sets.
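The same construction in a few lines of NumPy (a sketch, just to make the padding with zero vectors explicit):

```python
import numpy as np

B_a = np.eye(3)   # basis of V_a = R^3 (rows are the basis vectors)
B_b = np.eye(2)   # basis of V_b = R^2

# Direct sum: pad each basis vector with the zero vector of the other space
basis_R5 = [np.concatenate([u, np.zeros(2)]) for u in B_a] + \
           [np.concatenate([np.zeros(3), v]) for v in B_b]

for b in basis_R5:
    print(b)
# -> the five standard basis vectors of R^5, matching the list above
```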
 

1. What is the difference between tensor product and direct sum?

The tensor product, denoted by the symbol ⊗, combines two vector spaces V and W into a new space V ⊗ W whose dimension is dim(V) · dim(W); applied to two vectors it corresponds to their outer product. The direct sum, denoted by the symbol ⊕, combines V and W into a space V ⊕ W whose dimension is dim(V) + dim(W); applied to two vectors it corresponds to stacking (concatenating) their components.

2. How are tensor product and direct sum related?

Both are ways of combining vector spaces, but they produce spaces of different size and structure. The tensor product multiplies dimensions, dim(V ⊗ W) = dim(V) · dim(W), while the direct sum adds them, dim(V ⊕ W) = dim(V) + dim(W).

3. What are the applications of tensor product and direct sum in science?

The tensor product and direct sum are used in many fields of science, such as physics, engineering, and computer science. In physics, they are used to describe composite quantum systems and their state spaces. In engineering, they are used to model multi-component systems. In computer science, they appear in machine learning and data analysis.

4. How do you calculate the tensor product and direct sum of two vectors?

The tensor product of two vectors is computed as their outer product: every component of the first vector multiplies every component of the second, so an m-component and an n-component vector give m · n components. The direct sum of two vectors is formed by concatenating them, giving m + n components. In both cases the result lives in a space of larger dimension than either original space.
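A short NumPy sketch of both operations on coordinate vectors (the numbers are arbitrary):

```python
import numpy as np

u = np.array([1, 2])        # vector in R^2
v = np.array([3, 4, 5])     # vector in R^3

print(np.kron(u, v))              # tensor (Kronecker) product: 2 * 3 = 6 components
print(np.concatenate([u, v]))     # direct sum: 2 + 3 = 5 components
```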

5. Can tensor product and direct sum be applied to more than two vector spaces?

Yes, both the tensor product and the direct sum can be applied to any number of vector spaces. For example, the tensor product of three vector spaces is written V1 ⊗ V2 ⊗ V3, and the direct sum is written V1 ⊕ V2 ⊕ V3. The dimension of the result is the product of the individual dimensions in the tensor-product case and the sum of the individual dimensions in the direct-sum case.
