Invariance under SU(2) in quantum mechanics

In summary, the thread discusses tensor products in quantum mechanics, specifically as applied to spinors. The lecture notes being referenced state that a certain vector in ##V^1 \otimes V^1## is invariant under SU(2), but the original poster, who is self-studying quantum mechanics, has trouble seeing why: an explicit computation fails to reproduce the initial vector. With help from other members, the correct calculation is worked out. The remaining point of confusion, that the SU(2) representation must be applied to each tensor factor before the tensor products are evaluated, is then discussed.
  • #1
gnieddu

Homework Statement


Hi,

I'm trying to self-study quantum mechanics, with a special interest in its group-theoretical aspects. I found some lecture notes by Professor Woit on the internet that I found interesting, so I decided to use them as my guide. Unfortunately, I'm now stuck at a point which suggests I'm perhaps not as fluent with a couple of concepts - namely "invariance" and "group action" - as I thought. Hopefully someone will be able to help me and suggest the best way to progress.

It all started with a discussion about the tensor product of vector spaces, and the application of this concept to spinors. As a practical example, the notes consider the tensor product of two spinor spaces, ##V^1 \otimes V^1##. Taking ##\begin{pmatrix}1 \\ 0\end{pmatrix}## and ##\begin{pmatrix}0 \\ 1\end{pmatrix}## as a basis for ##V^1##, a basis for the tensor product space is easily found by taking all possible tensor products of the two basis vectors. So far, so good.

At this point, the notes make the following statement: "one can show that the vector:

$$\frac 1 {\sqrt 2}(\begin{pmatrix}1 \\ 0\end{pmatrix} \otimes \begin{pmatrix}0 \\ 1\end{pmatrix} - \begin{pmatrix}0 \\ 1\end{pmatrix} \otimes \begin{pmatrix}1 \\ 0\end{pmatrix}) \in V^1 \otimes V^1$$

is invariant under SU(2), by computing either the action of SU(2) or of its Lie algebra su(2)".

2. The attempt at a solution

Now, in order to understand all this, I first started to explicitly compute the value of the vector which, unless I'm wrong, is:

$$\frac 1 {\sqrt 2}(\begin{pmatrix}1 \\ 0\end{pmatrix} \otimes \begin{pmatrix}0 \\ 1\end{pmatrix} - \begin{pmatrix}0 \\ 1\end{pmatrix} \otimes \begin{pmatrix}1 \\ 0\end{pmatrix}) = \frac 1 {\sqrt 2}\begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}$$
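(Here I'm writing the tensor product of two column vectors as the outer product, i.e. using the convention:)

$$v \otimes w = v\,w^t, \qquad \begin{pmatrix}1 \\ 0\end{pmatrix} \otimes \begin{pmatrix}0 \\ 1\end{pmatrix} = \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}, \qquad \begin{pmatrix}0 \\ 1\end{pmatrix} \otimes \begin{pmatrix}1 \\ 0\end{pmatrix} = \begin{pmatrix}0 & 0 \\ 1 & 0\end{pmatrix}$$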

The next step, in my mind, was to take a generic element of SU(2), i.e. the matrix:

$$\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}$$

(with ##\alpha\bar\alpha + \beta\bar\beta = 1##), and verify that multiplying it by the vector returns the vector itself, i.e. to verify that:

$$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix} = \frac 1 {\sqrt 2}\begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}$$

However, by doing the calculations I get:

$$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix} = \frac 1 {\sqrt 2}\begin{pmatrix}-\beta & \alpha \\-\bar\alpha & -\bar\beta\end{pmatrix}$$

which doesn't look like the initial vector. I also tried to see if conjugation would work, but:

$$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}{\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}}^{-1}$$

doesn't seem to work either. I'm clearly missing something here, but can't tell what.

Can anyone help me shed some light on this subject?

Thanks

Gianni
 
  • #2
The definition of a representation ##\pi## of ##SU(2)## on ##V^1\otimes V^1## with ##V^1 = \mathbb{C}^2## is

##\pi_{V^1\otimes V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \right) \left( \begin{bmatrix}1\\0\end{bmatrix}\otimes \begin{bmatrix}0\\1\end{bmatrix} \right) ##

##=\pi_{V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha}\end{bmatrix} \right) \left( \begin{bmatrix}1\\0\end{bmatrix} \right) \otimes \pi_{V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha}\end{bmatrix} \right) \left( \begin{bmatrix}0\\1\end{bmatrix} \right)##

##= \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \begin{bmatrix}1\\0 \end{bmatrix} \otimes \begin{bmatrix} \alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \begin{bmatrix} 0\\1\end{bmatrix} ##

##= \begin{bmatrix}\alpha \\ -\bar{\beta}\end{bmatrix} \otimes \begin{bmatrix}\beta \\ \bar{\alpha}\end{bmatrix}##

Now calculate the second term, put in the ##\frac{1}{\sqrt{2}}## (which finally cancels out), choose a way to arrange the coordinates of the tensors (use the same orientation all three times), and remember that ##\alpha \cdot \bar{\alpha} + \beta \cdot \bar{\beta} = 1##.
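If you want to let a computer keep the bookkeeping straight, here is a minimal sympy sketch of this computation, under one choice of orientation (the helper `tensor` and the symbol names are my own, not anything from the notes):

```python
from sympy import Matrix, conjugate, simplify, sqrt, symbols

a, b = symbols('alpha beta', complex=True)  # alpha, beta
U = Matrix([[a, b], [-conjugate(b), conjugate(a)]])  # generic SU(2) element

e1 = Matrix([1, 0])
e2 = Matrix([0, 1])

def tensor(v, w):
    # One choice of orientation: v (x) w as the rank-1 matrix v * w^T
    return v * w.T

singlet = (tensor(e1, e2) - tensor(e2, e1)) / sqrt(2)

# Apply the representation factor-wise: v (x) w  ->  (U v) (x) (U w)
transformed = (tensor(U * e1, U * e2) - tensor(U * e2, U * e1)) / sqrt(2)

# Impose the SU(2) condition alpha*conj(alpha) + beta*conj(beta) = 1
diff = (transformed - singlet).subs(a * conjugate(a) + b * conjugate(b), 1)
print(simplify(diff))  # expect Matrix([[0, 0], [0, 0]])
```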

(Reminder to myself: §9.4 p.118)
 
  • #3
Hi,

thanks for your reply, and apologies for my late response (I can only work on this in my spare time).

So, I followed your instructions, and calculated the second term, which should be:

$$\begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} \otimes \begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix}$$

At this point, their difference is:

$$\begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix} \otimes \begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} - \begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} \otimes \begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix} = \begin{pmatrix}\alpha\beta & \alpha\bar\alpha \\ -\beta\bar\beta & -\bar\alpha\bar\beta\end{pmatrix} - \begin{pmatrix}\alpha\beta & -\beta\bar\beta \\ \alpha\bar\alpha & -\bar\alpha\bar\beta\end{pmatrix} = \begin{pmatrix}0 & \alpha\bar\alpha+\beta\bar\beta \\ -\alpha\bar\alpha-\beta\bar\beta & 0\end{pmatrix} = \begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}$$

which, multiplied by the common factor ##\frac 1 {\sqrt 2}## gives the original vector. So I believe I've got through this part.

What I still find counter-intuitive is that the process you suggested, i.e. applying the SU(2) representation FIRST and then performing the calculations on the resulting tensor products, gives a different result than reversing the order of operations. It looks like the representation must be "distributed" over the factors of the vector in order to get the correct result. Perhaps you can provide some comment on this?

Another part I didn't quite understand was your point about arranging the coordinates of the tensors (the notes I'm following haven't gone into much detail about tensors, I believe, so that may be the reason...). Could you clarify?

Thanks again for your help - it's been precious so far!

Gianni
 
  • #4
gnieddu said:
What I still find counter-intuitive is that the process you suggested, i.e. applying the SU(2) representation FIRST and then performing the calculations on the resulting tensor products, gives a different result than reversing the order of operations. It looks like the representation must be "distributed" over the factors of the vector in order to get the correct result. Perhaps you can provide some comment on this?
First of all, I looked up which representation on the tensor product Woit meant, and whether he intended the identification of ##v_1\otimes v_2## with an element of ##SU(2)## or ##\mathfrak{su}(2)##, which is what you obviously tried. Fortunately, you gave me a hint about which source you might have read.

##SU(2)## operates by matrix transformations on ##\mathbb{C}^2##, as a group on itself via conjugation, and, last but not least, as a Lie group on its Lie algebra via the adjoint representation. These are all different, although the latter two are related. Since you already ruled out conjugation on ##SU(2)## and ##\mathfrak{su}(2)##, I checked the natural one. One has to be careful here, because ##\mathfrak{su}(2) \ncong \mathbb{C}^2 \otimes \mathbb{C}^2 \ncong SU(2)##, and although there are mappings between them, they must not be confused. (I wrote a short Insight article about this, if you would like to read it: https://www.physicsforums.com/insights/representations-precision-important/)
gnieddu said:
Another part I didn't quite understand was your point about arranging the coordinates of the tensors (the notes I'm following haven't gone into much detail about tensors, I believe, so that may be the reason...). Could you clarify?
As I did the calculations I ended up with ##\begin{bmatrix}0&-1\\1&0\end{bmatrix}##, and I realized that one can write
$$
\begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac&ad\\bc&bd\end{bmatrix} \textrm{ or } \begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac&bc\\ad&bd\end{bmatrix}
$$
which doesn't make a difference as long as one remains consistent.
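By the way, in the first of these two conventions, the factor-wise action of ##U \in SU(2)## on ##v \otimes w = v\,w^t## becomes ##(Uv)(Uw)^t = U\,(v w^t)\,U^t##, i.e. ##M \mapsto U M U^t## with a transpose rather than an inverse. That is why neither plain left multiplication nor conjugation worked in post #1. The invariance then follows from the identity, valid for any ##2\times 2## matrix ##A##,
$$A \begin{bmatrix}0&1\\-1&0\end{bmatrix} A^t = \det(A)\begin{bmatrix}0&1\\-1&0\end{bmatrix},$$
together with ##\det(U)=1## for ##U\in SU(2)##.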
 
  • #5
Hi gnieddu!

I think maybe the confusion here evaporates once you realize that the tensor product of two vectors is still a vector, and not a matrix. I would suggest writing
[tex]\begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac\\ad\\bc\\bd\end{bmatrix}.[/tex]
When you act on your vector
[tex]\begin{bmatrix}0\\1\\-1\\0\end{bmatrix}[/tex]
with the 4x4 matrix
[tex]\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}\otimes\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix},[/tex]
you should see the invariance clearly.
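For instance, here is a minimal numpy sketch of that check (a sketch only: the random construction of ##\alpha, \beta## is just one convenient way to satisfy ##\alpha\bar\alpha + \beta\bar\beta = 1##):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random SU(2) element: alpha, beta with |alpha|^2 + |beta|^2 = 1
z = rng.normal(size=4)
z = z / np.linalg.norm(z)
alpha, beta = z[0] + 1j * z[1], z[2] + 1j * z[3]
U = np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

# The singlet as a 4-vector, components ordered (ac, ad, bc, bd)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Act with the 4x4 representative U (x) U: the singlet comes back unchanged
print(np.allclose(np.kron(U, U) @ singlet, singlet))  # True
```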
 
  • #6
Oxvillian said:
I think maybe the confusion here evaporates once you realize that the tensor product of two vectors is still a vector, and not a matrix.
This is not quite true. A tensor product, as an element of a tensor algebra, is of course again a vector. However, to say it is not a matrix is highly misleading, for it creates more misconceptions than it resolves.
 
  • #7
Hi fresh_42!

I'm not sure which part you're objecting to - can you explain?

The tensor product in question is just the product of the two original vector spaces - objects in that space are most certainly vectors!

Now you're certainly free to arrange the components of those vectors in the rows and columns of a matrix. But this is precisely what led to the OP's original confusion, when matrix multiplication was used inappropriately. Had he or she explicitly written out the 4x4 group representatives (now those really are matrices!) and the full singlet 4-vector from my last post, I think the confusion might have been avoided.
 
  • #8
Oxvillian said:
Hi fresh_42!

I'm not sure which part you're objecting to - can you explain?

The tensor product in question is just the product of the two original vector spaces - objects in that space are most certainly vectors!
Yes, but your arrangement is far more artificial than writing a matrix. E.g. a bilinear form ##v^* \otimes w^* : (x,y) \mapsto v^*(x) \otimes w^*(y)## can perfectly well be seen as ##x^t \cdot \textrm{matrix of } (v^* \otimes w^*) \cdot y##. The matrix comes in naturally, and if you write it as a column you run into one source of errors after another. I don't say it can't be done, since in the end you can arrange your components however you like. But I definitely cannot recommend it, and I do not want to clean up the mess such a notation would produce here. In the end it is a kind of multiplication, and a linear notation isn't useful. Moreover, I've never seen such a mess before. ##v \otimes w## is a matrix of rank ##1##, and an arbitrary matrix is a sum of those. A notation in a single column with strange rules to fill the components is nonsense. Try it: let's keep vectors as columns. Then ##v \otimes w = v \cdot w^t## is the ordinary matrix multiplication, without any additional definitions of how to construct the entries.
Now you're certainly free to arrange the components of those vectors in the rows and columns of a matrix. But this is precisely what led to the original OP's confusion, when matrix multiplication was used inappropriately. Had he or she written out explicitly the 4x4 group representatives (now those really are matrices!) and the full singlet 4-vector from my last post, I think the confusion might have been avoided.
This isn't true either. The confusion comes from three different representation spaces, and from identifying ##\begin{bmatrix}0\\1\end{bmatrix} \otimes \begin{bmatrix}1\\0\end{bmatrix}-\begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}0\\1\end{bmatrix}## with an element of ##SU(2)## or ##\mathfrak{su}(2)##, which it incidentally is. But it is clearly stated that the element is considered a vector of ##V^1\otimes V^1## and not an element of ##SU(2)## or ##\mathfrak{su}(2)##. (O.k., I admit I found ##V^1=\mathbb{C}^2## in Woit's book, which he didn't link to.) Once it is clear what the representation space is, there is no second interpretation of the stated invariance. How the coordinates are arranged wasn't the issue.
 
  • #9
fresh_42 said:
If you write it as a column you run into one source of errors after another. I don't say it can't be done, since in the end you can arrange your components however you like. But I definitely cannot recommend it, and I do not want to clean up the mess such a notation would produce here. In the end it is a kind of multiplication, and a linear notation isn't useful. Moreover, I've never seen such a mess before.

Ha, it's really not that bad! See for instance the chapter on addition of angular momentum in Shankar's book, where the addition of two spins is done like this. It's also kind of necessary to think this way when we block-diagonalize the 4x4 matrix and extract the irreducible representations.

fresh_42 said:
##v \otimes w## is a matrix of rank ##1##, and an arbitrary matrix is a sum of those. A notation in a single column with strange rules to fill the components is nonsense. Try it: let's keep vectors as columns. Then ##v \otimes w = v \cdot w^t## is the ordinary matrix multiplication, without any additional definitions of how to construct the entries.

The usual direct product rules are
[tex]\begin{pmatrix}a\\b\end{pmatrix}\otimes \begin{pmatrix}x & y\end{pmatrix} = \begin{pmatrix}ax & ay\\bx & by\end{pmatrix}[/tex]
[tex]\begin{pmatrix}a\\b\end{pmatrix}\otimes \begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}ax\\ay\\bx\\by\end{pmatrix}[/tex]
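(If it helps, numpy reproduces both rules directly; a small illustration with made-up numbers:)

```python
import numpy as np

v = np.array([1, 2])   # (a, b)
w = np.array([3, 4])   # (x, y)

print(np.outer(v, w))  # column (x) row:    [[ax, ay], [bx, by]]
print(np.kron(v, w))   # column (x) column: [ax, ay, bx, by]
```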

fresh_42 said:
This isn't true either. The confusion comes from three different representation spaces, and from identifying ##\begin{bmatrix}0\\1\end{bmatrix} \otimes \begin{bmatrix}1\\0\end{bmatrix}-\begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}0\\1\end{bmatrix}## with an element of ##SU(2)## or ##\mathfrak{su}(2)##, which it incidentally is. But it is clearly stated that the element is considered a vector of ##V^1\otimes V^1## and not an element of ##SU(2)## or ##\mathfrak{su}(2)##. Once it is clear what the representation space is, there is no second interpretation of the stated invariance. How the coordinates are arranged wasn't the issue.

I think this confusion might be just yours though :smile:

The object you quote represents the quantum state of two spin 1/2 objects - that's why it's clear that it must be a vector in the representation space.

It's interesting to read your point of view! Is your background maybe more pure mathematical than physical?
 
  • #10
Oxvillian said:
The object you quote represents the quantum state of two spin 1/2 objects - that's why it's clear that it must be a vector in the representation space.
And in which: ##GL(\mathbb{C}^2)## or ##GL(\mathfrak{su}(2))##? Both are valid representations of ##SU(2)##. And ##GL(SU(2))## is a third. And these are only the "natural" ones. It's pretty sloppy to talk of representations or generators without actually saying which one is meant. I know the hand-waving sloppiness of physicists here, but this isn't a good way to learn it.
 
  • #11
[itex]\mathbb{C}^2[/itex], of course! That's where electron spins live.

I totally plead guilty to being a sloppy hand waving physicist.

:oops:
 
  • #12
fresh_42 & Oxvillian,

many thanks for your discussion, which helps me to gain different angles and views on this subject!

I've also read fresh_42's Insight article, which I'm finding very useful. If, as you say, SU(n) representations were always denoted by a triple ##(SU(n),V,\phi)##, that would certainly make life easier for learners like me (although I guess it would get tedious for those already well versed in the subject matter).

I think I'll spend some more time studying and reviewing this material, and then proceed with Prof. Woit's notes.

Thanks again

Gianni
 

1. What does "invariance under SU(2)" mean in quantum mechanics?

Invariance under SU(2) in quantum mechanics refers to the property of a system remaining unchanged under transformations belonging to the SU(2) group. This group consists of the ##2 \times 2## unitary matrices with determinant 1 and is used throughout quantum physics.

2. Why is invariance under SU(2) important in quantum mechanics?

Invariance under SU(2) is important in quantum mechanics because it allows us to describe and understand the behavior of particles and systems under rotations in three-dimensional space. This is essential in many physical phenomena, such as the behavior of atoms and subatomic particles.

3. How is invariance under SU(2) related to the concept of angular momentum?

In quantum mechanics, angular momentum is described by operators that generate SU(2) transformations, and a system invariant under these transformations looks the same in every orientation. This means the angular momentum of such a system remains constant, regardless of the orientation of the system in space. In other words, the laws of physics governing angular momentum are the same in all directions.

4. Can you give an example of a system that exhibits invariance under SU(2) in quantum mechanics?

A common example of a system that exhibits invariance under SU(2) is an electron in a hydrogen atom. The laws of physics governing the motion of the electron and its angular momentum remain the same, regardless of the direction in which the atom is oriented in space.

5. What are the applications of invariance under SU(2) in quantum mechanics?

Invariance under SU(2) is used extensively in quantum mechanics to understand and predict the behavior of particles and systems in various physical phenomena, such as atomic and molecular structure, nuclear physics, and quantum computing. It also plays a crucial role in the development of quantum field theory and the Standard Model of particle physics.
