
Invariance under SU(2) in quantum mechanics

  1. May 3, 2017 #1
    1. The problem statement, all variables and given/known data
    Hi,

    I'm trying to self-study quantum mechanics, with a special interest in the group-theoretical aspects of it. I found on the internet some lecture notes from Professor Woit that I found interesting, so I decided to use them as my guide. Unfortunately I'm now stuck at a point which suggests I'm perhaps not as fluent with a couple of concepts - namely "invariance" and "group action" - as I thought. Hopefully someone will be able to help me and suggest the best way to progress.

    It all started with a discussion about the tensor product of vector spaces, and the application of this concept to spinors. In order to give a practical example, the notes consider the tensor product ##V^1 \otimes V^1## of two spinor spaces. Taking ##\begin{pmatrix}1 \\ 0\end{pmatrix}## and ##\begin{pmatrix}0 \\ 1\end{pmatrix}## as a basis for ##V^1##, a basis for the tensor product space is easily found by taking all possible tensor products of the two basis vectors. So far, so good.
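    To make this concrete for myself, I also listed the four basis vectors with a quick NumPy sketch (my own check, not from the notes, using np.kron as the tensor product in the column convention):

    [code]
    import numpy as np

    e0, e1 = np.array([1, 0]), np.array([0, 1])

    # all four tensor products of the basis vectors of V^1,
    # written as 4-component coordinate vectors via the Kronecker product
    for u in (e0, e1):
        for w in (e0, e1):
            print(np.kron(u, w))
    # -> [1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1]
    [/code]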

    At this point, the notes make the following statement: "one can show that the vector:

    $$\frac 1 {\sqrt 2}(\begin{pmatrix}1 \\ 0\end{pmatrix} \otimes \begin{pmatrix}0 \\ 1\end{pmatrix} - \begin{pmatrix}0 \\ 1\end{pmatrix} \otimes \begin{pmatrix}1 \\ 0\end{pmatrix}) \in V^1 \otimes V^1$$

    is invariant under SU(2), by computing either the action of SU(2) or of its Lie algebra su(2)".

    2. The attempt at a solution

    Now, in order to understand all this, I first started to explicitly compute the value of the vector which, unless I'm wrong, is:

    $$\frac 1 {\sqrt 2}(\begin{pmatrix}1 \\ 0\end{pmatrix} \otimes \begin{pmatrix}0 \\ 1\end{pmatrix} - \begin{pmatrix}0 \\ 1\end{pmatrix} \otimes \begin{pmatrix}1 \\ 0\end{pmatrix}) = \frac 1 {\sqrt 2}\begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}$$

    The next step, in my mind, was to take a generic element of SU(2), i.e. the matrix:

    $$\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}$$

    (with ##\alpha\bar\alpha + \beta\bar\beta = 1##), and verify that multiplying it with the vector yields the vector itself, i.e. that:

    $$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix} = \frac 1 {\sqrt 2}\begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}$$

    However, by doing the calculations I get:

    $$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix} = \frac 1 {\sqrt 2}\begin{pmatrix}-\beta & \alpha \\-\bar\alpha & -\bar\beta\end{pmatrix}$$

    which doesn't look like the initial vector. I also tried to see if conjugation would work, but:

    $$\frac 1 {\sqrt 2}\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\-1 & 0\end{pmatrix}{\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}}^{-1}$$

    doesn't seem to work either. I'm clearly missing something here, but can't tell what.

    Can anyone help me shed some light on this subject?

    Thanks

    Gianni
     
  3. May 3, 2017 #2

    fresh_42

    Staff: Mentor

    The definition of a representation ##\pi## of ##SU(2)## on ##V^1\otimes V^1## with ##V^1 = \mathbb{C}^2## is

    ##\pi_{V^1\otimes V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \right) \left( \begin{bmatrix}1\\0\end{bmatrix}\otimes \begin{bmatrix}0\\1\end{bmatrix} \right) ##

    ##=\pi_{V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha}\end{bmatrix} \right) \left( \begin{bmatrix}1\\0\end{bmatrix} \right) \otimes \pi_{V^1} \left( \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha}\end{bmatrix} \right) \left( \begin{bmatrix}0\\1\end{bmatrix} \right)##

    ##= \begin{bmatrix}\alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \begin{bmatrix}1\\0 \end{bmatrix} \otimes \begin{bmatrix} \alpha & \beta \\ -\bar{\beta}&\bar{\alpha} \end{bmatrix} \begin{bmatrix} 0\\1\end{bmatrix} ##

    ##= \begin{bmatrix}\alpha \\ -\bar{\beta}\end{bmatrix} \otimes \begin{bmatrix}\beta \\ \bar{\alpha}\end{bmatrix}##

    Now calculate the second term, put in the ##\frac{1}{\sqrt{2}}##, choose a way to arrange the coordinates for the tensors (don't confuse the orientation among the three times it has to be done), and remember that ##\alpha \cdot \bar{\alpha} + \beta \cdot \bar{\beta} = 1##; the cross terms finally cancel out.
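    If you want a numerical sanity check of this recipe, here is a minimal Python sketch of my own (not from Woit's notes), with NumPy's Kronecker product playing the role of the tensor product in the column convention:

    [code]
    import numpy as np

    # a (pseudo-)random element of SU(2): any unit pair (alpha, beta) in C^2
    rng = np.random.default_rng(0)
    z = rng.normal(size=2) + 1j * rng.normal(size=2)
    alpha, beta = z / np.linalg.norm(z)
    g = np.array([[alpha, beta], [-beta.conjugate(), alpha.conjugate()]])

    e0, e1 = np.array([1, 0]), np.array([0, 1])

    # act with g on each tensor factor separately, then combine
    v  = (np.kron(e0, e1) - np.kron(e1, e0)) / np.sqrt(2)
    gv = (np.kron(g @ e0, g @ e1) - np.kron(g @ e1, g @ e0)) / np.sqrt(2)

    print(np.allclose(gv, v))  # True: the vector is invariant under SU(2)
    [/code]

    Any unit pair ##(\alpha, \beta)## works; the ##\frac{1}{\sqrt{2}}## is just a common factor and plays no role in the invariance.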

    (Reminder to myself: §9.4 p.118)
     
  4. May 5, 2017 #3
    Hi,

    thanks for your reply, and apologies for my late response (I can only work on this in my spare time).

    So, I followed your instructions, and calculated the second term, which should be:

    $$\begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} \otimes \begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix}$$

    At this point, their difference is:

    $$\begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix} \otimes \begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} - \begin{pmatrix}\beta \\ \bar\alpha\end{pmatrix} \otimes \begin{pmatrix}\alpha \\ -\bar\beta\end{pmatrix} = \begin{pmatrix}\alpha\beta & \alpha\bar\alpha \\ -\beta\bar\beta & -\bar\alpha\bar\beta\end{pmatrix} - \begin{pmatrix}\alpha\beta & -\beta\bar\beta \\ \alpha\bar\alpha & -\bar\alpha\bar\beta\end{pmatrix} = \begin{pmatrix}0 & \alpha\bar\alpha+\beta\bar\beta \\ -\alpha\bar\alpha-\beta\bar\beta & 0\end{pmatrix} = \begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}$$

    which, multiplied by the common factor ##\frac 1 {\sqrt 2}## gives the original vector. So I believe I've got through this part.
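    As an extra sanity check, I also redid this symbolically with a small SymPy sketch of my own (treating ##\bar\alpha, \bar\beta## as independent symbols ac, bc, and imposing ##\alpha\bar\alpha + \beta\bar\beta = 1## only at the end):

    [code]
    import sympy as sp

    a, b, ac, bc = sp.symbols('a b ac bc')  # a, b and stand-ins for their conjugates

    u = sp.Matrix([a, -bc])   # g.e0
    w = sp.Matrix([b, ac])    # g.e1

    # matrix convention: v (x) w  <->  v * w^t
    D = u * w.T - w * u.T
    J = sp.Matrix([[0, 1], [-1, 0]])

    # D equals (a*ac + b*bc) * J, which is J once a*ac + b*bc = 1
    print(sp.simplify(D - (a*ac + b*bc) * J))  # zero matrix
    [/code]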

    What I still find counter-intuitive is that the process you suggested, i.e. applying the SU(2) representation FIRST and then performing the calculations on the resulting tensor products, gives a different result from reversing the order of operations. It looks like the representation must be "distributed" through the parts of the vector in order to get the correct result. Perhaps you can provide some comment on this?

    Another part I didn't quite understand was your point about arranging the coordinates of the tensors (the notes I'm following have not gone into much detail about tensors, I believe, so that may be a reason...). Could you clarify?

    Thanks again for your help - it's been precious so far!

    Gianni
     
  5. May 5, 2017 #4

    fresh_42

    Staff: Mentor

    First of all, I looked up which representation Woit meant on the tensor product, and whether he meant the identification of ##v_1\otimes v_2## with an element of ##SU(2)## or ##\mathfrak{su}(2)##, which you obviously tried. Fortunately, you gave me a hint about which source you might have read.

    ##SU(2)## operates as matrix transformations on ##\mathbb{C}^2##, as a group via conjugation on itself, and, last but not least, as a Lie group on its Lie algebra via the adjoint representation. They are all different, although the latter two are related. Since you already ruled out conjugation on ##SU(2)## and ##\mathfrak{su}(2)##, I checked the natural one. One has to be careful here, because ##\mathfrak{su}(2) \ncong \mathbb{C}^2 \otimes \mathbb{C}^2 \ncong SU(2)##, and although there are mappings between them, they must not be confused. (I wrote a short Insight article about this, if you'd like to read it: https://www.physicsforums.com/insights/representations-precision-important/)
    As I did the calculations I ended up with ##\begin{bmatrix}0&-1\\1&0\end{bmatrix}##, and I realized that one can write
    $$
    \begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac&ad\\bc&bd\end{bmatrix} \textrm{ or } \begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac&bc\\ad&bd\end{bmatrix}
    $$
    which doesn't make a difference as long as one remains consistent.
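    Incidentally, the matrix picture also shows why the plain multiplication from post #1 had to fail: if ##v\otimes w## is identified with ##v\cdot w^t##, then ##g## acts on the tensor product as ##M \mapsto g\,M\,g^t##, not as ##M \mapsto g\,M## or ##M \mapsto g\,M\,g^{-1}##. A small NumPy sketch of my own:

    [code]
    import numpy as np

    # an element of SU(2) (any unit pair (alpha, beta) works)
    alpha, beta = 0.6 + 0.0j, 0.8j
    g = np.array([[alpha, beta], [-beta.conjugate(), alpha.conjugate()]])

    # the singlet in the matrix convention  v (x) w  <->  np.outer(v, w)
    M = np.array([[0, 1], [-1, 0]]) / np.sqrt(2)

    print(np.allclose(g @ M @ g.T, M))  # True:  g M g^t = det(g) M = M
    print(np.allclose(g @ M, M))        # False: plain multiplication fails
    [/code]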
     
  6. May 5, 2017 #5
    Hi gnieddu!

    I think maybe the confusion here evaporates once you realize that the tensor product of two vectors is still a vector, and not a matrix. I would suggest writing
    [tex]\begin{bmatrix}a\\b\end{bmatrix}\otimes \begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}ac\\ad\\bc\\bd\end{bmatrix}.[/tex]
    When you act on your vector
    [tex]\begin{bmatrix}0\\1\\-1\\0\end{bmatrix}[/tex]
    with the 4x4 matrix
    [tex]\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix}\otimes\begin{pmatrix}\alpha & \beta \\ -\bar \beta & \bar \alpha\end{pmatrix},[/tex]
    you should see the invariance clearly.
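    For instance, a quick NumPy check (my own sketch; any unit pair ##(\alpha, \beta)## will do):

    [code]
    import numpy as np

    alpha, beta = (1 + 2j) / np.sqrt(14), 3j / np.sqrt(14)  # |alpha|^2 + |beta|^2 = 1
    g = np.array([[alpha, beta], [-beta.conjugate(), alpha.conjugate()]])

    G = np.kron(g, g)                         # the 4x4 representative on the product space
    v = np.array([0, 1, -1, 0]) / np.sqrt(2)  # the singlet as a 4-vector

    print(np.allclose(G @ v, v))              # True: manifestly invariant
    [/code]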
     
  7. May 5, 2017 #6

    fresh_42

    Staff: Mentor

    This is not true. A tensor product, as an element of a tensor algebra, is of course a vector again. However, to say it is not a matrix is highly misleading, for it creates more misconceptions than it resolves.
     
  8. May 5, 2017 #7
    Hi fresh_42!

    I'm not sure which part you're objecting to - can you explain?

    The tensor product in question is just the product of the two original vector spaces - objects in that space are most certainly vectors!

    Now you're certainly free to arrange the components of those vectors in the rows and columns of a matrix. But this is precisely what led to the OP's confusion, when matrix multiplication was used inappropriately. Had he or she explicitly written out the 4x4 group representatives (now those really are matrices!) and the full singlet 4-vector from my last post, I think the confusion might have been avoided.
     
  9. May 5, 2017 #8

    fresh_42

    Staff: Mentor

    Yes, but your arrangement is far more artificial than writing a matrix. E.g. a bilinear form ##v^* \otimes w^* : (x,y) \mapsto v^*(x) \cdot w^*(y)## can be perfectly seen as ##x^t \cdot \textrm{ matrix of } (v^* \otimes w^*) \cdot y##. It comes in naturally, and if you write it as a column you run into one source of errors after the other. I don't say it can't be done, as in the end you can arrange your components however you like. But I definitely cannot recommend it. And I do not want to clean up the mess such a notation would produce here. In the end it is a kind of multiplication, and a linear notation isn't useful. Moreover, I've never seen such a mess before. ##v \otimes w## is a matrix of rank ##1##, and an arbitrary matrix is a sum of those. A notation in a single column with strange rules to fill the components is nonsense. Try it: let's keep vectors as columns. Then ##v \otimes w = v \cdot w^t## is the ordinary matrix multiplication without any additional definitions of how to construct the entries.
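    To illustrate the point (a small sketch of my own, with arbitrary numbers):

    [code]
    import numpy as np

    v = np.array([1.0, 2.0])
    w = np.array([3.0, 4.0])

    M = np.outer(v, w)               # v (x) w as the rank-1 matrix v.w^t
    print(np.linalg.matrix_rank(M))  # 1

    # the bilinear form (v* (x) w*)(x, y) = v*(x).w*(y) = x^t M y
    x = np.array([1.0, -1.0])
    y = np.array([0.5, 2.0])
    print(np.dot(v, x) * np.dot(w, y), x @ M @ y)  # both -9.5
    [/code]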
    This isn't true either. The confusion comes from three different representation spaces, and from an identification of ##\begin{bmatrix}0\\1\end{bmatrix} \otimes \begin{bmatrix}1\\0\end{bmatrix}-\begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}0\\1\end{bmatrix}## with an element of ##SU(2)## or ##\mathfrak{su}(2)##, which it incidentally is. But it's clearly said that the element is considered a vector of ##V^1\otimes V^1## and not an element of ##SU(2)## or ##\mathfrak{su}(2)##. (O.k., I admit I've found ##V^1=\mathbb{C}^2## in Woit's book, which he didn't link to.) Now, once it is clear what the representation space is, there is no second interpretation of the stated invariance. How coordinates are arranged wasn't the issue.
     
    Last edited: May 5, 2017
  10. May 5, 2017 #9
    Ha, it's really not that bad! See for instance the chapter on the addition of angular momentum in Shankar's book, where the addition of two spins is done like this. It's also kind of necessary to think this way when we block-diagonalize the 4x4 matrix and extract the irreducible representations.

    The usual direct product rules are
    [tex]\pmatrix{a\\b}\otimes \pmatrix{x & y} = \pmatrix{ax & ay\\bx & by}[/tex]
    [tex]\pmatrix{a\\b}\otimes \pmatrix{x\\y} = \pmatrix{ax\\ay\\bx\\by}[/tex]
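    (Both rules agree with NumPy's Kronecker product, e.g. in this little sketch of mine:)

    [code]
    import numpy as np

    a, b, x, y = 1, 2, 3, 4
    col  = np.array([[a], [b]])
    row  = np.array([[x, y]])
    col2 = np.array([[x], [y]])

    print(np.kron(col, row))   # [[ax, ay], [bx, by]]     -- a 2x2 matrix
    print(np.kron(col, col2))  # [[ax], [ay], [bx], [by]] -- a 4x1 column
    [/code]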

    I think this confusion might be just yours though :smile:

    The object you quote represents the quantum state of two spin 1/2 objects - that's why it's clear that it must be a vector in the representation space.

    It's interesting to read your point of view! Is your background maybe more pure mathematical than physical?
     
  11. May 5, 2017 #10

    fresh_42

    Staff: Mentor

    And in which: ##GL(\mathbb{C}^2)## or ##GL(\mathfrak{su}(2))##? Both are valid representations of ##SU(2)##. And ##GL(SU(2))## is a third one. And these are only the "natural" ones. It's pretty sloppy to talk of representations or generators without actually saying which one is meant. I know the hand-waving sloppiness of physicists here, but it isn't a good way to learn these things.
     
  12. May 5, 2017 #11
    [itex]\mathbb{C}^2[/itex], of course! That's where electron spins live.

    I totally plead guilty to being a sloppy hand waving physicist.

    :oops:
     
  13. May 7, 2017 #12
    fresh_42 & Oxvillian,

    many thanks for your discussion, which helps me to gain different angles and views on this subject!

    I've also read fresh_42's Insight article, which I'm finding very useful. If, as you say, ##SU(n)## representations were always denoted by a triplet ##(SU(n),V,\phi)##, that would certainly make the life of learners like me easier (although I guess it would get boring for those who are already well versed in the subject matter).

    I think I'll spend some more time studying and reviewing this material, and then proceed with Prof. Woit's notes.

    Thanks again

    Gianni
     