Is the Tensor Product of SU(2) Representations Reducible?

  • Thread starter: parton
  • Tags: Representations
Hi,

I have a problem. Consider the representation of SU(2) that maps every U \in SU(2) to itself, i.e. U \mapsto U, on the vector space V = \mathbb{C}^{2} with basis vectors e_{1} = (1,0) and e_{2} = (0,1).

How do I show that the tensor (Kronecker) product of this representation with itself, acting on V \otimes V, is reducible?

Unfortunately, I don't know how to do that. Does anyone have an idea?
 
By showing it is not simple, which is, um, simple...
 
I forgot to write that I need to know the invariant subspaces explicitly. I've only had (very) basic representation theory, so I don't know much about it.

I thought I had to find a similarity transformation that puts every product matrix into block-diagonal form, but I have no idea how to find such a transformation. Maybe it has something to do with the Pauli matrices, which are the generators of the su(2) algebra. I also know that every matrix in SU(2) can be written as \exp(i \vec{\sigma} \cdot \vec{\alpha}/2), where \vec{\alpha} = (\alpha_{1}, \alpha_{2}, \alpha_{3}) is a set of real parameters, but I'm not sure that helps here.
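That parametrization is easy to verify symbolically. Here is a minimal sympy sketch (added for illustration, using the special case \vec{\alpha} = (0, 0, \alpha)):

[code]
# Sketch: exp(i*alpha*sigma_3/2) is unitary with determinant 1, i.e. it lies in SU(2).
from sympy import I, Matrix, eye, simplify, symbols

alpha = symbols('alpha', real=True)
sigma3 = Matrix([[1, 0], [0, -1]])   # third Pauli matrix

U = (I * alpha * sigma3 / 2).exp()   # matrix exponential: diag(e^{i a/2}, e^{-i a/2})
print(simplify(U.det()))             # 1
print(simplify(U * U.H - eye(2)))    # zero matrix, so U is unitary
[/code]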
 
Write down an invariant subspace and appeal to complete reducibility. Just play with it and think for a while.
 
Ohh, at first I used the wrong definition of the product representation, which is why I was completely unable to find an invariant subspace.

Ok, now I've found one: W = span \left( e_{1} \otimes e_{2} - e_{2} \otimes e_{1} \right) = span \left( \begin{pmatrix} 0 \\ 1 \\ -1 \\ 0 \end{pmatrix} \right).

Ok, but now I don't know what to do next. The complementary space doesn't seem to be invariant, so what should I do?
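To make the invariance of W concrete, here is a minimal sympy sketch (added for illustration; the matrix form of U below is in SU(2) when \vert a \vert^{2} + \vert b \vert^{2} = 1):

[code]
# Sketch: w = e1 (x) e2 - e2 (x) e1 is fixed by U (x) U for a generic SU(2)-shaped U.
from sympy import Matrix, conjugate, simplify, symbols
from sympy.physics.quantum import TensorProduct

a, b = symbols('a b')
U = Matrix([[a, b], [-conjugate(b), conjugate(a)]])  # unitary iff |a|^2 + |b|^2 = 1

UU = TensorProduct(U, U)   # the 4x4 Kronecker product
w = Matrix([0, 1, -1, 0])  # e1 (x) e2 - e2 (x) e1

print(simplify(UU * w))    # (|a|^2 + |b|^2) * w, i.e. exactly w when U is in SU(2)
[/code]

So W is not just invariant: every U \otimes U acts on it as the identity, i.e. it carries the trivial representation.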
 
parton said:
Ok, but now I don't know what to do next. The complementary space doesn't seem to be invariant, so what should I do?

"The" complementary space? Any non-trivial subspace of a vector space has <i>infinitely many</i> subspaces. Why do you need to find one explicitly? You know the theorem of complete reducibility holds, right? If not, try again at playing around with things. What you've written down is called many things such as the second exterior power or the <i>antisymmetric</i> part of the tensor space. What might the symmetric part be?
 
I don't know the theorem of complete reducibility, but I hope I can solve this without it :-)

I think the symmetric part would be:

span \left( e_{1} \otimes e_{2} + e_{2} \otimes e_{1} \right) = span \left( \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} \right).

I just thought that if W is an invariant subspace, then the orthogonal complement
W^{\bot} = span \left( \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} \right)
is invariant too, but here it seems not to be. And I don't see how to find other invariant subspaces explicitly.

Maybe my approach isn't good. I took a matrix from SU(2) and wrote it as follows:

U = \begin{pmatrix} a & b \\ - b^* & a^* \end{pmatrix}

Then I calculated the product U \otimes U and obtained a 4 \times 4 matrix. There I saw that e_{1} \otimes e_{2} - e_{2} \otimes e_{1} is an eigenvector of this matrix. But now I've got stuck and don't know how to go on and "play around" with things.
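As a side check (a sketch added here, not part of the original exchange): the complement above can be tested directly, and it does come out invariant, because having equal second and third components is preserved by U \otimes U:

[code]
# Sketch: the subspace {v in C^4 : v_2 = v_3}, i.e. the complement above, is invariant.
from sympy import Matrix, conjugate, simplify, symbols
from sympy.physics.quantum import TensorProduct

a, b = symbols('a b')
U = Matrix([[a, b], [-conjugate(b), conjugate(a)]])
UU = TensorProduct(U, U)

basis = [Matrix([1, 0, 0, 0]), Matrix([0, 1, 1, 0]), Matrix([0, 0, 0, 1])]
for v in basis:
    image = simplify(UU * v)
    print(image[1] - image[2])   # 0 each time: the image stays inside the subspace
[/code]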
 
You have found part of the symmetric component. Now try to find some more things.

Why are you stuck? Why can't you just think about what happens when you apply group elements to some elementary vectors? What happens if you apply a generic element of SU_2 to, say, v \otimes v for any vector v? Just as a vector subspace does not have a unique complement, it does not have a unique orthogonal complement. Orthogonality is not an inherent characteristic; it is the product of a choice of inner product, and there are infinitely many of those. If your inner product were SU_2 invariant, then you would have something interesting...
 
Ok, I took a general matrix of SU(2):

U = \begin{pmatrix} a &amp; b \\ - b^* &amp; a^* \end{pmatrix}

and calculated U \otimes U which is:

\begin{pmatrix} a^{2} & ab & ba & b^{2} \\ - ab^* & \vert a \vert^{2} & - \vert b \vert^{2} & ba^* \\ - b^* a & - \vert b \vert^{2} & \vert a \vert^{2} & a^* b \\ (b^*)^{2} & -b^* a^* & - a^* b^* & (a^*)^{2} \end{pmatrix}.

And I applied it to some vectors, for example e_{1} \otimes e_{1}, e_{1} \otimes e_{2}, ..., but I don't "see" anything special.

Now I took a general vector v = \begin{pmatrix} x \\ y \end{pmatrix} and calculated v \otimes v = \begin{pmatrix} x^{2} \\ xy \\ xy \\ y^{2} \end{pmatrix}.

Now I apply U \otimes U to v \otimes v with the U defined above.

(U \otimes U) (v \otimes v) = \begin{pmatrix} a^{2} x^{2} + 2 abxy + b^{2} y^{2} \\ - ab^* \, x^{2} + xy ( \vert a \vert^{2} - \vert b \vert^{2} ) + b a^* \, y^{2} \\ - ab^* \, x^{2} + xy ( \vert a \vert^{2} - \vert b \vert^{2} ) + b a^* \, y^{2} \\ (b^*)^{2} x^{2} - 2 a^*b^* \, xy + (a^*)^{2} y^{2} \end{pmatrix}.

But what should I do with that?
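The pattern in this last vector is the mixed-product property of Kronecker products: (U \otimes U)(v \otimes v) = (Uv) \otimes (Uv). A short sympy sketch (added for illustration) confirming this symbolically:

[code]
# Sketch: (U (x) U)(v (x) v) = (Uv) (x) (Uv) -- "diagonal" vectors map to "diagonal" vectors.
from sympy import Matrix, conjugate, symbols
from sympy.physics.quantum import TensorProduct

a, b, x, y = symbols('a b x y')
U = Matrix([[a, b], [-conjugate(b), conjugate(a)]])
v = Matrix([x, y])

lhs = TensorProduct(U, U) * TensorProduct(v, v)
rhs = TensorProduct(U * v, U * v)
print((lhs - rhs).expand())   # the zero vector
[/code]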
 
  • #10
Boy, that's complicated.

I'm just going to write (v,v) rather than the tensor product, 'cos I can't be bothered with LaTeX for this one.

What is g(v,v)? It is (gv,gv). You don't see anything? What if I write w = gv, so we have (w,w)? I've taken something "diagonal" and gotten something "diagonal". What's the smallest subspace that contains the pairs (v,v)? What's a basis for it?

Have you looked for references to symmetric powers anywhere?
 
  • #11
Sooory, but I'm dazed. What do you mean, you've taken something "diagonal" and gotten something "diagonal"?

Maybe I'm too silly to understand this, but I still don't "see" anything. Could you be more explicit, please? Maybe then I'll understand it better (at least I hope so).
 
  • #12
I can't be more specific: something of the form v\otimes v is "diagonal". What is the smallest vector subspace containing this clearly SU_2 invariant set?
 
  • #13
If I take a vector v, let's say

v = \begin{pmatrix} x \\ y \end{pmatrix}

and I consider the product

v \otimes v = \begin{pmatrix} x^{2} \\ xy \\ xy \\ y^{2} \end{pmatrix},

then I only see that two entries are identical, namely xy, and that the first and last entries are the squares x^{2} and y^{2}. But I don't understand what is "diagonal" here, or what the smallest vector space is (maybe V \otimes V = \mathbb{C}^{4}?).
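One concrete way to attack the question from post #12: sample a few vectors v, stack the corresponding v \otimes v as rows, and row-reduce. A sympy sketch (added for illustration; the sample vectors are arbitrary choices):

[code]
# Sketch: the span of {v (x) v} -- three samples already give a 3-dimensional space.
from sympy import Matrix
from sympy.physics.quantum import TensorProduct

samples = [Matrix([1, 0]), Matrix([0, 1]), Matrix([1, 1])]
M = Matrix.hstack(*[TensorProduct(v, v) for v in samples]).T

print(M.rank())      # 3
print(M.rref()[0])   # row space = {w in C^4 : w_2 = w_3}, the symmetric part
[/code]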
 
  • #14
Here's a different method.

Consider the involution S(u \otimes v) = v \otimes u. Extend it to a linear map on V \otimes V. What is its minimal polynomial? What are its eigenspaces? Hint: -1 is an eigenvalue, and you've already written down the eigenspace for it.

Equivalently, you've already found the subspace A such that S(a) = -a for all a in A. You're trying to find the subspace B such that S(b) = b for all b in B. Then you must show that these are subrepresentations and that together they span all of V \otimes V.
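In the basis e_{1} \otimes e_{1}, e_{1} \otimes e_{2}, e_{2} \otimes e_{1}, e_{2} \otimes e_{2}, the swap S is a permutation matrix, so its minimal polynomial and eigenspaces can be read off mechanically. A sympy sketch (added for illustration):

[code]
# Sketch: the swap S(u (x) v) = v (x) u on C^2 (x) C^2; S^2 = I, so eigenvalues are +-1.
from sympy import Matrix, eye

S = Matrix([[1, 0, 0, 0],
            [0, 0, 1, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 1]])   # exchanges e1 (x) e2 and e2 (x) e1

assert S**2 == eye(4)        # minimal polynomial divides x^2 - 1
for eigenvalue, multiplicity, vectors in S.eigenvects():
    print(eigenvalue, [list(v) for v in vectors])
# -1: span{(0, 1, -1, 0)}                             -> antisymmetric part, dim 1
# +1: span{(1, 0, 0, 0), (0, 1, 1, 0), (0, 0, 0, 1)}  -> symmetric part, dim 3
[/code]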
 
  • #15
matt grime said:
I can't be more specific: something of the form v\otimes v is "diagonal". What is the smallest vector subspace containing this clearly SU_2 invariant set?

I'm currently learning representation theory and Lie algebras.

From what I have understood, the finite-dimensional irreducible representations of su(2) are labelled by their spin. Since the representation here is the defining one, the label is spin one-half. The basis vectors can be specified by
J^{2} \vert j, m \rangle = j(j+1) \vert j, m \rangle
J_{3} \vert j, m \rangle = m \vert j, m \rangle

Am I doing something silly here?

The tensor product of two irreducible spin representations j_{1} and j_{2} is always reducible (provided both spins are nonzero)!
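For reference, the standard Clebsch-Gordan series makes this precise:

D^{(j_{1})} \otimes D^{(j_{2})} = \bigoplus_{j = \vert j_{1} - j_{2} \vert}^{j_{1} + j_{2}} D^{(j)}.

In particular \tfrac{1}{2} \otimes \tfrac{1}{2} = 0 \oplus 1: the spin-0 singlet is exactly the antisymmetric subspace W found above, and the spin-1 triplet is the three-dimensional symmetric subspace.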
 