Solving SO(3) Irreps: Find Eigenvectors & Eigenvalues of X_3

LAHLH
Hi,

I just wondered if someone could check that my understanding is correct on this topic. I understand that to find the irreps of a group we can find the irreps of the associated Lie algebra, i.e. in the case of SO(3) find irreducible matrices satisfying the commutation relations \left[X_i,X_j\right]=i\epsilon_{ijk}X_k.

I of course recognise that these are the commutation relations that the angular momentum operators satisfy in standard QM. Thus if we know the eigenvectors/eigenvalues of the angular momentum operators, we can choose to work in the basis of eigenvectors, thus diagonalising, say, X_3 as \langle jm'\mid X_3\mid jm\rangle=m\delta_{m'm}. What I don't really understand, however, is how this is an irrep, since clearly if the X_3 matrix is diagonal then it is further reducible?

Thanks
 
Yes, but the remaining generators (X_1 and X_2) are not diagonal in this basis. Reducibility of a representation means that all representatives of the algebra / group elements can be simultaneously block diagonalized.
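To make this concrete, here is a small numpy sketch (my own illustration, not from the thread) of the spin-1 case: X_3 (= J_z) is diagonal in the \mid j,m\rangle basis, but J_x and J_y are not, so no single basis block-diagonalises all three generators at once.

```python
import numpy as np

# Spin-1 (j = 1) matrices in the |j, m> basis, ordered m = +1, 0, -1.
s2 = np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0])
Jp = np.array([[0, s2, 0],
               [0, 0, s2],
               [0, 0, 0]])   # raising operator J_+
Jm = Jp.T                    # lowering operator J_- (J_+ is real, so dagger = transpose)
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)

# Jz is diagonal, but Jx is not:
print(np.allclose(Jz, np.diag(np.diag(Jz))))   # True
print(np.allclose(Jx, np.diag(np.diag(Jx))))   # False

# They satisfy the algebra [Jx, Jy] = i Jz:
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)) # True
```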

By the way, the Lie algebra of SO(3) is also the Lie algebra of SU(2). The irreps you obtain from the Lie algebra are irreps of both SO(3) and SU(2). But you actually have "more" irreps for SU(2) than for SO(3) (to be more precise: some irreps of the algebra lead to the same irrep for SO(3) but different irreps for SU(2))

This is all a consequence of the fact that SU(2) is the universal covering group of SO(3) -- the double cover, to be precise. In layman's terms this means that SU(2) is "bigger" than SO(3). To be more precise again: there is a group homomorphism from SU(2) to SO(3) which is two-to-one (so two elements of SU(2) are mapped to the same element of SO(3)).
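The two-to-one homomorphism can be checked numerically. A minimal numpy sketch (my own, using the standard formula R_{ij} = \tfrac{1}{2}\mathrm{Tr}(\sigma_i U \sigma_j U^\dagger) for the covering map): both U and -U in SU(2) map to the same rotation in SO(3).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def so3_of(U):
    """The 2-to-1 homomorphism SU(2) -> SO(3): R_ij = (1/2) Tr(s_i U s_j U^dagger)."""
    return np.array([[0.5 * np.trace(si @ U @ sj @ U.conj().T).real
                      for sj in paulis] for si in paulis])

theta = 0.7
# SU(2) element for a rotation by theta about the z axis: exp(-i theta sigma_z / 2)
U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

R = so3_of(U)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(np.allclose(R, Rz))          # True: U maps to the z-rotation by theta
print(np.allclose(so3_of(-U), R))  # True: U and -U give the same rotation
```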
 
xepma said:
Yes, but the remaining generators (X_1 and X_2) are not diagonal in this basis. Reducibility of a representation means that all representatives of the algebra / group elements can be simultaneously block diagonalized.

Thanks for the quick reply. OK, that makes sense and shows me the rep isn't obviously reducible, as I first thought when I saw one element was diagonal. However I am still wondering how we know this is the way to find the irreps? What does going into the basis of eigenvectors of X_{3} have to do with this? Why does it guarantee an irrep?
 
After thinking about this some more, would I be correct in saying that the reason expressing the generators in this basis of \mid j,m\rangle forms an irrep is that the operators J_x, J_y and J_z act on this basis simply by transforming one basis vector into a linear combination of other basis vectors in the set (i.e. same value of j, but different m's; in the conventional basis J_z\mid j,m\rangle=m\mid j,m\rangle, but J_x or J_y will transform such a basis vector into a combination of ones with the same j but different m's)? Thus the (2j+1)-dimensional vector space is irreducible, with no submodules?
 
An irreducible representation is a representation that is irreducible. It's a representation if the generators take basis vectors to linear combinations of other basis vectors. It's irreducible if you can "get from one basis vector to another" by applying some sequence of linear combinations of generators. As you say, the generators take vectors with a given value of j to other vectors with the same value of j, so the set of all vectors with a given value of j is a representation. The fact that this representation is irreducible is easiest to see in the standard basis:

The convenient thing about thinking in a basis in which, say, J_z is diagonal is that there exist the raising and lowering operators J_\pm = J_x \pm i J_y, which take an eigenstate of J_z to one with a higher or lower eigenvalue. So take any state and hit it a bunch of times with J_+, until you get a "maximal" state, which, if you hit it with J_+ again, gives 0. Then take this state and hit it with J_- a bunch of times, generating a sequence of states down to a "minimal" state, which, if you hit it again with J_-, gives 0. If you take this sequence of states as a basis you get an irreducible representation, because you can transform the states into each other by repeatedly applying J_+ or J_- (and you know the sequence of states are all orthogonal to each other because they have different eigenvalues of J_z).
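This climbing argument is easy to see explicitly. A small numpy sketch (my own illustration) for spin 1: starting from the minimal state, repeated applications of J_+ reach every basis vector, and raising the maximal state gives 0.

```python
import numpy as np

# Spin-1 raising operator in the |j, m> basis (m = +1, 0, -1), as a sketch
# of the "climb until you hit zero" argument.
s2 = np.sqrt(2)
Jp = np.array([[0, s2, 0],
               [0, 0, s2],
               [0, 0, 0]])

lowest = np.array([0.0, 0.0, 1.0])   # minimal state |j=1, m=-1>
up1 = Jp @ lowest                    # proportional to |1, 0>
up2 = Jp @ up1                       # proportional to |1, +1>
top = Jp @ up2                       # raising the maximal state gives 0

print(np.allclose(top, 0))           # True

# The states reached span the whole 3-dimensional space, so there is no
# invariant proper subspace -> the representation is irreducible.
basis = np.array([lowest, up1, up2])
print(np.linalg.matrix_rank(basis))  # 3
```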
 
The_Duck said:
An irreducible representation is a representation that is irreducible. It's a representation if the generators take basis vectors to linear combinations of other basis vectors. It's irreducible if you can "get from one basis vector to another" by applying some sequence of linear combinations of generators.

Thanks a lot, this is finally clicking into place now, I think. One question I have on this point: why does it have to be irreducible under "linear combinations" (i.e. the J_{\pm}) and not just the generators themselves (i.e. J_x, J_y, J_z)? What if some submodule of the 2j+1 vectors were closed under just J_x, J_y, J_z, whereas, as you discussed, J_{\pm} can take us from any of the 2j+1 vectors to any other?

Hope that makes sense...
 
Well I guess if a subspace is closed under all the generators it is also closed under linear combinations of the generators, yes?
 
If it's closed under the generators, it's closed under the action of the Lie algebra (it's the latter which you want). Whatever set of generators you take doesn't matter for the closure.

Just like Duckie said ;)
 
Although the construction is correct, there is a minor pedantic point one might want to wonder about. The Lie algebra \mathfrak{su}(2) is the set of hermitian** traceless 2\times 2 matrices, a real vector space. J_{\pm} are complex linear combinations of the generators, so they are not part of the Lie algebra (or in other words, they are not hermitian). Thus one cannot think of going from (J_x, J_y, J_z) to (J_+, J_-, J_z) as a change of basis in \mathfrak{su}(2).

**: Or anti-hermitian, depending on if one uses the definition of mathematicians or physicists.
 
I've been confused by this in the past; my conclusion was that when in physics we talk about su(2) we're usually talking about the complexification of su(2) which is the same as sl(2, C) and contains J_\pm. Is this accurate?
 
The_Duck said:
I've been confused by this in the past; my conclusion was that when in physics we talk about su(2) we're usually talking about the complexification of su(2) which is the same as sl(2, C) and contains J_\pm. Is this accurate?

Yes, exactly, complexification is the key to understanding this. As you said, the complexification \mathfrak{su}(2)_{\mathbb C} is isomorphic to \mathfrak{sl}(2,\mathbb C). One can prove that a finite-dimensional representation \pi of a real Lie algebra \mathfrak g extends uniquely to the complexification \mathfrak g_{\mathbb C}, and the extension satisfies
\pi(X + iY) = \pi(X) + i\pi(Y), \qquad X, Y\in\mathfrak g.
Furthermore, irreducibility is preserved, so working with \mathfrak{sl}(2,\mathbb C) is okay. The reason to do it is that we then have J_{\pm}, which makes life simpler.
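As a quick numerical illustration (my own, for spin 1/2 with J_i = \sigma_i/2): J_+ is traceless but not hermitian, so it lies in \mathfrak{sl}(2,\mathbb C) but not in \mathfrak{su}(2) (with the physicists' hermitian convention).

```python
import numpy as np

# Spin-1/2: J_i = sigma_i / 2. J_+ = J_x + i J_y is a complex combination of
# the generators -- not hermitian, so not in the real Lie algebra su(2),
# but perfectly fine in its complexification sl(2, C).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
Jx, Jy = sx / 2, sy / 2

Jp = Jx + 1j * Jy
print(Jp)                            # [[0, 1], [0, 0]]
print(np.allclose(Jp, Jp.conj().T))  # False: not hermitian, so not in su(2)
print(np.isclose(np.trace(Jp), 0))   # True: traceless, so in sl(2, C)
```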

You are not the only person who used to be confused about this; physics books are usually vague about these points. I once read a book saying that J_{\pm} was in the group SU(2), which clearly wasn't true! (A similar construction is used for the Lorentz group SO(3,1), and the standard physicist presentation of it is much more problematic. At least I found it very confusing!)
 