Linearisation of Lie Group

1. May 18, 2014

CAF123

Higher-dimensional groups are parametrised by several parameters (e.g. the three-dimensional rotation group SO(3) is described by the three Euler angles). Consider the following ansatz: $$\rho_1 = \mathbf{1} + i \alpha^a T_a + \frac{1}{2} (i\alpha^a T_a)^2 + O(\alpha^3)$$
$$\rho_2 = \mathbf{1} + i \beta^b T_b + \frac{1}{2} (i\beta^b T_b)^2 + O(\beta^3)$$
$$\rho_3 = \mathbf{1} + i \gamma^c T_c + \frac{1}{2} (i\gamma^c T_c)^2 + O(\gamma^3),$$ where summation over the indices $a,b,c$ is understood and $a,b,c = 1, \dots, \dim(\text{Lie algebra})$ (e.g. for SO(3), $a,b,c = 1,2,3$, corresponding to the three Euler angles).

I don't really understand what these equations are saying - could someone explain? My thoughts are that the $\rho_i$ are rotation matrices in 3-space, where the rotation is about an axis $\underline{n} = \langle \alpha^1, \alpha^2, \alpha^3 \rangle$, the $\alpha^i$ specifying a choice of Euler angles. In which case, the $T_a$ would be the generators of the SO(3) Lie algebra and the $\rho_i$ are elements of the group SO(3)?

Many thanks.

2. May 18, 2014

Einj

What you are asking about is basically the difference between an element belonging to the group and an element belonging to its algebra (the two are related by the exponential map: exponentiating an element of the algebra yields an element of the group).

Let's start with a few definitions. Suppose you have a generic Lie group $\mathcal{G}$, depending on a set of continuous parameters $\vec{\alpha}$. One also supposes that the parameters are defined such that when they are all zero, all the elements of the group reduce to just the identity, $g(\vec\alpha=0)=1$, and the same for every representation $D(\vec\alpha)$. Then the definition of a generator of the group is:
$$\left.T^a=-i\frac{\partial D(\vec\alpha)}{\partial \alpha_a}\right|_{\vec\alpha=0}.$$

This means that, starting from the identity, you can form the so-called algebra of the group by taking every possible infinitesimal displacement from the identity:
$$D(\vec\alpha)=1+i\alpha^aT^a.$$

The idea is that you can generate every possible element of the group by composing infinitesimal displacements from the identity, i.e. you can write the elements of the group as:
$$D(\vec\alpha)=e^{i\alpha^aT^a}.$$

From a geometrical point of view, you are saying that the Lie group can be thought of as a curved space such that the algebra is its locally flat region around the origin (the identity).
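The exponentiation above is easy to check numerically. The following sketch (my own illustration, not from the thread; it assumes numpy and scipy are available) exponentiates an so(3) generator and verifies that the result is an SO(3) rotation, and that to first order it reduces to the identity-plus-displacement form:

```python
# Sketch: exponentiating an so(3) generator produces an SO(3) rotation matrix.
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the z-axis, in the real antisymmetric
# convention R = expm(theta * L_z) (rather than the physicist's e^{i theta T}).
L_z = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 0.0]])

theta = 0.3
R = expm(theta * L_z)

# R is the familiar rotation about z, embedded as a 3x3 matrix.
R_expected = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
assert np.allclose(R, R_expected)

# Group properties: orthogonal with determinant +1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# For infinitesimal theta, expm(theta*L_z) ~ 1 + theta*L_z: the "algebra" view.
eps = 1e-6
assert np.allclose(expm(eps * L_z), np.eye(3) + eps * L_z, atol=1e-10)
```

The last assertion is the "locally flat region around the identity" in concrete form: close to the origin of parameter space, the group element is just the identity plus a linear displacement along the generator.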

3. May 18, 2014

CAF123

Hi Einj,
Is this reminiscent of a Taylor expansion about a point? E.g. the value of $D$ for $\vec\alpha$ near the origin (identity) is given by $$D(\vec \alpha \approx 0) = D(\vec \alpha = 0) + \left.\frac{\partial D(\vec \alpha)}{\partial \alpha^a}\right|_{\vec\alpha = 0} \alpha^a + O(\alpha^2)$$

Is there any reason why $T^a$ is defined like that? (It is clear from that definition though that it generates the tangent space of the corresponding group manifold).

So this is the linearization of the manifold, corresponding to the Lie algebra.

So, in the case at hand, the $D(\vec \alpha)$ represent three linearly independent rotation matrices in 3-space about axes specified by $\vec{n_1} = \langle \alpha^1, \alpha^2, \alpha^3 \rangle, \vec{n_2} = \langle \beta^1, \beta^2, \beta^3 \rangle, \vec{n_3} = \langle \gamma^1, \gamma^2, \gamma^3 \rangle$, where $\alpha^i, \beta^i, \gamma^i$ are Euler angles specifying the orientation of the axes in 3-space? Is that what the $\rho_i$ represent in the OP?

Thanks.

4. May 18, 2014

Einj

The reason is exactly that one. In this way you can think of the generators $T^a$ as the "unit vectors" of your flat algebra. By taking suitable combinations $\alpha^a T^a$ of these generators you can span the whole flat space.

Yes, the Lie algebra is defined as the flat region around the origin of the manifold, which is the Lie group.

Yes, but keep in mind that the definitions given above of the Lie algebra, generators and so on can be more general: they are valid for every vector space. The Euler angles are the "directions" of the algebra when you are considering the representation of the rotation group acting on the usual three-dimensional vector space, $\vec v=(v_x,v_y,v_z)$. The definitions given above can be applied to representations of the group acting on any vector space. For example, in quantum mechanics, you can define the action of the rotation group on the quantum kets belonging to a Hilbert space. In that case the generators of the rotation group are the total angular momentum operators.

5. May 18, 2014

CAF123

Thanks Einj, that was very helpful.
In that case, what are the corresponding continuous group parameters $\alpha$? Could they be angles too?

I was wondering if you could help me make sense of the following statement, which ties in (perhaps) with what you wrote above: ''Consider the three dimensional vectors $\vec{v} \in \mathbf{R}^3$, which are $\ell = 1$ irreducible representations of the rotation group $SO(3)$. A scalar (= invariant) corresponds to an $\ell=0$ irreducible representation of $SO(3)$.''

The fundamental representation of SO(3) (corresponding to total angular momentum $j=1$) consists simply of the rotation matrices discussed above (I think). A $j=1$ irrep permits three linearly independent directions ($m = -1,0,1$), and so the state space is spanned by $|1,1\rangle$, $|1,-1\rangle$ and $|1,0\rangle$. In $\mathbf{R}^3$, I think these can be represented by the usual Cartesian unit vectors. I am struggling to see how the statements in quotation marks above come about. Does any of what I wrote here help?

Thanks.

6. May 18, 2014

Einj

Yes, you are right. Let's take a step back. In quantum mechanics you usually use kets to describe your system. The idea of introducing the kets is exactly that of working with something which is independent of the chosen basis.

As far as your first question is concerned, yes, the continuous parameters can be angles. In fact, one usually writes a generic rotation by an angle $\theta$ around an axis $\hat n$ as:

$$R(\theta,\hat n)=e^{i\theta\hat n\cdot \vec J},$$
where the $J_i$ are the total angular momentum operators (which may be expressed in any possible basis). It is clear that, for example in 3 dimensions, you can characterize this rotation with just the 3 angles $(\theta n_1,\theta n_2,\theta n_3)$ about the usual directions $(J_x,J_y,J_z)$.

Now, the idea is that, in your 3-dimensional case, you can abandon your basis-independent kets and use a specific basis, the usual $(\hat e_x,\hat e_y,\hat e_z)$. In this case you usually identify $|1,1\rangle\equiv (1,0,0)$, $|1,0\rangle\equiv(0,1,0)$ and $|1,-1\rangle\equiv(0,0,1)$. You can do that exactly because the $\ell=1$ representation is 3-dimensional, and therefore you can map each of its states to an ordinary Cartesian vector belonging to $R^3$.

The point is that, once you do that, the operators $\vec J$ that I wrote before must be expressed in the same basis. This means that they will be represented by three 3x3 matrices (you can find their explicit form here: http://en.wikipedia.org/wiki/Pauli_matrices).

It is clear that if a ket belongs to the $\ell=0$ representation, it is one-dimensional and can therefore be mapped to a scalar, which is clearly rotationally invariant. In the Lie algebra language, this means that the generator of rotations in the $\ell=0$ representation is just $T=0$, so that $D(\vec\alpha)=1$.
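The $j=1$ generators can be written out explicitly as 3x3 matrices in the $|1,m\rangle$ basis. A small sketch (my own, not from the thread; assumes numpy/scipy) builds them from the standard ladder-operator formulas and checks the angular momentum algebra:

```python
# Sketch: the j=1 angular momentum operators as explicit 3x3 matrices in the
# |1,m> basis (ordered m = 1, 0, -1), built from the ladder-operator formulas.
import numpy as np
from scipy.linalg import expm

# J_+|j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>, with hbar = 1 and j = 1.
s = np.sqrt(2.0)
J_plus = np.array([[0, s, 0],
                   [0, 0, s],
                   [0, 0, 0]], dtype=complex)
J_minus = J_plus.conj().T
J_x = 0.5 * (J_plus + J_minus)
J_y = -0.5j * (J_plus - J_minus)
J_z = np.diag([1.0, 0.0, -1.0]).astype(complex)

# They satisfy the so(3)/su(2) algebra: [J_x, J_y] = i J_z.
comm = J_x @ J_y - J_y @ J_x
assert np.allclose(comm, 1j * J_z)

# A rotation about z acts diagonally on |1,m> with phases e^{-i m theta}.
theta = 0.7
D = expm(-1j * theta * J_z)
assert np.allclose(D, np.diag(np.exp(-1j * theta * np.array([1.0, 0.0, -1.0]))))
```

The last check makes the basis choice concrete: in the $|1,m\rangle$ basis each state just picks up a phase under a rotation about $z$, exactly as the representation theory says.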

7. May 18, 2014

CAF123

I am wondering if you meant to say here 3x3 matrices that correspond to rotations in 3 dimensions (e.g. a rotation generated by the x component of the angular momentum, $J_x$, can be represented by the 3x3 matrix corresponding to a rotation in $R^3$ about the Cartesian x axis).

The $j=1/2$ irrep has 2 linearly independent directions, corresponding to a scaled version of the SU(2) Pauli representation that I think you mentioned above.

Is there a reason why we associate $j=1/2$ with SU(2) and not SO(2), for example? I also have in my notes that the half-integer values of $j$ correspond to SU(2) while the integer values of $j$ correspond to SO(3). What is the reason for this?

Thanks again!

8. May 18, 2014

Matterwave

I'd like to just point out a technicality here. You can only generate the connected component of the identity in this fashion. Disconnected groups, for example O(3), cannot be fully constructed this way. In fact O(3) has the exact same algebra as SO(3), and using this procedure will generate SO(3), the connected component of the identity of O(3). (You won't be able to get the matrices with determinant -1 if you try this procedure.)
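This is quick to verify numerically. The sketch below (my own illustration, assuming numpy/scipy) exponentiates random antisymmetric matrices, i.e. random elements of the shared o(3) = so(3) algebra, and confirms the result always has determinant +1, never -1; this follows from $\det(e^A) = e^{\operatorname{tr} A} = e^0 = 1$:

```python
# Sketch: exponentiating the o(3) = so(3) algebra never reaches the
# determinant -1 component of O(3), since det(expm(A)) = exp(trace(A)) = +1
# for any real antisymmetric A.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
for _ in range(5):
    M = rng.normal(size=(3, 3))
    A = M - M.T                               # random antisymmetric algebra element
    R = expm(A)
    assert np.allclose(R @ R.T, np.eye(3))    # orthogonal: R is in O(3)
    assert np.isclose(np.linalg.det(R), 1.0)  # always +1, never -1
```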

9. May 18, 2014

Einj

No, in this case the $J$ matrices are the generators. They are constants, independent of what rotation you are performing. The different rotations will be characterized by the different coefficients you use.

To be honest I don't remember that, I'm sorry. If I remember correctly SU(2) is isomorphic to SO(3), but I'm not super sure. You can check that in Georgi's book "Lie Algebras in Particle Physics"; it is a very complete reference for group theory and it's reasonably rigorous.

Yes you are right. I should have said that. Thanks!

10. May 18, 2014

CAF123

Hi Matterwave,
'By exponentiating the Lie algebra, one obtains the connected part of the Lie group, which is to say the part connected to the identity.'
So, in O(3), if the Lie algebra gives elements like $D(\alpha) = 1 + i\alpha^a T^a$, then by exponentiation we get back only the part connected to the identity. Could you explain this last statement, and what makes the matrices with determinant 1 the connected part?

Thanks.

11. May 18, 2014

CAF123

Are the Pauli matrices not 2x2, though? Sorry, what I meant was that the generators $J_i$ correspond identically to the rotation matrices in $R^3$ for rotations about the x, y, z axes?

I.e if $T_i$ label the 3 generators of SO(3) Lie algebra, then there is an association $J_i \equiv T_i$.

Thanks.

12. May 18, 2014

Einj

Yes, correct. The Pauli matrices are 2x2 because they belong to a different representation. In the j=1 representation you can represent the operators J by means of 3x3 matrices, in the j=1/2 by means of 2x2 matrices. The basic idea is still the same though.

13. May 18, 2014

CAF123

Thanks Einj, I have one last question for the time being.
Another statement from my notes: 'The representations $\rho_j$ themselves are given by $$\rho_j (\alpha)_{mm'} = \langle j,m | e^{ i \alpha^a T^a} | j,m' \rangle,$$ where $j$ denotes a representation, $\alpha$ enumerates the group elements and $m,m'$ parametrise the $(2j+1) \times (2j+1)$ dimensional matrix.'

Is this just saying the $mm'$ entry of the representation matrix is given by the inner product on the RHS?

It gives the following example (the Wigner D-matrix): $$D^j (\alpha, \beta, \gamma)_{mm'} = \langle j,m | e^{-i\alpha J_z} e^{-i\beta J_y} e^{-i\gamma J_z}|j,m' \rangle$$

14. May 18, 2014

Einj

Yes, absolutely. It's the usual thing: to compute the matrix elements of some operator, you just take the inner product of the operator between two basis vectors. Of course you must be careful to use the basis vectors of the right representation: since your operator is defined as belonging to the representation $j$, you must take the kets $|j,m_j\rangle$.
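For the simplest case $j=1/2$, the middle factor of the Wigner D-matrix quoted above can be computed directly as a matrix exponential and compared against its known closed form. A sketch (my own, assuming numpy/scipy):

```python
# Sketch: the matrix elements <j,m| e^{-i beta J_y} |j,m'> as a matrix
# exponential, for j = 1/2 where J_y = sigma_y / 2.
import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0, -1j], [1j, 0]])
J_y = 0.5 * sigma_y

beta = 1.1
d = expm(-1j * beta * J_y)   # the Wigner (small) d-matrix d^{1/2}_{mm'}(beta)

# Known closed form: [[cos(b/2), -sin(b/2)], [sin(b/2), cos(b/2)]].
c, s_ = np.cos(beta / 2), np.sin(beta / 2)
assert np.allclose(d, np.array([[c, -s_], [s_, c]]))
```

Each entry of `d` is exactly one inner product $\langle j,m|e^{-i\beta J_y}|j,m'\rangle$, which is the point being made: representing the operator in the $|j,m\rangle$ basis turns the abstract group element into a concrete $(2j+1)\times(2j+1)$ matrix.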

15. May 18, 2014

Matterwave

O(3) is the group of rotations in three space plus the "improper rotations", meaning the reflections which flip your orientation. In the latter case, acting canonically on $\mathbb{R}^3$, you will have matrices of determinant -1. SO(3) only consists of the proper rotations, which, in the same representation will all be matrices of determinant +1.

You can't smoothly construct a path from the part of O(3) of determinant +1 to the part of O(3) with determinant -1. In this way, O(3) is said to be a disconnected group.

Imagine, if you can, the group space as a manifold: O(3) would look like two pieces that are separated, rather than one piece.

Another way to think about it: given an orthogonal matrix of determinant +1, call it $R(0)$, you can't parametrize a continuous path $R(t)$ that stays in the orthogonal matrices (in other words, we require that $R(t)\in O(3)\ \forall t$) such that $R(t=T)$ has determinant -1 for ANY $T$. This is because the determinant of a continuous family of matrices is a continuous function, the determinant of an orthogonal matrix is either +1 or -1 (no values in between), and a continuous function cannot jump from +1 to -1.

@Einj: Regarding your uncertainty about SO(3) and SU(2): SU(2) is a double cover of SO(3); they are not isomorphic. SU(2) is diffeomorphic to the 3-sphere, and SO(3) is SU(2) with opposite points on the 3-sphere identified. They are not isomorphic because SO(3) is not simply connected, while SU(2) is. They are very similar, but this property leads to the weird feature of quantum mechanics that a rotation of $2\pi$ gives you a minus sign in your spin state rather than giving you the identity.
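The $2\pi$ minus sign is easy to see by direct computation. The following sketch (my own, assuming numpy/scipy) exponentiates a full $2\pi$ rotation about $z$ in both the $j=1/2$ and $j=1$ representations:

```python
# Sketch: a 2*pi rotation distinguishes SU(2) from SO(3). In the j = 1/2
# representation exp(-i 2pi J_z) = -1; in the j = 1 representation it is
# the identity.
import numpy as np
from scipy.linalg import expm

two_pi = 2 * np.pi

# j = 1/2: J_z = sigma_z / 2, eigenvalues +-1/2.
Jz_half = 0.5 * np.diag([1.0, -1.0]).astype(complex)
U = expm(-1j * two_pi * Jz_half)
assert np.allclose(U, -np.eye(2))   # a spinor picks up a minus sign

# j = 1: J_z = diag(1, 0, -1), integer eigenvalues.
Jz_one = np.diag([1.0, 0.0, -1.0]).astype(complex)
R = expm(-1j * two_pi * Jz_one)
assert np.allclose(R, np.eye(3))    # an ordinary vector returns to itself
```

The sign difference comes entirely from the half-integer versus integer eigenvalues of $J_z$: $e^{-2\pi i m}$ is $-1$ for half-integer $m$ and $+1$ for integer $m$.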

16. May 18, 2014

CAF123

Thanks Matterwave, can you help with the following questions?
Ok, so upon exponentiation we obtain only matrices of determinant +1, since the identity matrix has determinant +1 and the determinant cannot jump discontinuously along the exponential path.

I have read that the last sentence has some connections with the existence of fermions. Could you explain this further?

Edit: Einj, Matterwave I have some questions to do with tensor operators. Is it okay to post here or would you prefer it if I post in a new thread?

17. May 18, 2014

Einj

You should probably create a new thread

18. May 18, 2014

micromass

I don't really see why, it's his thread so it's not like he's hijacking it or anything

19. May 18, 2014

Einj

Yes but other people might be interested in the new topic.

20. May 18, 2014

Matterwave

SO(2) would not be a good candidate for representing rotations in 3-space because it has only 1 parameter, and therefore only 1 generator (dimension 1). It describes the rotations about 1 particular axis (it's a circle).

Whether we want to use SO(3) or its cover SU(2) is a different question, the answer to which is a bit subtle. I'm probably not the best person to give you the answer to this question right now, since it's been a while since I've examined this problem. But it's at least natural to use SU(2), since our state vectors are vectors over a complex field. SU(2) naturally acts on 2-component complex vectors; SO(3) acts naturally on 3-component real vectors. One can of course construct a group action of SO(3) on 2-component complex vectors (at least, I think we should be able to; I have not tried to do so myself), but it's not "natural", so to speak.

That's just a motivation for why we might use SU(2), but it's not rigorous.

21. May 21, 2014

CAF123

Matterwave, I read another thread you replied to in which you say that SU(2) is diffeomorphic to the 3-sphere (which I understand) and that SO(3) is the 3-sphere with antipodal points identified. Could you explain this last statement?

Thanks.

22. May 21, 2014

Matterwave

Intuitively speaking, there are 2 times as many elements in SU(2) as there are in SO(3) (actually both have an uncountably infinite number of them, so it's not really meaningful to count elements, but I'm just appealing to intuition here). SU(2) is a double cover of SO(3), which means that there is a 2-to-1 mapping of SU(2) onto SO(3). This mapping is defined by taking the two antipodal points on the 3-sphere and identifying them with 1 element of SO(3). In other words, there are pairs of elements in SU(2), the 3-sphere, which represent the same single rotation in SO(3).

To be specific, we would have to show how this mapping is done. But that is a little bit more involved, and you should be able to find this mapping in many Lie group theory books. A book I like is John Stillwell's Naive Lie Theory, which is good because he doesn't require the reader to know differential geometry, but still develops most of the uses of Lie theory (he has to stick with matrix Lie groups because of this restriction, but those are the ones we are talking about most often anyway).
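One standard explicit form of the 2-to-1 map (not spelled out in the thread; this is my own sketch, assuming numpy/scipy) sends $U \in SU(2)$ to the rotation $R_{ij}(U) = \frac{1}{2}\operatorname{Tr}(\sigma_i U \sigma_j U^\dagger)$. Since $U$ enters quadratically, $U$ and $-U$ map to the same rotation:

```python
# Sketch: the 2-to-1 map SU(2) -> SO(3) via conjugation of Pauli matrices,
# R_ij(U) = (1/2) Tr(sigma_i U sigma_j U^dagger). Both U and -U give the
# same R, which is the double-cover property.
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def so3_from_su2(U):
    R = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.real(np.trace(sigma[i] @ U @ sigma[j] @ U.conj().T))
    return R

theta = 0.9
U = expm(-1j * theta * sigma[2] / 2)      # spin-1/2 rotation about z

R1, R2 = so3_from_su2(U), so3_from_su2(-U)
assert np.allclose(R1, R2)                # antipodal points -> same rotation
assert np.allclose(R1 @ R1.T, np.eye(3))  # the image is orthogonal
assert np.isclose(np.linalg.det(R1), 1.0) # with determinant +1, i.e. in SO(3)
```

The first assertion is exactly the antipodal identification: the two SU(2) elements $\pm U$ land on one and the same SO(3) rotation.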
