# The representation of Lorentz group

1. Dec 8, 2013

### karlzr

The Lorentz group SO(3,1) is isomorphic to SU(2)*SU(2). Then we can use two numbers (m,n) to label the representation corresponding to the two SU(2) groups. I understand that (0,0) is a Lorentz scalar, and (1/2,0) or (0,1/2) is a Weyl spinor. What about (1/2,1/2)? I don't see why it corresponds to a Lorentz vector. Thanks a lot.

2. Dec 9, 2013

### fzero

This is a common mistake made in many places, but it is not true. What is true is that the Lorentz algebra $so(3,1)$ is isomorphic to $su(2)\oplus su(2)$. The Lorentz group $SO(3,1)$ is noncompact, which can be seen by noting that the elements obtained from the boosts, $\exp(i a K_i)$ with boost generators $K_i$, have a parameter $a$ (the rapidity) valued in all of the reals, rather than in a finite or periodic interval. Since $SU(2)\times SU(2)$ is compact, the groups cannot be isomorphic. What is true is that $SO(4) = [SU(2)\times SU(2)]/\mathbb{Z}_2$.

Since these are matrix groups, we can derive the representations of the group from those of the algebra. So the representation theory of $SO(3,1)$ corresponds in a particular way to that of $SU(2)\times SU(2)$, even though the groups aren't isomorphic.

To see the mapping, note that we can make a 4-vector from the identity matrix and the Pauli matrices as $\sigma^\mu = (1, \sigma^i)$. In terms of the spinors, $(\sigma^\mu)_{\alpha\dot{\alpha}}$ has an index $\alpha$ corresponding to $(1/2,0)$ and an index $\dot{\alpha}$ corresponding to $(0,1/2)$. These matrices are also invertible. We can therefore map any 4-vector $V^\mu$ to a matrix $V_{\alpha\dot{\alpha}} = (\sigma_\mu)_{\alpha\dot{\alpha}}V^\mu$. The object $V_{\alpha\dot{\alpha}}$ transforms as the $(1/2, 1/2)$ representation. Since the mapping was invertible, this shows that the $(1/2, 1/2)$ representation is isomorphic to the 4-vector representation.
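The mapping above is easy to check numerically. Here is a minimal sketch with NumPy (sign conventions for $\sigma_\mu$ vary; here $\sigma^\mu = (1, \sigma^i)$ is used directly): the determinant of $V_{\alpha\dot{\alpha}}$ reproduces the Minkowski norm $V^\mu V_\mu$, which is exactly the invariant that the $(1/2,1/2)$ transformation law preserves.

```python
import numpy as np

# sigma^mu = (identity, Pauli matrices)
sigma = np.array([
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
])

def to_matrix(V):
    """Map a 4-vector V^mu to the 2x2 matrix V_{alpha alphadot}."""
    return np.einsum('mab,m->ab', sigma, V)

V = np.array([3.0, 1.0, -2.0, 0.5])
M = to_matrix(V)

# det M = (V^0)^2 - |v|^2, the Minkowski norm; an SL(2,C) action
# M -> A M A^dagger has unit determinant factors and so preserves it.
minkowski_norm = V[0]**2 - V[1]**2 - V[2]**2 - V[3]**2
print(np.isclose(np.linalg.det(M).real, minkowski_norm))  # True
```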

3. Dec 9, 2013

### K^2

I'm just trying to fill in the gaps and my Group Theory is rusty. Does non-compactness follow from the fact that there can be chosen a sequence $\{a_n\}$, such as one diverging to $\pm \infty$, so that $\displaystyle \lim_n \exp(i a_n L_i)$ does not belong to $SO(3, 1)$?

4. Dec 9, 2013

### George Jones

Staff Emeritus
Groups of matrices are subsets of $\mathbb{R}^N$ for some appropriate $N$. Consequently, by the Heine-Borel theorem, compactness is equivalent to being closed and bounded.

Consider the subgroup of the Lorentz group $SO(3,1)$ that consists of boosts along a fixed axis (say, the x-axis) in a particular inertial reference frame. This group is homeomorphic to $\mathbb{R}$ and is unbounded as a set of matrices, so it is not compact.

5. Dec 9, 2013

### Bill_K

For a continuous group $G$, one defines a volume element in the group space, $dv = d\xi_1 \, d\xi_2 \cdots d\xi_n$, which is invariant under group operations. Then $V = \int_G dv$ is the total volume, and $G$ is compact or noncompact depending on whether $V$ is finite or infinite.
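This criterion can be illustrated for two one-parameter subgroups, a rotation (compact) and a boost (noncompact). A sketch with SymPy, assuming the invariant measure is just the parameter differential in each case:

```python
import sympy as sp

theta, a = sp.symbols('theta a', real=True)

# A rotation subgroup: one periodic parameter, invariant measure d(theta).
V_rotations = sp.integrate(1, (theta, 0, 2*sp.pi))   # finite

# A boost subgroup of SO(3,1): the rapidity a runs over all reals,
# so the group volume diverges.
V_boosts = sp.integrate(1, (a, -sp.oo, sp.oo))       # infinite

print(V_rotations, V_boosts)  # 2*pi oo
```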

6. Dec 9, 2013

### K^2

But can I simply say that $SO(3,1)$ does not contain its limit points, and is therefore, by definition, not compact? Or is there a nuance I'm missing?

7. Dec 9, 2013

### George Jones

Staff Emeritus
Yes, but see a more precise and explicit formulation below.

Limit point compactness and (actual) compactness are equivalent for second countable Hausdorff spaces, which is the situation here.

$A \subset X$ is limit point compact if every infinite subset of $A$ has a limit point. Negating this gives $A \subset X$ is not limit point compact if there exists an infinite subset of $A$ that does not have a limit point.

Take $A=SO(3,1)$ and $X=\mathbb{R}^{16}$ (4x4 matrices). It's not that $A$ doesn't contain all its limit points; it's that there are infinite subsets of $A$ that don't have any limit points.

I thought you (K^2) might want to see this directly for $A$. My example illustrates this. More explicitly, consider the infinite subset of $A$ that consists of matrices of the form

$$\begin{pmatrix} \cosh n & \sinh n & 0 & 0\\ \sinh n & \cosh n & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$$

for all integers $n$. There exist neighbourhoods of each such matrix that don't contain any other matrices of the same form. Thus this subset does not have *any* limit points (even ones that don't "live in" $A$).

Consequently, $A = SO(3,1)$ is not compact.
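The boost family above can be checked directly with NumPy: each matrix really is a Lorentz transformation, yet the entries grow without bound, so the set cannot be contained in any compact subset of $\mathbb{R}^{16}$.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

def boost(n):
    """The x-boost with integer rapidity n, as in the post above."""
    c, s = np.cosh(n), np.sinh(n)
    return np.array([[c, s, 0, 0],
                     [s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

# Each boost preserves the metric: Lambda^T eta Lambda = eta ...
for n in range(5):
    assert np.allclose(boost(n).T @ eta @ boost(n), eta)

# ... but the (0,0) entry is cosh(n), which grows like e^n / 2,
# so this infinite subset of SO(3,1) is unbounded.
print([float(np.round(boost(n)[0, 0], 1)) for n in range(4)])
```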

Last edited: Dec 9, 2013
8. Dec 9, 2013

### Bill_K

Yes, the only caveat is that you must use the group invariant topology.

9. Dec 13, 2013

### lpetrich

An invariant integral over group parameters will include a multiplier that's a function of the parameters. It can be shown that, for a given group and set of parameters for it, this multiplier is unique up to an overall constant factor. The resulting invariant measure is called the Haar measure.

The groups SO(p+q) and SO(p,q) are isomorphic for complex parameters, though not for real parameters: to turn SO(p+q) into SO(p,q), one replaces suitable parameters by imaginary combinations of them (for example, a rotation angle becomes i times a rapidity).

One can find the tensor combinations of vectors and spinors by using the theory of Lie-algebra representations. This is a generalization of the quantum-mechanical theory of angular momentum. I can go into more detail for anyone who might be interested, like work out 2-tensors.

10. Dec 26, 2013

### lpetrich

A representation consists of (1) a set of matrices that realize the group elements and (2) a basis set for those matrices to act on. For Lie algebras, like those of U(1), SU(n), and SO(n), it's often much easier to work with the basis sets than the matrices, and most of what one usually wants to know can be learned from the basis sets. In field theories, the structure of a field comes from basis sets of representations, or reps, of the symmetry groups that the field is subject to, such as space-time symmetries and gauge symmetries.

Of special interest are irreducible representations, or irreps. Their matrices cannot all be brought into the same block-diagonal form. If they can, then each block can be extracted to form a new set of rep matrices, making the original rep reducible.

Let's now look at the irreps of various algebras.

U(1) is easy. It has one operator L, and each irrep X(h) has an eigenvalue h which can have an arbitrary value:

L.X(h) = h*X(h)

Its rep matrices: {{exp(i*h*a)}} for parameter a.
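A two-line sketch of these 1x1 rep matrices in Python: the group law turns addition of parameters into multiplication of matrices, and tensoring two irreps adds their charges h.

```python
import cmath

def D(h, a):
    """The 1x1 rep matrix exp(i*h*a) of the U(1) irrep X(h)."""
    return cmath.exp(1j * h * a)

# Group law: D(a1) * D(a2) = D(a1 + a2) at fixed charge h.
assert cmath.isclose(D(3, 0.4) * D(3, 1.1), D(3, 1.5))

# Tensor product of irreps adds the charges: X(h1) x X(h2) = X(h1 + h2).
assert cmath.isclose(D(2, 0.7) * D(5, 0.7), D(7, 0.7))
```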

SU(n) is more complicated, but its basic ideas are fairly easy. It has a "fundamental representation", an n-vector: an object with one index that runs from 1 to n. Additional reps are formed as tensors, objects with several indices. Irreps have definite index symmetries: the 2-tensor irreps are the symmetric and antisymmetric tensors, and for 3-tensors and higher there are not only symmetric and antisymmetric irreps but also mixed-symmetry ones.
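The dimensions of the two 2-tensor irreps can be verified by simply counting independent components, as in this sketch: symmetric pairs give n(n+1)/2, antisymmetric pairs n(n-1)/2, and together they exhaust the n^2 components of a general 2-tensor.

```python
from itertools import combinations, combinations_with_replacement, product

def dims(n):
    """Count independent components of 2-tensors on an n-dim space."""
    total = len(list(product(range(n), repeat=2)))               # n^2
    sym = len(list(combinations_with_replacement(range(n), 2)))  # n(n+1)/2
    antisym = len(list(combinations(range(n), 2)))               # n(n-1)/2
    return total, sym, antisym

for n in (2, 3, 4):
    total, sym, antisym = dims(n)
    # The general 2-tensor splits into symmetric + antisymmetric parts.
    assert total == sym + antisym
    print(n, sym, antisym)
```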

There is a nice graphical technique for working with SU(n) reps: Young diagrams. There is even a Young-diagram technique for finding products of reps: the Littlewood-Richardson rule.

SO(n) is even worse. The vector and tensor reps carry over from SU(n) with some complications, and there are also spinor reps and various combinations of tensors and spinors (a vector is a 1-tensor).

For tensors, one has the complication that the tensors $\delta_{ij}$ (the Kronecker delta or identity matrix) and $\epsilon_{i_1 i_2 \cdots i_n}$ (the antisymmetric symbol) are invariant under SO(n), making them effectively scalars (singlets). Thus, a symmetric 2-tensor breaks down into a symmetric traceless 2-tensor and a scalar. For SO(2n), an antisymmetric n-tensor breaks down into two parts, a self-dual one and an anti-self-dual one, depending on the sign one gets when one multiplies it by the antisymmetric symbol.

I don't know of any Young-diagram-based technique for handling the effects of these SO(n) invariants.

Spinors are rather complicated. For SO(2n+1), there is one spinor irrep, with dimension $2^n$. But in SO(2n), it breaks into two irreps, both with dimension $2^{n-1}$.
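These dimensions can be seen concretely with the standard Kronecker-product construction of gamma matrices. A NumPy sketch for n = 2: the five 4x4 gammas of SO(5) satisfy the Clifford algebra, and dropping to SO(4) (the first four), the chirality operator splits the 4-dim spinor into two 2-dim halves.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Kronecker-product gamma matrices for SO(5); spinor dimension 2^2 = 4.
gammas = [np.kron(sx, I2), np.kron(sy, I2),
          np.kron(sz, sx), np.kron(sz, sy), np.kron(sz, sz)]

# Clifford algebra: {G_a, G_b} = 2 delta_ab.
for a, Ga in enumerate(gammas):
    for b, Gb in enumerate(gammas):
        assert np.allclose(Ga @ Gb + Gb @ Ga, 2 * (a == b) * np.eye(4))

# For SO(4), the product G1 G2 G3 G4 commutes with the so(4) generators
# [G_a, G_b]/4 and squares to 1, so its +-1 eigenspaces are the two
# spinor irreps, each of dimension 2^(2-1) = 2.
chirality = gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]
eigvals = np.linalg.eigvals(chirality)
print(sorted(int(round(v.real)) for v in eigvals))  # [-1, -1, 1, 1]
```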

11. Dec 26, 2013

### lpetrich

Let's now look at the lowest SO(n)'s.

SO(2) is isomorphic to U(1). Its vector rep is reducible: (1) + (-1). It has two spinor reps: (1/2) and (-1/2).

Looking at 2-tensors: its symmetric traceless tensor is (2) + (-2), the trace part (proportional to the Kronecker delta) is (0), and its antisymmetric tensor is a scalar: (0).

For SO(3) and higher, the vector and the symmetric traceless 2-tensor are both irreducible.

SO(3) is isomorphic to SU(2), and it's also the algebra of quantum-mechanical angular momentum. A scalar has angular momentum j = 0, a vector j = 1, a symmetric traceless 2-tensor j = 2, and a spinor j = 1/2. An antisymmetric 2-tensor is equivalent to a vector (j = 1), by multiplying by the antisymmetric symbol.
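The equivalence between the antisymmetric 2-tensor and the vector in SO(3) is just the familiar cross-product duality, and the map is invertible, as this NumPy sketch checks:

```python
import numpy as np

# 3-dim Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

v = np.array([1.0, 2.0, 3.0])
A = np.einsum('ijk,k->ij', eps, v)   # A_ij = eps_ijk v_k, antisymmetric
assert np.allclose(A, -A.T)

# The map inverts: v_k = (1/2) eps_ijk A_ij recovers the vector,
# so the antisymmetric 2-tensor is the same j = 1 irrep as the vector.
v_back = 0.5 * np.einsum('ijk,ij->k', eps, A)
assert np.allclose(v_back, v)
```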

SO(4) is isomorphic to SU(2)*SU(2), and we denote its states by a pair of angular momenta (j1,j2). A scalar is (0,0), a vector is (1/2,1/2), the two spinors are (1/2,0) and (0,1/2), and a symmetric traceless 2-tensor is (1,1). An antisymmetric 2-tensor breaks down into self-dual and anti-self-dual parts: (1,0) + (0,1).
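The self-dual/anti-self-dual split of the SO(4) antisymmetric 2-tensor can also be checked numerically. In Euclidean signature the duality map squares to the identity, so its +1 and -1 eigenspaces split the 6 components into 3 + 3, i.e. (1,0) + (0,1). A sketch with NumPy:

```python
import itertools
import numpy as np

# Levi-Civita symbol in 4 dimensions, built from permutation parity.
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sign = -sign
    eps[perm] = sign

def dual(F):
    """(dual F)_ij = (1/2) eps_ijkl F_kl, for antisymmetric F."""
    return 0.5 * np.einsum('ijkl,kl->ij', eps, F)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
F = A - A.T                      # a random antisymmetric 2-tensor

# Duality squares to the identity, so its eigenvalues are +-1.
assert np.allclose(dual(dual(F)), F)

plus = 0.5 * (F + dual(F))       # self-dual part:      (1,0)
minus = 0.5 * (F - dual(F))      # anti-self-dual part: (0,1)
assert np.allclose(dual(plus), plus)
assert np.allclose(dual(minus), -minus)
```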

For SO(5) and higher, the antisymmetric 2-tensor is irreducible.

It's also interesting to look at how the Riemann tensor breaks down. In SU(n), it's given by the Young diagram
@@
@@
with dimension n^2(n^2-1)/12

Some simpler ones:
Vector: n
@
Symmetric 2-tensor: n(n+1)/2
@@
Antisymmetric 2-tensor: n(n-1)/2
@
@

For SO(n):

SO(2): (0) -- the Ricci scalar
SO(3): (2) + (0) -- the Ricci tensor (symmetric 2-tensor)
SO(4): (2,0) + (0,2) + (1,1) + (0,0) -- the first two form the Weyl tensor, (1,1) is the traceless Ricci tensor, and (0,0) is the Ricci scalar