General Irreducible Representation of Lorentz Group

SUMMARY

The discussion focuses on the general irreducible representation of the Lorentz group, specifically how Lorentz transformations can be expressed through matrix exponentiation of the Lorentz generators. The key formula is ##M(\Lambda)=\exp(\frac{i}{2}\omega_{\mu\nu}J^{\mu\nu})##, where ##J^{\mu\nu}## are the Lorentz generators. The discussion highlights the decomposition of the Lorentz transformation into independent factors ##M^I## and ##M^D##, which correspond to spins ##j_I## and ##j_D##. The challenge arises in ensuring the matrix dimensions align correctly when these factors act on vectors represented as rectangular matrices.

PREREQUISITES
  • Understanding of Lorentz transformations and their mathematical representation.
  • Familiarity with SU(2) algebra and its application in quantum mechanics.
  • Knowledge of matrix exponentiation and the Baker-Campbell-Hausdorff (BCH) theorem.
  • Basic concepts of representation theory in the context of group theory.
NEXT STEPS
  • Study the application of the Baker-Campbell-Hausdorff theorem in quantum mechanics.
  • Explore the representation theory of the Lorentz group in more detail.
  • Learn about the implications of irreducible representations in particle physics.
  • Investigate the mathematical structure of SU(2) and its role in quantum field theory.
USEFUL FOR

Physicists, mathematicians, and students studying quantum mechanics, representation theory, or the mathematical foundations of particle physics will benefit from this discussion.

CharlieCW
This one may seem a bit long but essentially the problem reduces to some matrix calculations. You may skip the background if you're familiar with Lorentz representations.

1. Homework Statement


A Lorentz transformation can be represented by the matrix ##M(\Lambda)=\exp(\frac{i}{2}\omega_{\mu\nu}J^{\mu\nu})##, where the ##J^{\mu\nu}## are the 6 independent Lorentz generators, which satisfy the Lorentz commutator algebra. From these generators we can express both boosts ##K^i=J^{0i}## and rotations ##J^i=\epsilon^{ijk}J^{jk}/2## (here ##i,j=1,2,3##, while ##\mu,\nu=0,1,2,3##).

In particular, we can form two independent linear combinations:

$$\vec{J_I}=\frac{1}{2}(\vec{J}+i\vec{K}) \ \ \ \vec{J_D}=\frac{1}{2}(\vec{J}-i\vec{K})$$

These each satisfy the SU(2) algebra (i.e. ##[J^i_{I,D},J^j_{I,D}]=i\epsilon^{ijk}J^k_{I,D}##), and the two sets commute with each other.

This is extremely useful, as we can build any Lorentz representation knowing only how to represent SU(2). We know how to do this from QM courses: we can build the SU(2) matrices ##\vec{J}^{[j]}## of dimension ##(2j+1)## for each spin ##j=0,1/2,1,\ldots## (i.e. for ##j=0##, ##\vec{J}^{[0]}=1##; for ##j=1/2##, ##\vec{J}^{[1/2]}=\vec{\sigma}/2##; and so on).
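As a quick sanity check of this construction, here is a minimal numpy sketch that builds the spin-##j## matrices from the ladder operators and verifies the SU(2) algebra (the function name `spin_matrices` is my own, not from the exercise):

```python
import numpy as np

def spin_matrices(j):
    """Spin-j angular momentum matrices (Jx, Jy, Jz) of dimension 2j+1,
    built from the ladder operators in the |j, m> basis, m = j, ..., -j."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                  # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)) with m = m[k+1]
        Jp[k, k + 1] = np.sqrt(j * (j + 1) - m[k + 1] * (m[k + 1] + 1))
    Jm = Jp.conj().T
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / (2 * 1j)
    return Jx, Jy, Jz

# j = 1/2 reproduces the Pauli matrices over 2
Jx, Jy, Jz = spin_matrices(0.5)
print(np.allclose(2 * Jx, [[0, 1], [1, 0]]))       # True
# the su(2) algebra [J^i, J^j] = i eps^{ijk} J^k (no extra factor of 1/2)
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))     # True
```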

From the above, we can then express the rotation generators as ##\vec{J}=\vec{J_I}+\vec{J_D}##. However, ##\vec{J_I}## and ##\vec{J_D}## can carry different spins ##j_I## and ##j_D##, in general ##j_I\neq j_D##, so the two matrices will have different dimensions. We can fix this by taking "reducible" representations in which ##\vec{J_I}## has the irreducible blocks ##\vec{J}^{[j_I]}## repeated ##(2j_D+1)## times on its diagonal, and similarly for ##\vec{J_D}##. Now both matrices have dimension ##(2j_I+1)(2j_D+1)## and can be written, with ##l=(l_I,l_D)##, as:

$$(\vec{J_I}_{l'l})=\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D} \ \ \ (\vec{J_D}_{l'l})=\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}$$

Therefore, the rotation generators are ##\vec{J}=\vec{J_I}+\vec{J_D}##, so the Lorentz representation ##(j_I,j_D)## of dimension ##(2j_I+1)(2j_D+1)## can be in general reduced with respect to the subgroup of rotations, and includes the total spins ##|j_I-j_D|,...,j_I+j_D## obtained by combining spins ##j_I## and ##j_D##.
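The Kronecker deltas in the component formula above are exactly a Kronecker (tensor) product, which makes this easy to check numerically. A sketch for ##(j_I,j_D)=(1/2,1/2)##, verifying that the two sets commute and that the spectrum of ##\vec{J}^2## contains exactly the spins ##|j_I-j_D|,\ldots,j_I+j_D##:

```python
import numpy as np

# spin-1/2 generators J^{[1/2]} = sigma/2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# the component formulas with deltas are Kronecker products:
# J_I = J^{[j_I]} (x) 1,   J_D = 1 (x) J^{[j_D]}
JI = [np.kron(s, I2) for s in (sx, sy, sz)]
JD = [np.kron(I2, s) for s in (sx, sy, sz)]

# the two sets commute with each other
print(all(np.allclose(a @ b, b @ a) for a in JI for b in JD))   # True

# J = J_I + J_D; eigenvalues of J^2 are j(j+1), so spins 0 and 1
# appear for (j_I, j_D) = (1/2, 1/2): eigenvalues {0, 2, 2, 2}
J = [a + b for a, b in zip(JI, JD)]
J2 = sum(Ji @ Ji for Ji in J)
print(np.allclose(np.sort(np.linalg.eigvalsh(J2)), [0, 2, 2, 2]))  # True
```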

Now, after all this separation, we can rebuild the Lorentz transformation ##M(\Lambda)## in terms of ##J_I## and ##J_D## (it's more convenient to write ##\Lambda=\exp[i\vec{\theta}\cdot\vec{J}+i\vec{\alpha}\cdot\vec{K}]##).

Question: Show that ##M(\Lambda)## decomposes into a product:

$$M_{l'l}=M^I_{l'_Il_I}M^D_{l'_Dl_D}$$

(I include the next one just for context, as this raises some problems with my results as I'll show later)

After finding the above expressions for the matrices, consider the following:

The vectors ##\phi## on which the matrices ##M## act have components ##\phi_l=\phi(l_I,l_D)##, so they can be thought of as rectangular ##(2j_I+1)\times(2j_D+1)## matrices. Show that these "vectors" transform as ##\phi\rightarrow M^I\phi (M^D)^T##.

Homework Equations



$$\vec{J_I}=\frac{1}{2}(\vec{J}+i\vec{K}) \ \ \ \vec{J_D}=\frac{1}{2}(\vec{J}-i\vec{K})$$

$$(\vec{J_I}_{l'l})=\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D} \ \ \ (\vec{J_D}_{l'l})=\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}$$

$$\vec{J}=\vec{J_I}+\vec{J_D}$$

$$\vec{K}=i(\vec{J_I}-\vec{J_D})$$

$$e^{A+B}=e^Ae^B \ \ \ \ \text{if} \ \ \ \ [A,B]=0$$
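This commuting-exponential identity is the workhorse of the whole problem, so it is worth checking numerically. A sketch using a truncated Taylor series for the matrix exponential (so the snippet stays self-contained; `scipy.linalg.expm` would do the same job), with ##A=X\otimes 1## and ##B=1\otimes Y##, which always commute:

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# A = X (x) 1 and B = 1 (x) Y commute, so e^{A+B} = e^A e^B holds exactly
A = np.kron(X, np.eye(3))
B = np.kron(np.eye(2), Y)
print(np.allclose(expm_series(A + B), expm_series(A) @ expm_series(B)))  # True
```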

The Attempt at a Solution



Let's now focus on the main question, since the second problem is trivial.

I began by replacing ##\vec{J}=\vec{J_I}+\vec{J_D}## and ##\vec{K}=i(\vec{J_I}-\vec{J_D})## to separate the exponential into two parts:

$$M(\Lambda)=\exp[i\vec{\theta}\cdot\vec{J}+i\vec{\alpha}\cdot\vec{K}]=\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})+\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})]$$

Since by definition ##J_I## and ##J_D## commute, we can indeed separate them (by the BCH theorem):

$$M(\Lambda)=\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})]\exp[\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})]$$

Substituting the definitions of ##J_I## and ##J_D## in terms of ##J^{[j_I]}## and ##J^{[j_D]}##, we get:

$$M(\Lambda)=\exp[\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D}\cdot(i\vec{\theta}-\vec{\alpha})]\exp[\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}\cdot(i\vec{\theta}+\vec{\alpha})]$$

And defining:

$$M^I_{l'_Il_I}=\exp[\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D}\cdot(i\vec{\theta}-\vec{\alpha})] \ \ \ \ ; \ \ \ \ M^D_{l'_Dl_D}=\exp[\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}\cdot(i\vec{\theta}+\vec{\alpha})]$$

It seems the problem is complete, and both ##M^I_{l'_Il_I}## and ##M^D_{l'_Dl_D}## are square matrices of dimension ##(2j_I+1)(2j_D+1)##.
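One can check numerically that the big matrix built with the deltas is exactly the Kronecker product of the two small ##(2j+1)##-dimensional factors built without them. A sketch for ##(j_I,j_D)=(1/2,1/2)## with arbitrary parameters ##\vec{\theta},\vec{\alpha}## (the series-based `expm_series` is my own helper, standing in for `scipy.linalg.expm`):

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# spin-1/2 generators for both factors: (j_I, j_D) = (1/2, 1/2)
Jhalf = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
         np.array([[0, -1j], [1j, 0]]) / 2,
         np.array([[1, 0], [0, -1]], dtype=complex) / 2]
I2 = np.eye(2)

theta = np.array([0.3, -0.1, 0.7])   # arbitrary rotation parameters
alpha = np.array([0.2, 0.5, -0.4])   # arbitrary boost parameters

# big 4x4 generators WITH the Kronecker deltas (as Kronecker products)
JI = [np.kron(J, I2) for J in Jhalf]
JD = [np.kron(I2, J) for J in Jhalf]
exponent = (sum((1j * t - a) * A for t, a, A in zip(theta, alpha, JI))
            + sum((1j * t + a) * B for t, a, B in zip(theta, alpha, JD)))
M = expm_series(exponent)

# small (2j+1)-dimensional factors WITHOUT the deltas
MI = expm_series(sum((1j * t - a) * J for t, a, J in zip(theta, alpha, Jhalf)))
MD = expm_series(sum((1j * t + a) * J for t, a, J in zip(theta, alpha, Jhalf)))

# M_{l'l} = (M^I)_{l'_I l_I} (M^D)_{l'_D l_D}, i.e. M = M^I (x) M^D
print(np.allclose(M, np.kron(MI, MD)))  # True
```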

However, in a later exercise where I use these matrices, for it to be solvable I need ##M^I_{l'_Il_I}## and ##M^D_{l'_Dl_D}## to instead have dimensions ##(2j_I+1)## and ##(2j_D+1)## respectively! So either my definitions are incorrect and the matrices have the wrong dimensions, or the other exercise is wrong (or I'm interpreting it incorrectly).

In other words, since I calculated that ##M^I## is a square matrix of size ##(2j_I+1)(2j_D+1)## and ##\phi## is a rectangular matrix of size ##(2j_I+1)\times(2j_D+1)##, we can't even form the first product ##M^I\phi##, as the matrix sizes don't match up.

Do you have any idea, or know where I can find a similar development in another source? (I've looked everywhere, but only Weinberg has a brief page with little more than the definitions.) I appreciate any suggestions.

*Reference: S. Weinberg (1995), The Quantum Theory of Fields, Vol. I, p. 229*
 
Your equation for the ##M##'s is certainly correct, but it refers to matrices ##M## operating on a ##(2j_I+1)(2j_D+1)##-dimensional vector. The question is: if you rearrange the components of the vector into a rectangular matrix, how do you have to modify the matrices ##M##? The obvious answer is that you have to leave out the Kronecker deltas in the exponent.
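This resolves the dimension mismatch through the standard vec/Kronecker identity: the big matrix ##M^I\otimes M^D## acting on the flat vector ##\phi_l## equals the small delta-free factors acting on ##\phi## reshaped into a rectangle, ##\phi\rightarrow M^I\phi(M^D)^T##. A quick numpy check (row-major reshape matches the index ordering ##l=(l_I,l_D)##):

```python
import numpy as np

rng = np.random.default_rng(1)
dI, dD = 2, 3                         # (2 j_I + 1) and (2 j_D + 1)
MI = rng.normal(size=(dI, dI)) + 1j * rng.normal(size=(dI, dI))
MD = rng.normal(size=(dD, dD)) + 1j * rng.normal(size=(dD, dD))
phi_vec = rng.normal(size=dI * dD)    # phi as a flat (2j_I+1)(2j_D+1) vector

# big matrix (with the Kronecker deltas) acting on the flat vector ...
big = np.kron(MI, MD) @ phi_vec

# ... equals the delta-free factors acting on phi reshaped into a
# (2j_I+1) x (2j_D+1) rectangular matrix: phi -> M^I phi (M^D)^T
Phi = phi_vec.reshape(dI, dD)         # row-major: flat index l = (l_I, l_D)
small = MI @ Phi @ MD.T

print(np.allclose(big, small.flatten()))  # True
```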
 
