Discussing the mathematical formalism of generators (Lorentz Group)

In summary, the Lorentz group is the set of rotations and boosts that satisfy the Lorentz condition ##\Lambda^T g \Lambda = g##. Representations are embeddings of the group elements in operators, usually in the form of matrices. The Lorentz group includes rotations around the ##x,y## and ##z## axes and boosts in the ##x,y## and ##z## directions, which can be represented by specific matrices. The group itself must be independent of any particular representation, and its elements can be expressed in terms of generators ##J_i## and ##K_i## via the exponential of a sum of these matrices. The composition of two boosts is, in general, a boost composed with a rotation, which is why rotations and boosts must be mixed in the algebra.
  • #1
JD_PM
TL;DR Summary
I would like to understand and discuss the mathematical formalism of generators (Lorentz Group)
I learned that the Lorentz group is the set of rotations and boosts that satisfy the Lorentz condition ##\Lambda^T g \Lambda = g##

I recently learned that a representation is the embedding of the group element(s) in operators (usually matrices).

Examples of Lorentz transformations are rotations around the ##x,y## and ##z## axes and boosts in the ##x,y## and ##z## directions. These matrices are thus a particular representation of the Lorentz group.

I've read that the Lorentz group must be independent of any particular representation. Conceptually I understand this should be the case: the representation is simply taking some elements of the whole group.

However, what is not that clear to me is how to show this mathematically.

The book I am reading goes like this: take infinitesimal (Lorentz) transformations and write them in terms of infinitesimal angles ##\theta_i## and boost parameters ##\beta_i##

$$\delta X_0 = \beta_i X_i \tag{10.11}$$

$$\delta X_i = \beta_i X_0-\epsilon_{ijk} \theta_j X_k \tag{10.12}$$

Where ##\epsilon_{ijk}## is the totally antisymmetric tensor. Equations ##(10.11)## and ##(10.12)## can be put together as follows

$$\delta X_{\mu} = i \Big[ \theta_i (J^i)_{\mu \nu} + \beta_i(K^i)_{\mu \nu} \Big] X_{\nu} \tag{10.13}$$

Where I've used Einstein's summation convention.

[Image: the explicit ##4 \times 4## generator matrices ##J_i## and ##K_i##, Schwartz Eqs. ##(10.14)## and ##(10.15)##.]
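(For readers without the book: two representative generators, reconstructed from ##(10.13)## and consistent with the matrices derived later in this thread, are

$$J_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad K_1 = \begin{pmatrix} 0 & -i & 0 & 0 \\ -i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

with the remaining four generators following the same pattern.)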


The above matrices are the generators of the Lorentz group. They are labelled as such because any element of the Lorentz group can be uniquely written as

$$\Lambda = \exp(i \theta_i J_i + i \beta_i K_i) \tag{10.16}$$

I'd like to understand where ##(10.11)## and ##(10.12)## come from. Once I do, I may be able to get ##(10.16)##. The only thing I am certain about is the meaning of ##\beta##; it is the Lorentz factor.

Thank you :smile:

Source:

M. D. Schwartz, Quantum Field Theory and the Standard Model, Cambridge University Press, Cambridge, New York (2014).

Section: 10.1.1 Group Theory.
 
  • #2
Here's my potted analysis of this. We start with boosts and rotations, which can be expressed in matrix form in the usual way. The rotations form a subgroup, as the composition of any two rotations is just another rotation. But, the composition of two boosts is, in general, a boost composed with a rotation. That's why we need to mix the two.

This can also be expressed in terms of closed commutation relations: the commutator of any two of the Lorentz matrices is another Lorentz matrix.

We can generate any one of the rotations or boosts by an exponential of one of the ##J## or ##K## matrices. (I think you might have asked about this before!). You can check that ##\exp(i\theta J_3)## is indeed the matrix for a rotation of ##\theta## about the z-axis etc. So, any rotation or boost can be expressed as the exponential of a ##J## or ##K## matrix.
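For example, here is a minimal sympy sketch of that check, assuming the convention ##\Lambda = \exp(i\theta J_3)## and the explicit ##J_3## that appears later in this thread:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Generator of rotations about the z-axis, in the convention Lambda = exp(i*theta*J3)
# (this J3 is the matrix worked out later in the thread)
J3 = sp.I * sp.Matrix([
    [0, 0,  0, 0],
    [0, 0, -1, 0],
    [0, 1,  0, 0],
    [0, 0,  0, 0],
])

Lam = sp.simplify((sp.I * theta * J3).exp())
sp.pprint(Lam)  # expect the identity on t and z, and a rotation by theta in the x-y block
```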

Now, the Baker-Campbell-Hausdorff formula tells you that the composition of two matrix exponentials is an exponential involving a sum of the matrices and a sequence of their commutators. And, as these commutators are (sums of) rotations and boosts, then every Lorentz Transformation can be expressed in the form of ##(10.16)##. Therefore, ##(10.16)## is an alternative way to represent all Lorentz Transformations - in terms of the generators ##J_i## and ##K_i##.

That process holds generally whenever you have closed commutation relations. That's what allows you to represent the group in terms of an exponential of a sum of generators.

I can't say I recognise or understand ##(10.11), (10.12), (10.13)##. But, perhaps the above explanation is useful.
 
  • #3
PeroK said:
But, the composition of two boosts is, in general, a boost composed with a rotation. That's why we need to mix the two.

Oh I did not know about this! Let's try it out. Let's multiply the representation of the boost in the ##x##-direction and the representation of the boost in the ##y##-direction and see what we get

$$
\begin{pmatrix}
\cosh(\beta) & \sinh(\beta) & 0 & 0 \\
\sinh(\beta) & \cosh(\beta) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
\cosh(\beta) & 0 & \sinh(\beta) & 0 \\
0 & 1 & 0 & 0 \\
\sinh(\beta) & 0 & \cosh(\beta) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}=
\begin{pmatrix}
\cosh^2(\beta) & \sinh(\beta) & \cosh(\beta)\sinh(\beta) & 0 \\
\sinh(\beta) \cosh(\beta) & \cosh(\beta) & \sinh^2(\beta) & 0 \\
\sinh(\beta) & 0 & \cosh(\beta) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

Mmm the resultant matrix corresponds to none of the rotation matrices I am aware of (i.e. ##x,y,z## rotation matrices basically). I must be missing something here.

PeroK said:
This can also be expressed in terms of closed commutation relations: the commutator of any two of the Lorentz matrices is another Lorentz matrix.

OK I understand this. This is because one of the conditions to be a group is this: it must be closed under multiplication (I recall now we checked the group criterion months ago ;)).

PeroK said:
We can generate any one of the rotations or boosts by an exponential of one of the ##J## or ##K## matrices. (I think you might have asked about this before!).

My bad, I've asked this before indeed o_O. The point is that I know how to use it but I do not understand why it works. I am using a beautiful machine that I'd like to learn how to construct. Would you like to give a really simple example/explain how to build it from scratch?

PeroK said:
Now, the Baker-Campbell-Hausdorff formula tells you that the composition of two matrix exponentials is an exponential involving a sum of the matrices and a sequence of their commutators. And, as these commutators are (sums of) rotations and boosts, then every Lorentz Transformation can be expressed in the form of ##(10.16)##. Therefore, ##(10.16)## is an alternative way to represent all Lorentz Transformations - in terms of the generators ##J_i## and ##K_i##.

That process holds generally whenever you have closed commutation relations. That's what allows you to represent the group in terms of an exponential of a sum of generators.

I can't say I recognise or understand ##(10.11), (10.12), (10.13)##. But, perhaps the above explanation is useful.

I'd better study the Baker-Campbell-Hausdorff formula right now before replying.
 
  • #4
Composing two boosts in general gives a boost composed with a rotation; not just a rotation. This is sometimes called the Wigner rotation.
 
  • #5
JD_PM said:
Let's multiply the representation of the boost in the ##x##-direction and the representation of the boost in the ##y##-direction

Try computing their commutator, i.e., ##x y - y x##. The composition you've computed is, as @PeroK says, a boost plus a rotation, but taking the commutator has the effect of subtracting out the "boost" part of the composition, leaving only the rotation. (Something you might want to think about before doing the computation: what axis would you expect this rotation to be about?)

The general term for the rotation that you end up with by computing the commutator of two boosts is "Thomas precession".
 
  • #6
PeroK said:
Composing two boosts in general gives a boost composed with a rotation; not just a rotation. This is sometimes called the Wigner rotation.

Ahhh my bad, I did not read carefully enough! Identifying that the resultant matrix is the product of a boost and a rotation is something I do not see straight away, though.
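One explicit way to see the split, as a sketch: for a proper orthochronous ##\Lambda##, the polar decomposition ##\Lambda = BR##, with ##B = \sqrt{\Lambda \Lambda^T}## (symmetric, hence a pure boost) and ##R = B^{-1}\Lambda## (orthogonal, hence a rotation), separates the two factors. A minimal numpy/scipy sketch, assuming the two boost matrices from post #3:

```python
import numpy as np
from scipy.linalg import sqrtm

def boost_x(eta):
    # pure boost along x, rapidity eta
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(eta)
    L[0, 1] = L[1, 0] = np.sinh(eta)
    return L

def boost_y(eta):
    # pure boost along y, rapidity eta
    L = np.eye(4)
    L[0, 0] = L[2, 2] = np.cosh(eta)
    L[0, 2] = L[2, 0] = np.sinh(eta)
    return L

Lam = boost_x(0.5) @ boost_y(0.5)   # composition of two boosts
B = sqrtm(Lam @ Lam.T).real         # symmetric factor: a pure boost
R = np.linalg.inv(B) @ Lam          # leftover factor
print(np.round(R, 6))               # orthogonal: a rotation in the x-y plane (the Wigner rotation)
```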
 
  • #7
PeterDonis said:
Try computing their commutator, i.e., ##x y - y x##. The composition you've computed is, as @PeroK says, a boost plus a rotation, but taking the commutator has the effect of subtracting out the "boost" part of the composition, leaving only the rotation. (Something you might want to think about before doing the computation: what axis would you expect this rotation to be about?)

Hi PeterDonis.

This is simply a guess but I'd say the ##z##-axis. This is because the commutation relation expression you present is precisely the one we get out of a system where the angular momentum about the ##z##-axis is conserved.
 
  • #8
JD_PM said:
This is simply a guess but I'd say the ##z##-axis.

Right!

Perhaps that might make it easier to do the actual computation. I think it was John Wheeler who said that whenever you are doing a calculation in physics, it helps to know the answer in advance. :wink:
 
  • #9
PeterDonis said:
Right!

Perhaps that might make it easier to do the actual computation. I think it was John Wheeler who said that whenever you are doing a calculation in physics, it helps to know the answer in advance. :wink:

I completely agree! It's better to first guess! I'd say the quote belongs to the great Richard Feynman though! :cool:

Now time for me to compute it explicitly.
 
  • #10
Let's multiply the representation of the boost in the ##y##-direction and the representation of the boost in the ##x##-direction and see what we get

$$
\begin{pmatrix}
\cosh(\beta) & 0 & \sinh(\beta) & 0 \\
0 & 1 & 0 & 0 \\
\sinh(\beta) & 0 & \cosh(\beta) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
\cosh(\beta) & \sinh(\beta) & 0 & 0 \\
\sinh(\beta) & \cosh(\beta) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}=
\begin{pmatrix}
\cosh^2(\beta) & \cosh(\beta)\sinh(\beta) & \sinh(\beta) & 0 \\
\sinh(\beta) & \cosh(\beta) & 0 & 0 \\
\sinh(\beta)\cosh(\beta) & \sinh^2(\beta) & \cosh(\beta) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

Thus ##[x,y]## yields

$$
\begin{pmatrix}
0 & \sinh(\beta)-\cosh(\beta)\sinh(\beta) & \cosh(\beta)\sinh(\beta) - \sinh(\beta) & 0 \\
\cosh(\beta)\sinh(\beta) - \sinh(\beta) & 0 & \sinh^2(\beta) & 0 \\
\sinh(\beta)-\cosh(\beta)\sinh(\beta) & -\sinh^2(\beta) & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$

Which, of course, must be equal to the z-rotation matrix

$$
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos(\theta) & \sin(\theta) & 0 \\
0 & -\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

I am trying to show this based on the hyperbolic trig relations ##\sinh(x)=(e^x-e^{-x})/2## and ##\cosh(x)=(e^x+e^{-x})/2##. But I am not getting it. Is this the right approach though? I know this may be tedious for you all.
 
  • #11
PeroK said:
Now, the Baker-Campbell-Hausdorff formula tells you that the composition of two matrix exponentials is an exponential involving a sum of the matrices and a sequence of their commutators. And, as these commutators are (sums of) rotations and boosts, then every Lorentz Transformation can be expressed in the form of ##(10.16)##. Therefore, ##(10.16)## is an alternative way to represent all Lorentz Transformations - in terms of the generators ##J_i## and ##K_i##.

That process holds generally whenever you have closed commutation relations. That's what allows you to represent the group in terms of an exponential of a sum of generators.

Oh I see, so in this case we have (let's drop ##\theta## and ##\beta## for now)

$$\exp(J)\exp(K) = \exp(Z)$$

Where

$$Z = J + K + \frac 1 2 [J, K]+ \frac{1}{12} [J, [J,K]]-\frac{1}{12} [K, [J,K]] + ...$$
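To make that concrete, here is a small numerical sketch of the truncated series, assuming scipy and the explicit ##J_3##, ##K_1## matrices that appear later in this thread (the error of a second-order truncation should shrink like the cube of the small parameter):

```python
import numpy as np
from scipy.linalg import expm

# Explicit generators in the convention used in this thread
J3 = 1j * np.array([[0, 0,  0, 0],
                    [0, 0, -1, 0],
                    [0, 1,  0, 0],
                    [0, 0,  0, 0]])
K1 = 1j * np.array([[ 0, -1, 0, 0],
                    [-1,  0, 0, 0],
                    [ 0,  0, 0, 0],
                    [ 0,  0, 0, 0]])

def comm(A, B):
    return A @ B - B @ A

for eps in (1e-1, 1e-2, 1e-3):
    A, B = eps * 1j * J3, eps * 1j * K1   # Lambda = exp(i * parameter * generator)
    lhs = expm(A) @ expm(B)
    rhs = expm(A + B + 0.5 * comm(A, B))  # BCH series truncated at second order
    print(eps, np.abs(lhs - rhs).max())   # error shrinks like eps**3
```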

This is going to help me understand ##(10.16)##, thank you @PeroK !

PeroK said:
I can't say I recognise or understand ##(10.11), (10.12), (10.13)##. But, perhaps the above explanation is useful.

It definitely is! Let's see if someone would like to shed some light on ##(10.11), (10.12), (10.13)## :smile:
 
  • #12
JD_PM said:
Let's multiply the representation of the boost in the ##x##-direction and the representation of the boost in the ##y##-direction

You need to compute the commutator of the generators, not the commutator of arbitrary boost matrices. Sorry, I should have made that clearer before. In other words, you need to compute ##[K_1, K_2]##. You should find that it is equal to ##i J_3##.
 
  • #13
JD_PM said:
Let's see if someone would like to shed some light on ##(10.11), (10.12), (10.13)## :smile:
Write out the Lorentz boost transformation as a transformation of the ##X^0## and ##X^i## coordinates. (I.e., treat ##X^0## as time ##t##, and just focus on a boost in the particular spatial direction ##x##.) You should get a formula like ##t'(\beta) = ?? t + ??x## and similarly for ##x'##, where ##\beta## is the boost parameter and the "??" are functions of ##\beta##.

Then consider the case of infinitesimal ##\beta##. I.e., what do the transformation equations look like if we discard terms of order ##O(\beta^2)## and higher?

Hint: express the transformation equations as 2 Taylor series in ##\beta##, but only keep the terms up to ##O(\beta)##.

Then do a similar exercise for a rotation with parameter ##\theta##.
 
  • #14
PeterDonis said:
You need to compute the commutator of the generators, not the commutator of arbitrary boost matrices. Sorry, I should have made that clearer before. In other words, you need to compute ##[K_1, K_2]##. You should find that it is equal to ##i J_3##.

Alright, I see it now thanks! Let's work it out (I'd say you missed the minus sign :) )

$$
[K_1, K_2] =
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}=-i J_3
$$

Let's test this commutator further. What if we compute ##[K_2, K_1]##? We expect to get the same answer but with opposite sign. And that is indeed the case; ##[K_2, K_1]=i J_3##. OK, but what if we now want to compute (for instance) ##[K_3, K_1]##? Here we have ##zx-xz##, so we could guess that we're going to get the rotation representation in the ##y##-direction. And that is indeed what we get: ##[K_3, K_1]=-i J_2##. The fact that this commutator has the property ##[a,b]=-[b,a]## suggests we should introduce the totally antisymmetric tensor ##\epsilon_{ijk}## in our most general result (which is in this beautiful book!).

$$[K_i, K_j] = -i \epsilon_{ijk} J_k \tag{10.17}$$
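For anyone who wants to double-check all nine cases of ##(10.17)## at once, here is a minimal numpy sketch, assuming the explicit generator matrices worked out in this thread:

```python
import numpy as np

K = [1j * np.array(M) for M in (
    [[0, -1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],  # K1
    [[0, 0, -1, 0], [0, 0, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 0]],  # K2
    [[0, 0, 0, -1], [0, 0, 0, 0], [0, 0, 0, 0], [-1, 0, 0, 0]],  # K3
)]
J = [1j * np.array(M) for M in (
    [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]],   # J1
    [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]],   # J2
    [[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]],   # J3
)]

def eps(i, j, k):
    # totally antisymmetric symbol with eps(0, 1, 2) = +1
    return (i - j) * (j - k) * (k - i) / 2

for a in range(3):
    for b in range(3):
        lhs = K[a] @ K[b] - K[b] @ K[a]
        rhs = sum(-1j * eps(a, b, c) * J[c] for c in range(3))
        assert np.allclose(lhs, rhs)  # [K_i, K_j] = -i eps_{ijk} J_k
print("(10.17) checked for all i, j")
```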

 
  • #15
Hi strangerep! This discussion was motivated by getting stuck here. (I appreciate your patience! 😅)

strangerep said:
Write out the Lorentz boost transformation as a transformation of the ##X^0## and ##X^i## coordinates. (I.e., treat ##X^0## as time ##t##, and just focus on a boost in the particular spatial direction ##x##.) You should get a formula like ##t'(\beta) = ?? t + ??x## and similarly for ##x'##, where ##\beta## is the boost parameter and the "??" are functions of ##\beta##.

OK. You're asking me to write the Lorentz Transformations in the ##x##-direction (sorry if this is not the standard notation).

$$X'^0 = \beta X^0+\gamma \beta X^1$$

$$X'^1=\beta X^1 + \gamma \beta X^0$$

$$X'^2=X^2$$

$$X'^3=X^3$$

Where ##\beta## is the Lorentz factor and ##\gamma=v/c##.

The matrix representation of the Lorentz transformation is

$$
\Lambda =
\begin{pmatrix}
\beta & \beta \gamma & 0 & 0 \\
\beta \gamma & \beta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

strangerep said:
Then consider the case of infinitesimal ##\beta##. I.e., what do the transformation equations look like if we discard terms of order ##O(\beta^2)## and higher?

Hint: express the transformation equations as 2 Taylor series in ##\beta##, but only keep the terms up to ##O(\beta)##.

Mmm why two Taylor series expansions? I'd say that we only need the series expansion of the exponential. Dropping terms of order ##O(\beta^2)## and higher we get

$$\Lambda^{\mu}_{ \ \ \nu}(\epsilon)=(\exp(\epsilon))^{\mu}_{ \ \ \nu} = \delta^{\mu}_{\nu} + \epsilon^{\mu}_{ \ \ \nu}$$

EDIT: Of course the exponential Taylor series equals the sum of the hyperbolic-sine and hyperbolic-cosine Taylor expansions. Yikes! 😅 However, I still do not see how this leads to ##(10.11), (10.12), (10.13)##. Working on it.
 
  • #17
OK. I understand that ##(10.11)## applies to boosts and ##(10.12)## to rotations.

What I am trying to show (let's use ##\beta## instead of ##\epsilon##) now is that, out of ##\Lambda^{\mu}_{ \ \ \nu}(\beta)=(\exp(\beta))^{\mu}_{ \ \ \nu} = \delta^{\mu}_{\nu} + \beta^{\mu}_{ \ \ \nu}##, we can get ##(10.11)## and ##(10.12)##. That is (I think) what @strangerep suggested.

We're going to work with

$$X'^{\mu} = \Lambda^{\mu}_{ \ \ \nu}(\beta) X^{\nu}$$

In matrix form that is

$$
\begin{pmatrix}
X'^0 \\
X'^1 \\
X'^2 \\
X'^3
\end{pmatrix}=
\begin{pmatrix}
\beta & \beta \gamma & 0 & 0 \\
\beta \gamma & \beta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
X^0 \\
X^1 \\
X^2 \\
X^3
\end{pmatrix}
$$

Let's deal with boosts. When ##\mu=0, \nu=1## I get ##X'^{0} = \Lambda^{0}_{ \ \ 1}(\beta) X^{1}=\beta^0_1 X^{1}=\gamma \beta X^{1}##. So far so good, as it matches ##(10.11)##. When ##\mu=0, \nu=2## I get ##X'^{0} = \Lambda^{0}_{ \ \ 2}(\beta) X^{2}=\beta^0_2 X^{2}=0##. Oh yeah, this looks good. When ##\mu=0, \nu=3## I get ##X'^{0} = \Lambda^{0}_{ \ \ 3}(\beta) X^{3}=\beta^0_3 X^{3}=0## (still thinking why getting zero twice should not bother us). We've just disentangled ##(10.11)## 😍.

Working on ##(10.12)## now 🤨

Regarding rotations I get ##X'^{1} = \Lambda^{1}_{ \ \ 0}(\beta) X^{0}=\gamma \beta X^{0}, X'^{2} = \Lambda^{2}_{ \ \ 0}(\beta) X^{0}=0, X'^{3} = \Lambda^{3}_{ \ \ 0}(\beta) X^{0}=0##. OK I am clearly missing a piece of the puzzle here: ##\theta X##.
 
  • #18
What you get is the Lie algebra of the proper orthochronous Lorentz group. You do not need to do the cumbersome work using matrices; the Ricci calculus will do. To that end, look at an infinitesimal general Lorentz transformation. The corresponding matrix is
$${\Lambda^{\mu}}_{\nu} = \delta_{\nu}^{\mu} + {\delta \omega^{\mu}}_{\nu}.$$
The condition that this is a Lorentz transformation is
$$\eta_{\mu \rho} {\Lambda^{\mu}}_{\nu} {\Lambda^{\rho}}_{\sigma}=\eta_{\nu \sigma}.$$
Plugging in the infinitesimal transformation you get the condition to 1st order in the ##\delta \omega##:
$$\delta \omega_{\mu \nu}=-\delta \omega_{\nu \mu}.$$
The three independent elements ##\mu=0##, ##\nu=j \in \{1,2,3 \}## are obviously infinitesimal "Minkowski rotations" in the ##0 \nu## plane, i.e., boosts, and the three independent elements ##\delta \omega_{ij}## with ##i,j \in \{1,2,3 \}## are infinitesimal rotations in the ##ij## plane.

With that you get in a straightforward way the three generators ##\vec{J}## and the three generators ##\vec{K}## (the additional factors ##\mathrm{i}## are simply convention, such that the ##\vec{J}## are Hermitian matrices rather than anti-Hermitian ones; that's nice in quantum theory, because then you have self-adjoint operators for the corresponding rotations, which form a compact (!) subgroup of the proper orthochronous Lorentz group, and as such all irreps. are equivalent to a unitary one). Note, however, that the ##\vec{K}## are anti-Hermitian, and thus you don't get a unitary representation of the entire Lorentz group in this way; that cannot be achieved (except for the trivial representation), because the Lorentz group is not compact. Nevertheless you get a Lie algebra:
$$[J_j,J_k]=\mathrm{i} \epsilon_{jkl} J_l, \quad [K_j,K_k]=-\mathrm{i} \epsilon_{jkl} J_l, \quad [J_j,K_k]=\mathrm{i} \epsilon_{jkl} K_l.$$
This shows that indeed the rotation algebra (spanned by the generators ##\vec{J}##) is a subalgebra of the Lie algebra, as expected, and that ##\vec{K}## is a vector operator with respect to rotations.

The trick to find all irreps. of the proper orthochronous Lorentz group (or, more completely for QT, of its covering group ##\mathrm{SL}(2,\mathbb{C})##) is to recognize that the generators
$$\vec{A}=\frac{1}{2}(\vec{J}+\mathrm{i} \vec{K}), \quad \vec{B}=\frac{1}{2}(\vec{J}-\mathrm{i} \vec{K})$$
show that this Lie algebra is equivalent to ##\mathrm{so}(3) \oplus \mathrm{so}(3)=\mathrm{su}(2) \oplus \mathrm{su}(2)##, because
$$[A_j,A_k]=\mathrm{i} \epsilon_{jkl} A_l, \quad [B_j,B_k]=\mathrm{i} \epsilon_{jkl} B_l, \quad [A_j,B_k]=0.$$
From this you get all irreps. of ##\mathrm{sl}(2, \mathbb{C})## by the irreps. of ##\mathrm{su}(2)##, i.e., the spinor representations with ##s_A,s_B \in \{0,1/2,1,3/2,\ldots \}## known from the angular-momentum representations treated in quantum mechanics.
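Filling in one step of that claim: the mixed commutator indeed vanishes, using nothing but the ##\vec{J}##, ##\vec{K}## commutators above,
$$[A_j,B_k]=\tfrac{1}{4}\Big([J_j,J_k]+[K_j,K_k]\Big)+\tfrac{\mathrm{i}}{4}\Big([K_j,J_k]-[J_j,K_k]\Big)=\tfrac{1}{4}\big(\mathrm{i}\epsilon_{jkl}J_l-\mathrm{i}\epsilon_{jkl}J_l\big)+\tfrac{\mathrm{i}}{4}\big(\mathrm{i}\epsilon_{jkl}K_l-\mathrm{i}\epsilon_{jkl}K_l\big)=0.$$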

For further details, see Appendix B in

https://itp.uni-frankfurt.de/~hees/publ/lect.pdf
 
  • #19
There's a pretty enlightening discussion of this (for me, at least) in Physics from Symmetry. YMMV.
 
  • #20
JD_PM said:
(I'd say you missed the minus sign :) )

Quite possibly, yes. Or I might have been thinking of a different sign convention than the one you and the book you reference are using. Differences in sign conventions and factors of ##i## abound in this domain.
 
  • #21
Hi vanhees71

vanhees71 said:
What you get is the Lie algebra of the proper orthochronous Lorentz group. You do not need to do the cumbersome work using matrices; the Ricci calculus will do. To that end, look at an infinitesimal general Lorentz transformation. The corresponding matrix is
$${\Lambda^{\mu}}_{\nu} = \delta_{\nu}^{\mu} + {\delta \omega^{\mu}}_{\nu}.$$

Ahhh thanks for this! I've just recalled I studied the Ricci-calculus approach from Tong's beautiful notes: https://www.damtp.cam.ac.uk/user/tong/qft/four.pdf !

The starting point, as you said, is writing the series expansion of the Lorentz Transformation (allow me to use ##\epsilon## to indicate infinitesimal :wink:)

$$\Lambda^{\mu}_{ \ \ \nu}(\beta)=(\exp(\beta))^{\mu}_{ \ \ \nu} = \delta^{\mu}_{\nu} +\epsilon \beta^{\mu}_{ \ \ \nu}$$

vanhees71 said:
The condition that this is a Lorentz transformation is
$$\eta_{\mu \rho} {\Lambda^{\mu}}_{\nu} {\Lambda^{\rho}}_{\sigma}=\eta_{\nu \sigma}.$$
Plugging in the infinitesimal transformation you get the condition to 1st order in the ##\delta \omega##:
$$\delta \omega_{\mu \nu}=-\delta \omega_{\nu \mu}.$$

Mmm this looks like a tasty snack!

Proof:

Using the Lorentz condition ##{\Lambda^{\mu}}_{\sigma} {\Lambda^{\nu}}_{\rho}\eta^{\sigma \rho}=\eta^{\mu \nu} ## we get

$$(\delta^{\mu}_{\sigma} +\epsilon \beta^{\mu}_{ \ \ \sigma})(\delta^{\nu}_{\rho} +\epsilon \beta^{\nu}_{ \ \ \rho})\eta^{\sigma \rho}$$$$=\delta^{\mu}_{\sigma}\delta^{\nu}_{\rho}\eta^{\sigma \rho} +\epsilon^2 \beta^{\mu}_{ \ \ \sigma} \beta^{\nu}_{ \ \ \rho}\eta^{\sigma \rho}+\epsilon \delta^{\mu}_{\sigma}\beta^{\nu}_{ \ \ \rho}\eta^{\sigma \rho} +\epsilon \delta^{\nu}_{\rho}\beta^{\mu}_{ \ \ \sigma}\eta^{\sigma \rho}$$

We discard the second-order term to get

$$\delta^{\mu}_{\sigma}\delta^{\nu}_{\rho}\eta^{\sigma \rho}+\epsilon(\delta^{\mu}_{\sigma}\beta^{\nu}_{ \ \ \rho}\eta^{\sigma \rho} + \delta^{\nu}_{\rho}\beta^{\mu}_{ \ \ \sigma}\eta^{\sigma \rho})=\eta^{\mu \nu} \Rightarrow $$ $$\Rightarrow \eta^{\mu \nu} + \epsilon(\beta^{\mu \nu}+\beta^{\nu \mu})=\eta^{\mu \nu} \Rightarrow$$

##\epsilon \neq 0## so

$$\Rightarrow \beta^{\mu \nu}=-\beta^{\nu \mu}$$

QED 😍
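The first-order statement can also be checked numerically; here is a small numpy sketch, assuming a random antisymmetric ##\omega_{\mu\nu}## (the defect ##\Lambda^T \eta \Lambda - \eta## should shrink quadratically with ##\epsilon##):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# random antisymmetric omega with lower indices, then raise the first index:
# omega^mu_nu = eta^{mu rho} omega_{rho nu}  (numerically eta^{-1} = eta here)
w = rng.normal(size=(4, 4))
w_lower = w - w.T
w_mixed = eta @ w_lower

for e in (1e-2, 1e-3, 1e-4):
    Lam = np.eye(4) + e * w_mixed
    defect = Lam.T @ eta @ Lam - eta   # vanishes to first order in e
    print(e, np.abs(defect).max())     # shrinks like e**2
```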
 
  • #22
Ahhh I see your point. As ##\beta## is antisymmetric, it is going to have precisely 6 independent components (which I indicate by *)

$$
\begin{pmatrix}
0 & * & * & * \\
.. & 0 & * & * \\
.. & .. & 0 & * \\
.. & .. & .. & 0
\end{pmatrix}
$$

These 6 independent components = 3 rotations + 3 boosts.

I agree this is much simpler and much less tedious.

However, I'd like to get the rotations through the matrix method :biggrin: Still missing the puzzle piece...🤨

vanhees71 said:
The trick to find all irreps. of the proper orthochronous Lorentz group (or, more completely for QT, of its covering group ##\mathrm{SL}(2,\mathbb{C})##) is to recognize that the generators
$$\vec{A}=\frac{1}{2}(\vec{J}+\mathrm{i} \vec{K}), \quad \vec{B}=\frac{1}{2}(\vec{J}-\mathrm{i} \vec{K})$$
show that this Lie algebra is equivalent to ##\mathrm{so}(3) \oplus \mathrm{so}(3)=\mathrm{su}(2) \oplus \mathrm{su}(2)##, because
$$[A_j,A_k]=\mathrm{i} \epsilon_{jkl} A_l, \quad [B_j,B_k]=\mathrm{i} \epsilon_{jkl} B_l, \quad [A_j,B_k]=0.$$
From this you get all irreps. of ##\mathrm{sl}(2, \mathbb{C})## by the irreps. of ##\mathrm{su}(2)##, i.e., the spinor representations with ##s_A,s_B \in \{0,1/2,1,3/2,\ldots \}## known from the angular-momentum representations treated in quantum mechanics.

Oops, I'll need more reading to understand this (I am 3/4 pages away).
 
  • #23
JD_PM said:
OK. You're asking me to write the Lorentz Transformations in the ##x##-direction (sorry if this is not the standard notation).
$$X'^0 = \beta X^0+\gamma \beta X^1 $$$$X'^1=\beta X^1 + \gamma \beta X^0$$$$X'^2=X^2$$$$X'^3=X^3$$ Where ##\beta## is the Lorentz factor and ##\gamma=v/c##.
[...]
It's more common to swap the meanings of ##\beta## and ##\gamma##.

JD_PM said:
Mmm why two Taylor series expansions? I'd say that we only need the series expansion of the exponential.
You're thinking of a Taylor expansion of the matrix. Try thinking of your previous coordinate equations as [with ##\beta := v/c## and ##\gamma := (1-\beta^2)^{-1/2}##]: $$X'^0(\beta) ~=~~ \gamma X^0+\gamma \beta X^1 ~,~~~~~~~~
X'^1(\beta) ~=~ \gamma X^1 + \gamma \beta X^0 ~.$$ When I said "2 Taylor series", I meant one to expand ##X'^0(\beta)## and the other to expand ##X'^1(\beta)##. (Both expansions are around ##\beta=0##).
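Carrying that out: since ##\gamma = (1-\beta^2)^{-1/2} = 1 + \tfrac{1}{2}\beta^2 + O(\beta^4)##, keeping terms up to ##O(\beta)## gives
$$X'^0 = X^0 + \beta X^1 + O(\beta^2)~, \qquad X'^1 = X^1 + \beta X^0 + O(\beta^2)~,$$
i.e. ##\delta X^0 = \beta X^1## and ##\delta X^1 = \beta X^0##, which is exactly the boost part of ##(10.11)## and ##(10.12)##.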
 
  • #24
JD_PM said:
Summary:: I would like to understand and discuss the mathematical formalism of generators (Lorentz Group)
[tex]\delta X_0 = \beta_i X_i \tag{10.11}[/tex]
[tex]\delta X_i = \beta_i X_0-\epsilon_{ijk} \theta_j X_k \tag{10.12}[/tex]
[tex]\bar{X}^{\rho} = \Lambda^{\rho}{}_{\sigma}X^{\sigma}.[/tex] Infinitesimally, [tex]\bar{X}^{\rho} = \left( \delta^{\rho}_{\sigma} + \omega^{\rho}{}_{\sigma}\right) X^{\sigma}.[/tex] Thus [tex]\delta X^{\rho} \equiv \bar{X}^{\rho} - X^{\rho} = \omega^{\rho}{}_{\sigma} \ X^{\sigma} .[/tex] Rewrite that as [tex]\delta X^{\rho} = \omega^{\rho \nu} \ \eta_{\nu \sigma} \ X^{\sigma} . \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)[/tex]

Now, if you take [itex]\rho = 0[/itex] and note that [itex]\omega^{00} = 0[/itex], you get [tex]\delta X^{0} = \omega^{0 i} \left( \eta_{i 0} X^{0} + \eta_{ij} X^{j}\right) .[/tex] Using [itex]\eta_{i 0} = 0[/itex] and [itex]\eta_{ij} = - \delta_{ij}[/itex], you get [tex]\delta X^{0} = - \omega^{0 i} X^{i} = \omega^{i 0} X^{i} .[/tex] Now (if you like) define [itex]\beta^{i} \equiv \omega^{i 0}[/itex], and rewrite [tex]\delta X^{0} = \beta^{i}X^{i} = \vec{\beta} \cdot \vec{X} .[/tex]

Now, go back to (1) and put [itex]\rho = i[/itex]. If you repeat what I have done, you should get [tex]\delta X^{i} = \beta^{i} X^{0} - \omega^{ij} X^{j} .[/tex] Now, if you define the rotation angle, in the (ij)-plane, by [itex]\omega^{ij} = \epsilon^{ijk} \theta^{k}[/itex], you get [tex]\delta X^{i} = \beta^{i} X^{0} + \epsilon^{ikj} \theta^{k} X^{j} ,[/tex] or (in 3-vector notations) [tex]\delta \vec{X} = \vec{\beta} X^{0} + \vec{\theta} \times \vec{X} .[/tex]

To obtain the generator matrices (in this vector representation), go back to (1) and rewrite it as [tex]\delta X^{\rho} = \omega^{\mu \nu} \ \delta^{\rho}_{\mu} \ \eta_{\nu \sigma} \ X^{\sigma} .[/tex] Using [itex]\omega^{\mu\nu} = - \omega^{\nu\mu}[/itex], you find [tex]\delta X^{\rho} = \frac{1}{2} \omega^{\mu\nu} \left( \delta^{\rho}_{\mu} \eta_{\nu \sigma} - \delta^{\rho}_{\nu} \eta_{\mu \sigma} \right) X^{\sigma} ,[/tex] or [tex]\delta X^{\rho} = - \frac{i}{2} \omega^{\mu\nu} \left( J_{\mu\nu}\right)^{\rho}{}_{\sigma} X^{\sigma},[/tex] where [tex]\left( J_{\mu\nu}\right)^{\rho}{}_{\sigma} = i \left( \delta^{\rho}_{\mu} \ \eta_{\nu \sigma} - \delta^{\rho}_{\nu} \ \eta_{\mu \sigma}\right) .[/tex] I leave you to obtain the 3 boost (generator) matrices [itex]J_{01} \equiv K_{1}, \ J_{02} \equiv K_{2}, \ J_{03} \equiv K_{3}[/itex] and the 3 rotation (generator) matrices [itex]J_{12} \equiv J_{3}, \ J_{31} \equiv J_{2}, \ J_{23} \equiv J_{1}[/itex].
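For anyone who wants to automate the plug-and-chug that follows, here is a minimal numpy sketch of this construction (assuming the metric ##\mathrm{diag}(1,-1,-1,-1)## used in this thread):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
delta = np.eye(4)

def J_gen(mu, nu):
    # (J_{mu nu})^rho_sigma = i (delta^rho_mu eta_{nu sigma} - delta^rho_nu eta_{mu sigma})
    return 1j * (np.outer(delta[:, mu], eta[nu]) - np.outer(delta[:, nu], eta[mu]))

K1, K2, K3 = J_gen(0, 1), J_gen(0, 2), J_gen(0, 3)   # boost generators
J1, J2, J3 = J_gen(2, 3), J_gen(3, 1), J_gen(1, 2)   # rotation generators

print(K1)                                        # compare with the hand computations below
assert np.allclose(K1 @ K2 - K2 @ K1, -1j * J3)  # consistent with (10.17)
```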
 
  • #25
Hi samalkhaiat

samalkhaiat said:
Now, if you take ##\rho = 0## and note that ##\omega^{00} = 0##

Mmm why ##\omega^{00} = 0##? From the matrix form I got at #17 I get ##\omega^{00} = \beta##. It is highly likely I am making a naive mistake though.
samalkhaiat said:
Now, go back to (1) and put ##\rho = i##. If you repeat what I have done, you should get ##\delta X^{i} = \beta^{i} X^{0} - \omega^{ij} X^{j}##

OK, I see you use the same Minkowski metric convention as Peskin & Schroeder do, i.e.

$$
\eta^{\mu \nu} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix}
$$

Following your steps one gets (where ##i,j,k=1,2,3##)

$$\delta X^{i} = \omega^{i \nu} \ \eta_{\nu \sigma} \ X^{\sigma}= \omega^{i 0} \ \eta_{0 0} \ X^{0}+ \omega^{i j} \ \eta_{j k} \ X^{k}=\omega^{i 0}X^0-\omega^{i j}X^j=\beta^i X^0-\omega^{i j}X^j$$

Dealing now with the 3 boosts (generator) matrices and the 3 rotation (generator) matrices 🤨
 
  • #26
JD_PM said:
Mmm why ##\omega^{00} = 0##?
Why? You know that [itex]\omega^{\mu\nu} = - \omega^{\nu\mu}[/itex], do you not? For [itex]\mu = \nu[/itex], you get [itex]\omega^{\mu\mu} = - \omega^{\mu\mu}[/itex], which gives you [itex]2 \omega^{\mu\mu} = 0, \Rightarrow \ \omega^{\mu\mu} = 0[/itex], i.e. [itex]\omega^{00} = \omega^{11} = \omega^{22} = \omega^{33} = 0[/itex].
 
  • #27
samalkhaiat said:
I leave you to obtain the 3 boost (generator) matrices [itex]J_{01} \equiv K_{1}, \ J_{02} \equiv K_{2}, \ J_{03} \equiv K_{3}[/itex] and the 3 rotation (generator) matrices [itex]J_{12} \equiv J_{3}, \ J_{31} \equiv J_{2}, \ J_{23} \equiv J_{1}[/itex].
Plug and chug time! 😅

For ##J_{01} \equiv K_{1}, \ (J_{0 1})^{\rho}{}_{\sigma} = i (\delta^{\rho}_{0} \ \eta_{1 \sigma} - \delta^{\rho}_{1} \ \eta_{0 \sigma})##, where ##\rho## and ##\sigma## are the row and column entries (I left the assignment loose because, for this symmetric matrix, either choice works), we get

$$J_{01} \equiv K_{1}=
i\begin{pmatrix}
0 & -1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$

For ##J_{02} \equiv K_{2}, \ (J_{0 2})^{\rho}{}_{\sigma} = i (\delta^{\rho}_{0} \ \eta_{2 \sigma} - \delta^{\rho}_{2} \ \eta_{0 \sigma})## we get

$$J_{02} \equiv K_{2}=
i\begin{pmatrix}
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$

For ##J_{03} \equiv K_{3}, \ (J_{0 3})^{\rho}{}_{\sigma} = i (\delta^{\rho}_{0} \ \eta_{3 \sigma} - \delta^{\rho}_{3} \ \eta_{0 \sigma})## we get

$$J_{03} \equiv K_{3}=
i\begin{pmatrix}
0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0
\end{pmatrix}
$$

OK so far. Let's deal now with the 3 rotation (generator) matrices

For ##J_{12} \equiv J_{3}, \ (J_{1 2})^{\rho}{}_{\sigma} = i (\delta^{\rho}_{1} \ \eta_{2 \sigma} - \delta^{\rho}_{2} \ \eta_{1 \sigma})## we get

$$
i\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$

Which is not what we want. I am missing the sign in the ##(2,1)## position. Thinking...
 
  • #28
JD_PM said:
I am missing the sign in the ##(2,1)## position. Thinking...
[tex](J_{12})^{2}{}_{1} = i (\delta^{2}_{1} \ \eta_{21} - \delta^{2}_{2} \ \eta_{11}) = -i \ \delta^{2}_{2} \ \eta_{11} = (-i)(+1)(-1) = +i[/tex]
 
  • #29
samalkhaiat said:
[tex](J_{12})^{2}{}_{1} = i (\delta^{2}_{1} \ \eta_{21} - \delta^{2}_{2} \ \eta_{11}) = -i \ \delta^{2}_{2} \ \eta_{11} = (-i)(+1)(-1) = +i[/tex]

Oh so apparently when dealing with rotations we need to pick ##\rho## as the row and ##\sigma## as the column! I made the mistake of not writing it out explicitly enough, and I will not let that happen again!

For ##J_{31} \equiv J_{2}, \ \text{we have} \ (J_{31})^{3}{}_{1} = i (\delta^{3}_{3} \ \eta_{11} - \delta^{3}_{1} \ \eta_{31}) = -i \ \text{and} \ (J_{31})^{1}{}_{3} = i (\delta^{1}_{3} \ \eta_{13} - \delta^{1}_{1} \ \eta_{33}) = i##. Then

$$
J_{31} \equiv J_{2}=
i\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{pmatrix}
$$

For ##J_{23} \equiv J_{1}, \ \text{we have} \ (J_{23})^{3}{}_{2} = i (\delta^{3}_{2} \ \eta_{32} - \delta^{3}_{3} \ \eta_{22}) = i \ \text{and} \ (J_{23})^{2}{}_{3} = i (\delta^{2}_{2} \ \eta_{33} - \delta^{2}_{3} \ \eta_{23}) = -i##. Then

$$
J_{23} \equiv J_{1}=
i\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0
\end{pmatrix}
$$

Oh yes! I am literally celebrating this! 😍😍😍 (I know that for you it is really easy though!). Mathematics is beautiful!
 
  • #30
JD_PM said:
Oh so apparently when dealing with rotations we need to pick ##\rho## as the row and ##\sigma## as the column!
Rotations or non-rotations, by the object [itex](M_{\alpha \beta})^{\mu}{}_{\nu}[/itex], we always mean the [itex](\mu\nu)[/itex] matrix elements of the matrix [itex]M_{\alpha \beta}[/itex].
 
  • #31
samalkhaiat said:
Rotations or non-rotations, by the object [itex](M_{\alpha \beta})^{\mu}{}_{\nu}[/itex], we always mean the [itex](\mu\nu)[/itex] matrix elements of the matrix [itex]M_{\alpha \beta}[/itex].

Alright, so the upper index always refers to rows and the lower to columns. Thanks for the clarification.
 
  • #32
PeroK said:
[...] the Baker-Campbell-Hausdorff formula [...]

I've found a reliable source about the Baker-Campbell-Hausdorff formula: problem 3 here. The solution is here :smile: .
 
