Commutator between Casimirs and generators for Lorentz group


Discussion Overview

The discussion revolves around the commutation relations between the Casimir operators and the generators of the Lorentz group, specifically focusing on proving that the Casimirs commute with all generators of the Lie algebra. The scope includes theoretical aspects of Lie algebras and their representations, as well as mathematical reasoning related to commutators.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the generators of the Lorentz group and their Lie algebra relations, expressing a desire to prove that the Casimirs commute with all generators, noting specific difficulties with the commutation of ##C_1## and ##K^j##.
  • Another participant agrees with the indirect method proposed for proving the commutation of ##C_1## with ##K^i## and ##C_2##, suggesting that since the ##L^i## are linearly independent, the commutators must vanish.
  • A participant discusses the challenge of using a linear independence argument due to the nature of the coefficients in the expression ##\sum_i[C_1,K^i]L^i=0##, which are not simple numbers but combinations of generators.
  • Jacobi identities are mentioned as a potential tool for proving the result directly, with one participant exploring specific identities but expressing uncertainty about how to proceed.
  • Another participant introduces the idea of viewing the algebra as a vector space and suggests that the expression can be interpreted as an "algebra-valued vector," which could lead to a conclusion about the commutation.
  • One participant raises a concern about the general applicability of the linear independence argument by providing a counterexample involving matrices.
  • A participant provides additional context about the Casimirs in relation to unitary representations of the Lorentz group, discussing properties of the generators and their implications for the commutation relations.

Areas of Agreement / Disagreement

Participants generally agree on the validity of the indirect method for proving commutation but express differing views on the effectiveness of linear independence arguments and the use of Jacobi identities. The discussion remains unresolved regarding the direct proof of ##[C_1, K^j] = 0##.

Contextual Notes

Participants note limitations in their arguments, including the dependence on the specific representations of the generators and the unresolved nature of certain mathematical steps in proving the commutation relations.

julian
The generators ##\{ L^1, L^2 , L^3 , K^1 , K^2 , K^3 \}## of the Lorentz group satisfy the Lie algebra:

\begin{array}{l}
[L^i , L^j] = \epsilon^{ij}_{\;\; k} L^k \\
[L^i , K^j] = \epsilon^{ij}_{\;\; k} K^k \\
[K^i , K^j] = \epsilon^{ij}_{\;\; k} L^k
\end{array}

It has the Casimirs

[tex] C_1 = \sum_i (K^i K^i - L^i L^i) , \qquad C_2 = \sum_i K^i L^i[/tex]

I wish to prove that the Casimirs commute with all of the generators of the Lie algebra. It is easy to prove that ##[C_1 , L^j] = 0##, ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. However, I'm having more trouble proving ##[C_1 , K^j] = 0##. What I obtain, for example for ##j=1##, is:

[tex] [C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+[/tex]

where ##[\cdot , \cdot]_+## is the anti-commutator. I'm not sure how to prove directly that this vanishes. However, there may be an indirect way of proving ##[C_1 , K^1] = 0##. First note:

\begin{array}{l}
[C_1 , C_2] = \sum_i [K^i K^i - L^i L^i , C_2] \\
= \sum_i (K^i [K^i , C_2] + [K^i , C_2] K^i - L^i [L^i , C_2] - [L^i , C_2] L^i) \\
= 0
\end{array}

where we have used ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. Next write

\begin{array}{l}
0 = [C_1 , C_2] \\
= \sum_i [C_1 , K^i L^i] \\
= \sum_i ([C_1 , K^i] L^i + K^i [C_1 , L^i]) \\
= \sum_i [C_1 , K^i] L^i .
\end{array}

where we have used ##[C_1 , L^j] = 0##.

Is it possible to use this to prove ##[C_1 , K^1] = 0##?

I would prefer to first prove that the Casimirs commute with all the generators and then conclude that the two Casimirs commute with each other, but if this is what I have to resort to...
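Since the bracket relations and the manipulations above are statements about commutators only, they can be sanity-checked numerically. Below is a minimal sketch under an assumption of my own: the ##4\times 4## realization ##L^i = A^i + B^i##, ##K^i = A^i - B^i##, built from two commuting su(2) copies of Pauli matrices, reproduces the brackets exactly as written; it is chosen purely for illustration and is not the unitary representation discussed later in the thread.

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

# Two commuting copies of su(2): [A^i, A^j] = eps^{ij}_k A^k, likewise for B.
A = [np.kron(-0.5j * s, I2) for s in sig]
B = [np.kron(I2, -0.5j * s) for s in sig]

# L^i = A^i + B^i and K^i = A^i - B^i reproduce the brackets in the post.
L = [a + b for a, b in zip(A, B)]
K = [a - b for a, b in zip(A, B)]

def comm(X, Y): return X @ Y - Y @ X
def acomm(X, Y): return X @ Y + Y @ X

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Check the three Lie-algebra relations as written
for i in range(3):
    for j in range(3):
        eL = sum(eps[i, j, k] * L[k] for k in range(3))
        eK = sum(eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(L[i], L[j]), eL)
        assert np.allclose(comm(L[i], K[j]), eK)
        assert np.allclose(comm(K[i], K[j]), eL)

C1 = sum(K[i] @ K[i] - L[i] @ L[i] for i in range(3))
C2 = sum(K[i] @ L[i] for i in range(3))

# The "easy" commutators vanish (C2 happens to be the zero matrix
# in this small realization, so its commutators vanish trivially).
for j in range(3):
    assert np.allclose(comm(C1, L[j]), 0)
    assert np.allclose(comm(C2, L[j]), 0)
    assert np.allclose(comm(C2, K[j]), 0)

# The anticommutator expression obtained above for [C1, K^1] ...
assert np.allclose(comm(C1, K[0]),
                   2 * acomm(L[1], K[2]) - 2 * acomm(L[2], K[1]))

# ... and the sum appearing in the indirect argument vanishes as claimed.
S = sum(comm(C1, K[i]) @ L[i] for i in range(3))
assert np.allclose(S, 0)
```

All checks pass in this realization; of course a finite-dimensional check is only a consistency test, not a proof for the unitary representations in question.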
 
The indirect method certainly isn't wrong; you basically prove that ##C_1## commutes with the ##K^i## and with ##C_2## at the same time.
From the final expression you know that ##\sum_i[C_1,K^i]L^i=0## but the ##L^i## are linearly independent so the commutators all vanish.

I haven't looked at it in detail but maybe you can use the Jacobi identity in some way to directly prove that ##[C_1,K^j]=0##.
 

Thanks JorisL, I was hoping to make some linear-independence type argument. It would be easy if the ``coefficients'' in front of the ##L^i##'s were numbers; then we would have an equation like:

[itex] \alpha_1 L^1 + \alpha_2 L^2 + \alpha_3 L^3 = 0[/itex]

then I could do

\begin{array}{l}
0 = \alpha_1 [L^1 , L^2] + \alpha_2 [L^2 , L^2] + \alpha_3 [L^3 , L^2] \\
= \alpha_1 L^3 - \alpha_3 L^1
\end{array}

and then do

\begin{array}{l}
0 = \alpha_1 [L^3 , L^1] - \alpha_3 [L^1 , L^1] \\
= \alpha_1 L^2
\end{array}

implying ##\alpha_1 = 0##.
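The bracket arithmetic in these elimination steps can be checked with any matrix realization of the rotation subalgebra; the spin-1/2 matrices below are an illustrative assumption, as are the particular scalar coefficients.

```python
import numpy as np

# su(2) generators satisfying [L^i, L^j] = eps^{ij}_k L^k (spin-1/2, illustrative)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
L = [-0.5j * s for s in sig]

def comm(X, Y): return X @ Y - Y @ X

a1, a2, a3 = 2.0, -1.0, 0.5  # arbitrary scalar coefficients
X = a1 * L[0] + a2 * L[1] + a3 * L[2]

# first elimination step: [X, L^2] = a1 L^3 - a3 L^1
assert np.allclose(comm(X, L[1]), a1 * L[2] - a3 * L[0])
# second step: [a1 L^3 - a3 L^1, L^1] = a1 L^2
assert np.allclose(comm(a1 * L[2] - a3 * L[0], L[0]), a1 * L[1])
```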

However, an issue with the sum ##\sum_i[C_1,K^i]L^i=0## is that the ``coefficients'' aren't numbers but combinations of generators, so I'm not sure how to proceed in the above way.

I have also been trying to use the Jacobi identities to prove the result (i.e. ##[C_1,K^1]=0##) directly. I think the relevant identities to consider are

\begin{array}{l}
[[L^i,L^j],K^k] + [[L^j,K^k],L^i] + [[K^k,L^i],L^j] = 0 \\
[[K^i,K^j],K^k] + [[K^j,K^k],K^i] + [[K^k,K^i],K^j] = 0
\end{array}

But the only interesting result I seem to be getting from this is the identity that results from setting ##k=2,i=1,j=2## in the first of these Jacobi identities, it gives

[itex] L^3 K^2 - K^2 L^3 - K^3 L^2 + L^2 K^3 = 0[/itex]

(actually this identity follows easily from ##[K^2 , L^3] = - \epsilon^{32}_{\;\; 1} K^1 = \epsilon^{23}_{\;\; 1} K^1 = - [K^3 , L^2]##). Using this identity allows the simplification:

\begin{array}{l}
[C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+ \\
= 4 (K^3 L^2 - L^3 K^2)
\end{array}

but I'm not sure how to proceed from here.
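This last simplification can be checked symbolically: the difference between ##2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+## and ##4 (K^3 L^2 - L^3 K^2)## expands to exactly twice the Jacobi-derived identity above, hence vanishes. A sketch with sympy's noncommutative symbols:

```python
import sympy as sp

# noncommutative symbols standing for the generators
L2, L3, K2, K3 = sp.symbols('L2 L3 K2 K3', commutative=False)

lhs = 2 * (L2 * K3 + K3 * L2) - 2 * (L3 * K2 + K2 * L3)  # 2[L^2,K^3]_+ - 2[L^3,K^2]_+
rhs = 4 * K3 * L2 - 4 * L3 * K2
jacobi = L3 * K2 - K2 * L3 - K3 * L2 + L2 * K3           # identity derived above, equal to 0

# lhs - rhs is exactly 2 * jacobi, which vanishes by the identity
assert sp.expand(lhs - rhs - 2 * jacobi) == 0
```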
 
In this case I would think of an algebra as a vector space (addition) equipped with a bilinear multiplication.
If I'm not mistaken you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
 
JorisL said:
In this case I would think of an algebra as a vector space (addition) equipped with a bilinear multiplication.
If I'm not mistaken you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
In general, that doesn't seem to work.
Let's take the algebra of ##2 \times 2## real matrices.
The matrices ##A=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## and ##B=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}## are linearly independent, yet it is easy to find non-zero matrices ##C, D## such that ##CA+DB=\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}##.
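One explicit choice (among many) realizing this counterexample is ##C = B## and ##D = -A##; a quick check:

```python
import numpy as np

A = np.eye(2)                        # the identity matrix
B = np.array([[0., 1.], [1., 0.]])   # the swap matrix
C, D = B, -A                         # both non-zero

# A and B are linearly independent as vectors in the algebra ...
assert np.linalg.matrix_rank(np.stack([A.ravel(), B.ravel()])) == 2
# ... yet with matrix "coefficients" C, D we get CA + DB = 0
assert np.allclose(C @ A + D @ B, np.zeros((2, 2)))
```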
 
By the way, these are the Casimirs for the infinite-dimensional unitary representations of the Lorentz group (in particular the unitary irreducible representations of the principal series, if that means anything to anyone). Unitarity here implies that the generators are anti-hermitian:

[tex] (L^i)^\dagger = - L^i , \qquad (K^i)^\dagger = - K^i .[/tex]

This doesn't seem to help as it implies ## [C_1 , K^1]^\dagger = [C_1 , K^1] ## and ## \big( 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ \big)^\dagger = 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ ##.
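These hermiticity statements follow from the conjugation rules alone, so they can be checked in any realization with anti-hermitian generators. Below, a ##4\times 4## stand-in built from two commuting su(2) copies (an illustrative assumption of mine, not the infinite-dimensional unitary irrep):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
L = [np.kron(-0.5j * s, I2) + np.kron(I2, -0.5j * s) for s in sig]
K = [np.kron(-0.5j * s, I2) - np.kron(I2, -0.5j * s) for s in sig]

# anti-hermitian generators: X^dagger = -X
for X in L + K:
    assert np.allclose(X.conj().T, -X)

C1 = sum(k @ k - l @ l for k, l in zip(K, L))
M = C1 @ K[0] - K[0] @ C1                                    # [C1, K^1]
N = 2 * (L[1] @ K[2] + K[2] @ L[1]) - 2 * (L[2] @ K[1] + K[1] @ L[2])

# both sides are hermitian, as observed, so hermiticity alone decides nothing
assert np.allclose(M.conj().T, M)
assert np.allclose(N.conj().T, N)
```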

Could it have to do with the particular matrix representation itself? I'm guessing the representation could be constructed starting from the facts that the Casimirs commute with each other, commute with all the ##L^i##'s, and that ##\sum_i L^i L^i## and ##L^3## commute with each other - and using that commuting operators have simultaneous eigenstates.

Let me explain how I understand Lie algebra matrix representations.

A representation of a Lie algebra is defined as a mapping of the algebra into the linear operators on a vector space, i.e. operators (matrices) ##\hat{D} (\hat{L}_i)## are assigned to the elements ##\hat{L}_i## of the Lie algebra (the generators of the Lie group).

These operators have to satisfy linearity,

[tex] \hat{D} (\alpha \hat{L}_i + \beta \hat{L}_j) = \alpha \hat{D} (\hat{L}_i) + \beta \hat{D} (\hat{L}_j) \qquad Eq. 1[/tex]

and must be homomorphic to the Lie algebra

[tex] \hat{D} ([\hat{L}_i , \hat{L}_j]) = [\hat{D} (\hat{L}_i) , \hat{D} (\hat{L}_j)] \qquad Eq. 2[/tex]

In general, a representation on a vector space with basis ##\{ |\phi_n> \}## is obtained by assigning a matrix to every operator ##\hat{L}_i## by means of

[tex] \hat{L}_i |\phi_n> = \sum_m |\phi_m> D (\hat{L}_i)_{mn} .[/tex]

From this it follows that

\begin{array}{l}
\hat{L}_i \hat{L}_j |\phi_n> = \hat{L}_i \sum_m |\phi_m> D (\hat{L}_j)_{mn} \\
= \sum_m \big( \hat{L}_i |\phi_m> \big) D (\hat{L}_j)_{mn} \\
= \sum_m \big( \sum_p |\phi_p> D (\hat{L}_i)_{pm} \big) D (\hat{L}_j)_{mn} \\
= \sum_p |\phi_p> \Big( \sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} \Big) \\
= \sum_p |\phi_p> D (\hat{L}_i \hat{L}_j)_{pn}
\end{array}

From which it follows that

[tex] \sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} = D (\hat{L}_i \hat{L}_j)_{pn} \qquad Eq. 3[/tex]

Hence, the matrix obtained by matrix multiplication of ##D (\hat{L}_i)## and ##D (\hat{L}_j)## is equal to the matrix ##D (\hat{L}_i \hat{L}_j)## assigned to the operator ##\hat{L}_i \hat{L}_j##. If the basis is orthonormal then the matrix representation is given directly by the scalar product

\begin{array}{l}
<\phi_m| \hat{L}_i |\phi_n> = \sum_p <\phi_m | \phi_p> D (\hat{L}_i)_{pn} \\
= D (\hat{L}_i)_{mn} .
\end{array}

Eq. 1 and Eq. 2 are automatically satisfied:

\begin{array}{l}
D (\alpha \hat{L}_i + \beta \hat{L}_j)_{mn} = <\phi_m | \alpha \hat{L}_i + \beta \hat{L}_j |\phi_n> \\
= \alpha <\phi_m | \hat{L}_i |\phi_n> + \beta <\phi_m| \hat{L}_j |\phi_n> \\
= \alpha D (\hat{L}_i)_{mn} + \beta D (\hat{L}_j)_{mn}
\end{array}

so Eq. 1 is satisfied. Using linearity we also obtain:

\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i \hat{L}_j - \hat{L}_j \hat{L}_i) \\
= D (\hat{L}_i \hat{L}_j) - D (\hat{L}_j \hat{L}_i)
\end{array}

and using Eq. 3,

\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i) D (\hat{L}_j) - D (\hat{L}_j) D (\hat{L}_i) \\
= [ D(\hat{L}_i) , D(\hat{L}_j)] .
\end{array}

Thereby obtaining a representation.
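The scalar-product construction can be illustrated numerically: take operators on a small vector space, pick an arbitrary orthonormal basis, read off ##D (\hat{L}_i)_{mn} = <\phi_m| \hat{L}_i |\phi_n>##, and check Eq. 3 and the homomorphism property. The su(2) operators and the randomly chosen basis below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "abstract" operators: su(2) generators acting on C^2
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
ops = [-0.5j * s for s in sig]

# an arbitrary orthonormal basis {|phi_n>}: columns of a random unitary
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
phi = [Q[:, n] for n in range(2)]

def D(op):
    """Matrix of op in the basis: D(op)_{mn} = <phi_m| op |phi_n>."""
    return np.array([[phi[m].conj() @ op @ phi[n] for n in range(2)]
                     for m in range(2)])

X, Y = ops[0], ops[1]
# Eq. 3: D(XY) = D(X) D(Y)
assert np.allclose(D(X @ Y), D(X) @ D(Y))
# homomorphism (Eq. 2): D([X, Y]) = [D(X), D(Y)]
assert np.allclose(D(X @ Y - Y @ X), D(X) @ D(Y) - D(Y) @ D(X))
```

Both assertions rely on the basis being orthonormal, exactly as the argument above requires.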

Any thoughts on this?
 
