Commutator between Casimirs and generators for Lorentz group

julian

The generators ##\{ L^1, L^2 , L^3 , K^1 , K^2 , K^3 \}## of the Lorentz group satisfy the Lie algebra:

$$
\begin{array}{l}
[L^i , L^j] = \epsilon^{ij}_{\;\; k} L^k \\
[L^i , K^j] = \epsilon^{ij}_{\;\; k} K^k \\
[K^i , K^j] = \epsilon^{ij}_{\;\; k} L^k
\end{array}
$$

It has the Casimirs

$$
C_1 = \sum_i (K^i K^i - L^i L^i) , \qquad C_2 = \sum_i K^i L^i
$$

I wish to prove that the Casimirs commute with all of the generators of the Lie algebra. It is easy to prove that ##[C_1 , L^j] = 0##, ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. However, I'm having more trouble proving ##[C_1 , K^j] = 0##. What I obtain, for example for ##j=1##, is:

$$
[C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+
$$

where ##[\cdot , \cdot]_+## is the anti-commutator. I'm not sure how to prove directly that this vanishes. However, there may be an indirect way of proving ##[C_1 , K^1] = 0##. First note:

$$
\begin{array}{l}
[C_1 , C_2] = \sum_i [K^i K^i - L^i L^i , C_2] \\
= \sum_i (K^i [K^i , C_2] + [K^i , C_2] K^i - L^i [L^i , C_2] - [L^i , C_2] L^i) \\
= 0
\end{array}
$$

where we have used ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. Next write

$$
\begin{array}{l}
0 = [C_1 , C_2] \\
= \sum_i [C_1 , K^i L^i] \\
= \sum_i ([C_1 , K^i] L^i + K^i [C_1 , L^i]) \\
= \sum_i [C_1 , K^i] L^i .
\end{array}
$$

where we have used ##[C_1 , L^j] = 0##.

Is it possible to use this to prove ##[C_1 , K^1] = 0##?

I would prefer to prove first that the Casimirs commute with all the generators and then conclude that the two Casimirs commute with each other, but if this is what I have to resort to...
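
For reference, the anticommutator expression quoted above can be obtained by expanding ##C_1## and using the commutation relations stated at the top; keeping only the non-vanishing terms (##i = 2, 3##),

$$
\begin{array}{l}
[C_1 , K^1] = \sum_i \big( K^i [K^i , K^1] + [K^i , K^1] K^i - L^i [L^i , K^1] - [L^i , K^1] L^i \big) \\
= - K^2 L^3 - L^3 K^2 + K^3 L^2 + L^2 K^3 + L^2 K^3 + K^3 L^2 - L^3 K^2 - K^2 L^3 \\
= 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+
\end{array}
$$

where ##[K^2 , K^1] = - L^3##, ##[K^3 , K^1] = L^2##, ##[L^2 , K^1] = - K^3## and ##[L^3 , K^1] = K^2## have been used.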
 
The indirect method certainly isn't wrong; you basically prove that ##C_1## commutes with the ##K^i## and with ##C_2## at the same time.
From the final expression you know that ##\sum_i[C_1,K^i]L^i=0##, but the ##L^i## are linearly independent, so the commutators all vanish.

I haven't looked at it in detail, but maybe you can use the Jacobi identity in some way to directly prove that ##[C_1,K^j]=0##.
 
JorisL said:
The indirect method certainly isn't wrong; you basically prove that ##C_1## commutes with the ##K^i## and with ##C_2## at the same time.
From the final expression you know that ##\sum_i[C_1,K^i]L^i=0##, but the ##L^i## are linearly independent, so the commutators all vanish.

I haven't looked at it in detail, but maybe you can use the Jacobi identity in some way to directly prove that ##[C_1,K^j]=0##.

Thanks JorisL, I was hoping to make some kind of linear-independence argument. It would be easy if the "coefficients" in front of the ##L^i##'s were numbers; then we would have an equation like:

$$
\alpha_1 L^1 + \alpha_2 L^2 + \alpha_3 L^3 = 0
$$

then I could take the commutator of this with ##L^2##:

$$
\begin{array}{l}
0 = \alpha_1 [L^1 , L^2] + \alpha_2 [L^2 , L^2] + \alpha_3 [L^3 , L^2] \\
= \alpha_1 L^3 - \alpha_3 L^1
\end{array}
$$

and then take the commutator of the result with ##L^1##:

$$
\begin{array}{l}
0 = \alpha_1 [L^3 , L^1] - \alpha_3 [L^1 , L^1] \\
= \alpha_1 L^2
\end{array}
$$

implying ##\alpha_1 = 0##; repeating the argument gives ##\alpha_2 = \alpha_3 = 0## as well.

However, an issue in the sum ##\sum_i[C_1,K^i]L^i=0## is that the "coefficients" aren't numbers but combinations of generators, so I'm not sure how to proceed in the above way.

I have also been trying to use the Jacobi identities to prove the result (i.e. ##[C_1,K^1]=0##) directly. I think the relevant identities to consider would be

$$
\begin{array}{l}
[[L^i,L^j],K^k] + [[L^j,K^k],L^i] + [[K^k,L^i],L^j] = 0 \\
[[K^i,K^j],K^k] + [[K^j,K^k],K^i] + [[K^k,K^i],K^j] = 0
\end{array}
$$

But the only interesting result I seem to be getting from this is the identity that results from setting ##k=2, i=1, j=2## in the first of these Jacobi identities; it gives

$$
L^3 K^2 - K^2 L^3 - K^3 L^2 + L^2 K^3 = 0
$$

(actually this identity follows easily from ##[K^2 , L^3] = - \epsilon^{32}_{\;\; 1} K^1 = \epsilon^{23}_{\;\; 1} K^1 = - [K^3 , L^2]##). Using this identity allows the simplification:

$$
\begin{array}{l}
[C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+ \\
= 4 (K^3 L^2 - L^3 K^2)
\end{array}
$$

but I'm not sure how to proceed from here.
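
For reference, the substitution in the last step can be spelled out as follows. Rearranging the identity above gives ##L^2 K^3 = K^2 L^3 + K^3 L^2 - L^3 K^2##, so

$$
\begin{array}{l}
2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+ = 2 (L^2 K^3 + K^3 L^2) - 2 (L^3 K^2 + K^2 L^3) \\
= 2 (K^2 L^3 + K^3 L^2 - L^3 K^2) + 2 K^3 L^2 - 2 L^3 K^2 - 2 K^2 L^3 \\
= 4 (K^3 L^2 - L^3 K^2) .
\end{array}
$$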
 
In this case I would think of an algebra as a vector space (addition) equipped with a multiplication.
If I'm not mistaken, you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct, the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
 
JorisL said:
In this case I would think of an algebra as a vector space (addition) equipped with a multiplication.
If I'm not mistaken, you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct, the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
In general, that doesn't seem to work.
Let's take the algebra of ##2\times 2## real matrices.
The matrices ##A=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## and ##B=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}## are linearly independent, yet it is easy to find non-zero matrices ##C, D## such that ##CA+DB=\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}##.
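
For what it's worth, here is a quick numerical check of this counterexample (a minimal sketch; Python/numpy and the particular choice of ##C## and ##D## are just my own, one of many possible choices):

```python
import numpy as np

# A and B are linearly independent 2x2 matrices (a*A + b*B = 0 forces a = b = 0),
# yet there exist non-zero matrices C and D with C A + D B = 0.
A = np.eye(2)                  # identity matrix
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])     # swap matrix

C = B.copy()                   # non-zero "coefficient"
D = -np.eye(2)                 # non-zero "coefficient"

print(C @ A + D @ B)           # prints the 2x2 zero matrix
```

So linear independence over the numbers is not enough once the coefficients are themselves elements of the algebra.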
 
By the way, these are the Casimirs for an infinite-dimensional unitary representation of the Lorentz group (in particular the unitary irreducible representations of the principal series, if that means anything to anyone). Unitarity here implies that the generators be anti-hermitian:

$$
(L^i)^\dagger = - L^i , \qquad (K^i)^\dagger = - K^i .
$$

This doesn't seem to help as it implies ## [C_1 , K^1]^\dagger = [C_1 , K^1] ## and ## \big( 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ \big)^\dagger = 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ ##.
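
For reference, the first of these follows from ##C_1^\dagger = C_1## (which holds because ##(K^i K^i)^\dagger = K^i K^i## and ##(L^i L^i)^\dagger = L^i L^i## for anti-hermitian generators):

$$
[C_1 , K^1]^\dagger = (K^1)^\dagger C_1^\dagger - C_1^\dagger (K^1)^\dagger = - K^1 C_1 + C_1 K^1 = [C_1 , K^1] .
$$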

Could it have to do with the particular matrix representation itself? I'm guessing the representation itself could be constructed starting from the fact that the Casimirs commute with each other, commute with all the ##L^i##'s, and that ##\sum_i L^i L^i## and ##L^3## commute with each other, using the fact that commuting operators have simultaneous eigenstates.

Let me explain how I understand Lie algebra matrix representations.

The representation of a Lie algebra is defined as a mapping of the algebra to linear operators on a vector space, i.e. operators (matrices) ##\hat{D} (\hat{L}_i)## are assigned to the elements of the Lie algebra ##\hat{L}_i## (generators of the Lie group).

These operators have to satisfy linearity,

$$
\hat{D} (\alpha \hat{L}_i + \beta \hat{L}_j) = \alpha \hat{D} (\hat{L}_i) + \beta \hat{D} (\hat{L}_j) \qquad \text{(Eq. 1)}
$$

and must be a homomorphism of the Lie algebra:

$$
\hat{D} ([\hat{L}_i , \hat{L}_j]) = [\hat{D} (\hat{L}_i) , \hat{D} (\hat{L}_j)] \qquad \text{(Eq. 2)}
$$

In general, a representation on a vector space with basis ##\{ |\phi_k\rangle \}## is obtained by assigning a matrix to every operator ##\hat{L}_i## by means of

$$
\hat{L}_i |\phi_n\rangle = \sum_m |\phi_m\rangle D (\hat{L}_i)_{mn}
$$

From this it follows that

$$
\begin{array}{l}
\hat{L}_i \hat{L}_j |\phi_n\rangle = \hat{L}_i \sum_m |\phi_m\rangle D (\hat{L}_j)_{mn} \\
= \sum_m \big( \hat{L}_i |\phi_m\rangle \big) D (\hat{L}_j)_{mn} \\
= \sum_m \big( \sum_p |\phi_p\rangle D (\hat{L}_i)_{pm} \big) D (\hat{L}_j)_{mn} \\
= \sum_p |\phi_p\rangle \Big( \sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} \Big) \\
= \sum_p |\phi_p\rangle D (\hat{L}_i \hat{L}_j)_{pn}
\end{array}
$$

From which it follows that

$$
\sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} = D (\hat{L}_i \hat{L}_j)_{pn} \qquad \text{(Eq. 3)}
$$

Hence, the matrix obtained by simple matrix multiplication of ##D (\hat{L}_i)## and ##D (\hat{L}_j)## is equal to the matrix ##D (\hat{L}_i \hat{L}_j)##, assigned to the operator ##\hat{L}_i \hat{L}_j##. If the basis is orthonormalised then the matrix representation is given directly by the scalar product

$$
\begin{array}{l}
\langle \phi_m| \hat{L}_i |\phi_n\rangle = \sum_p \langle \phi_m | \phi_p\rangle D (\hat{L}_i)_{pn} \\
= D (\hat{L}_i)_{mn} .
\end{array}
$$

Eq. 1 and Eq. 2 are then automatically satisfied:

$$
\begin{array}{l}
D (\alpha \hat{L}_i + \beta \hat{L}_j) = \langle \phi_n | \alpha \hat{L}_i + \beta \hat{L}_j |\phi_m\rangle \\
= \alpha \langle \phi_n | \hat{L}_i |\phi_m\rangle + \beta \langle \phi_n| \hat{L}_j |\phi_m\rangle \\
= \alpha D (\hat{L}_i) + \beta D (\hat{L}_j)
\end{array}
$$

so Eq. 1 is satisfied. Using this we obtain:

$$
\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i \hat{L}_j - \hat{L}_j \hat{L}_i) \\
= D (\hat{L}_i \hat{L}_j) - D (\hat{L}_j \hat{L}_i)
\end{array}
$$

and, using Eq. 3,

$$
\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i) D (\hat{L}_j) - D (\hat{L}_j) D (\hat{L}_i) \\
= [ D(\hat{L}_i) , D(\hat{L}_j)] .
\end{array}
$$

We thereby obtain a representation.
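
As a concrete sanity check of Eq. 2, here is a minimal sketch (in Python/numpy, my own choice rather than anything from the thread) that builds the 3-dimensional adjoint representation of the rotation subalgebra ##[L^i , L^j] = \epsilon^{ij}_{\;\; k} L^k## via ##D(\hat{L}_i)_{jk} = -\epsilon_{ijk}## and verifies the homomorphism property numerically:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Adjoint ("spin-1") matrices: D(L^i)_{jk} = -eps_{ijk}
D = [-eps[i] for i in range(3)]

# Check Eq. 2: D([L^i, L^j]) = [D(L^i), D(L^j)], i.e.
# sum_k eps_{ijk} D_k  ==  D_i D_j - D_j D_i  for all i, j.
for i in range(3):
    for j in range(3):
        lhs = sum(eps[i, j, k] * D[k] for k in range(3))
        rhs = D[i] @ D[j] - D[j] @ D[i]
        assert np.allclose(lhs, rhs)

print("Eq. 2 holds for the adjoint matrices")
```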

Any thoughts on this?
 