Proving the Jacobi identity from invariance


Hi all,

In an informal and heuristic manner, I have heard that the "change" in something is its commutator with the generator, i.e. \delta A = [J,A] for an operator A, where the change is due to the Lorentz transformation U = \exp(\epsilon J) = 1 + \epsilon J + \ldots and J is one of the six generators of the Lorentz group (a rotation or a boost). That is, if we have an operator \phi\ :\ G\to G, where G is the vector space spanned by the six generators J_i, K_i of the Lorentz group (i.e. G is the vector representation of the Lorentz algebra), then
\delta (\phi(T)) = (\delta\phi) (T) + \phi (\delta T)
so, using the above definition of "change",
[J,\phi(T)] = (\delta\phi) (T) + \phi ([J,T]).

We can then define \phi to be invariant by saying that \delta\phi = 0, and hence

[J,\phi(T)] = \phi([J,T]).

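As a quick sanity check of this invariance condition, here is a numerical sketch in Python; the 4x4 generator matrices and the trivial choice \phi(T) = cT are just my own assumed example:

import numpy as np

# Assumed 4x4 vector-representation generators: rotation about z and boost along x.
J_z = np.zeros((4, 4))
J_z[1, 2], J_z[2, 1] = -1.0, 1.0
K_x = np.zeros((4, 4))
K_x[0, 1], K_x[1, 0] = 1.0, 1.0

def comm(A, B):
    # matrix commutator [A, B]
    return A @ B - B @ A

# A trivially invariant map phi(T) = c*T, so delta(phi) = 0 by assumption.
c = 2.5
phi = lambda T: c * T

# Check the invariance condition [J, phi(T)] = phi([J, T])
print(np.allclose(comm(J_z, phi(K_x)), phi(comm(J_z, K_x))))  # True
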
If one does the same for a Lie product \mu(X,Y) = [X,Y] then

\delta (\mu(Y,Z)) = (\delta\mu) (Y,Z) + \mu(\delta Y, Z) + \mu(Y,\delta Z)

We then say that \mu is invariant by setting \delta\mu = 0, and hence

[J,\mu(Y,Z)] =\mu([J,Y], Z) + \mu(Y,[J, Z])
or

[J,[Y,Z]] =[[J,Y], Z] + [Y,[J, Z]]

which is the Jacobi identity. This seems great, but I don't understand a few points.
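Before the questions, here is a quick sanity check that this final (Leibniz) form does hold for explicit matrix commutators; a Python sketch with an assumed choice of 4x4 vector-representation generators:

import numpy as np

# Assumed 4x4 vector-representation generators: one rotation and two boosts.
J_z = np.zeros((4, 4)); J_z[1, 2], J_z[2, 1] = -1.0, 1.0
K_x = np.zeros((4, 4)); K_x[0, 1], K_x[1, 0] = 1.0, 1.0
K_y = np.zeros((4, 4)); K_y[0, 2], K_y[2, 0] = 1.0, 1.0

def comm(A, B):
    # matrix commutator [A, B]
    return A @ B - B @ A

J, Y, Z = J_z, K_x, K_y
lhs = comm(J, comm(Y, Z))
rhs = comm(comm(J, Y), Z) + comm(Y, comm(J, Z))
print(np.allclose(lhs, rhs))  # True: the Jacobi identity in this Leibniz form
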

1. I believe the Lie product (commutator) enters as follows: if we have an operator A on the vector space the Lorentz group acts on (e.g. Minkowski space), it must change as
A\to A' = UAU^{-1} = A + \epsilon [J,A] + \ldots
correct? But in the above description with \phi and \mu, these are operators on the Lorentz algebra, which I thought would remain unchanged.
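(The first-order expansion itself does seem fine numerically; a quick Python sketch, with an assumed rotation generator and a random matrix A:)

import numpy as np
from scipy.linalg import expm

# Assumed rotation generator about z in the 4x4 vector representation.
J = np.zeros((4, 4))
J[1, 2], J[2, 1] = -1.0, 1.0

# Any operator A on Minkowski vectors will do for the check.
A = np.random.default_rng(0).normal(size=(4, 4))

eps = 1e-4
U = expm(eps * J)
exact = U @ A @ np.linalg.inv(U)
first_order = A + eps * (J @ A - A @ J)

# The residual should shrink like eps**2 as eps -> 0.
print(np.max(np.abs(exact - first_order)))  # of order 1e-8 for this eps
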

2. Is the expression
\delta (\mu(Y,Z)) = (\delta\mu) (Y,Z) + \mu(\delta Y, Z) + \mu(Y,\delta Z)
rigorous? What about terms like \mu(\delta Y, \delta Z)? Or are those second order?
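My own guess: if the variations are themselves first order, say \delta Y = \epsilon [J,Y] and \delta Z = \epsilon [J,Z], then
\mu(\delta Y, \delta Z) = \epsilon^2 [[J,Y],[J,Z]],
which would make such cross terms second order and hence negligible, while the three terms that are kept are all first order. Is that the right way to see it?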

Any help would be great,

Ianhoolihan
 


Can you please clarify the following?
What is T? Can you give me a specific example of \phi (T)?
Why is it that \delta acts on both \phi and its “argument” T? It looks like you defined \phi to be a Lie algebra representation or a Lie algebra-valued operator! So, if \exp (\epsilon \phi) is not the identity, what does it mean to set \delta \phi = 0?

The same goes for \mu; I can take it to be the linear map (\mu(X))(Y) = [X,Y], defined on the Lie algebra such that (representing the Lie algebra in itself)
\mu([X,Y]) = [\mu(X),\mu(Y)].
You can check that such map guarantees the Jacobi identity; expand both sides of
\mu([X,Y])(Z) = [\mu(X),\mu(Y)](Z).
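Explicitly, the left side is \mu([X,Y])(Z) = [[X,Y],Z], while the right side is [\mu(X),\mu(Y)](Z) = \mu(X)\mu(Y)(Z) - \mu(Y)\mu(X)(Z) = [X,[Y,Z]] - [Y,[X,Z]]; requiring the two to be equal is exactly the Jacobi identity.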
So, what does it mean to set \delta \mu = 0?
In the same way, we can define the action of \delta_{J} on a Lie algebra element X (or on a Lie algebra-valued function f(x;X)) by
\delta_{J}X = [J,X].
This means that \delta_{J} acts as a derivation, i.e., it guarantees the Jacobi identity. This is because Lie brackets are derivations;
\delta_{J}[X,Y] = \delta_{J}(XY) - \delta_{J}(YX),
This gives the Jacobi identity:
\delta_{J}[X,Y] = [J,XY] - [J,YX] = [X,\delta_{J}Y] + [\delta_{J}X, Y].
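To spell out the middle step: [J,XY] = [J,X]Y + X[J,Y] and [J,YX] = [J,Y]X + Y[J,X], so their difference rearranges to ([J,X]Y - Y[J,X]) + (X[J,Y] - [J,Y]X) = [\delta_{J}X, Y] + [X, \delta_{J}Y].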

Sam
 


Thanks Sam.

samalkhaiat said:
Can you please clarify the following?
What is T?

T is a vector in the vector space of generators for the algebra. An example would be T=J_i, K_i in the Galilean algebra.

samalkhaiat said:
Can you give me a specific example of \phi (T)?

Say, the trivial one: \phi(T)=T or \phi(T) = c T for some constant c.

samalkhaiat said:
Why is it that \delta acts on both \phi and its “argument” T?

That is my question 2. I can only reason along the lines of the chain/product rule. A variation in an evaluated function must depend on the variation in the function, and the variation in the thing it is acting on. Formalising this is what I'm looking for.

samalkhaiat said:
It looks like you defined \phi to be a Lie algebra representation or a Lie algebra-valued operator! So, if \exp (\epsilon \phi) is not the identity, what does it mean to set \delta \phi = 0?

\exp (\epsilon \phi) is not the identity. I believe the point is what I made above --- under a given transformation of the vector space, both the vectors that \phi operates on and the 1-cochain \phi itself may depend on the transformation. In differential geometric language, this would be like saying that both the 1-form and vector bases transform, and so may the "coefficients". Setting \delta \phi = 0 means the coefficients do not change. I'm not even 50% sure myself, however.

samalkhaiat said:
The same goes for \mu; I can take it to be the linear map (\mu(X))(Y) = [X,Y], defined on the Lie algebra such that (representing the Lie algebra in itself)
\mu([X,Y]) = [\mu(X),\mu(Y)].
You can check that such map guarantees the Jacobi identity; expand both sides of
\mu([X,Y])(Z) = [\mu(X),\mu(Y)](Z).
So, what does it mean to set \delta \mu = 0?
In the same way, we can define the action of \delta_{J} on a Lie algebra element X (or on a Lie algebra-valued function f(x;X)) by
\delta_{J}X = [J,X].
This means that \delta_{J} acts as a derivation, i.e., it guarantees the Jacobi identity. This is because Lie brackets are derivations;
\delta_{J}[X,Y] = \delta_{J}(XY) - \delta_{J}(YX),
This gives the Jacobi identity:
\delta_{J}[X,Y] = [J,XY] - [J,YX] = [X,\delta_{J}Y] + [\delta_{J}X, Y].

Sam

I do not know what you mean by representing a Lie algebra in itself, but, as far as I am concerned, the Lie product \mu is a bilinear antisymmetric 2-cochain. Yours is a 1-cochain, it seems...?

Anyway, it turns out that I wasn't as clued up about Lie algebras etc as I thought. I still am not, but will look again at Kirillov soon. If you've got any comments on this question, however, they'd still be much appreciated.

Ianhoolihan
 