
Proving the Jacobi identity from invariance

  1. Apr 17, 2012 #1
    "Proving" the Jacobi identity from invariance

    Hi all,

    In an informal and heuristic manner, I have heard that the "change" in something is its commutator with the generator, i.e. [itex]\delta A =[J,A][/itex] for an operator [itex]A[/itex], where the change is due to the Lorentz transformation [itex]U = \exp(\epsilon J) = 1 + \epsilon J + \ldots[/itex] and [itex]J[/itex] is one of the six generators of the Lorentz group (rotation or boost). That is, if we have an operator [itex]\phi\ :\ G\to G[/itex], where [itex]G[/itex] is the vector space spanned by the six generators [itex]J_i,K_i[/itex] of the Lorentz group (i.e. [itex]G[/itex] is the carrier space of the adjoint representation of the Lorentz algebra), then
    [tex]\delta (\phi(T)) = \delta\phi (T) + \phi (\delta T)[/tex]
    so, using the above definition of "change"
    [tex][J,\phi(T)] = \delta\phi (T) + \phi ([J,T])[/tex].

    We can then define [itex]\phi[/itex] to be invariant by saying that [itex]\delta\phi = 0[/itex], and hence

    [tex][J,\phi(T)] = \phi([J,T])[/tex].

    If one does the same for a Lie product [itex]\mu(X,Y) = [X,Y][/itex] then

    [tex]\delta (\mu(Y,Z)) =(\delta\mu) (Y,Z) + \mu(\delta Y, Z) + \mu(Y,\delta Z)[/tex]

    We say that [itex]\mu[/itex] is invariant and set [itex]\delta\mu = 0[/itex] and hence

    [tex][J,\mu(Y,Z)] =\mu([J,Y], Z) + \mu(Y,[J, Z])[/tex]

    [tex][J,[Y,Z]] =[[J,Y], Z] + [Y,[J, Z]][/tex]

    which is the Jacobi identity. This seems great, but I don't understand a few points.
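    As a quick sanity check of my own (not a proof — just numerics, assuming numpy is available), the commutator of arbitrary matrices satisfies this identity exactly, since matrix multiplication is associative:

```python
import numpy as np

rng = np.random.default_rng(0)

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Three arbitrary 4x4 matrices standing in for J, Y, Z; the
# commutator in any associative algebra obeys the Jacobi identity.
J, Y, Z = (rng.standard_normal((4, 4)) for _ in range(3))

lhs = comm(J, comm(Y, Z))
rhs = comm(comm(J, Y), Z) + comm(Y, comm(J, Z))
print(np.allclose(lhs, rhs))  # True
```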

    1. I believe the Lie product commutator enters because, if we have an operator [itex]A[/itex] on the vectors that the Lorentz group acts on (e.g. Minkowski space), it must change as
    [tex]A\to A' = UAU^{-1} = A + \epsilon [J,A] + \ldots[/tex]
    correct? But in the above description with [itex]\phi[/itex] and [itex]\mu[/itex], these are operators on the Lorentz algebra itself, which I thought would remain unchanged.
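    That conjugation expansion itself is easy to check numerically (my own sketch, assuming numpy and scipy): the residual after keeping the first-order term is [itex]O(\epsilon^2)[/itex].

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
J, A = (rng.standard_normal((4, 4)) for _ in range(2))
eps = 1e-5

# U A U^{-1} with U = exp(eps J), versus the first-order expansion
exact = expm(eps * J) @ A @ expm(-eps * J)
first_order = A + eps * (J @ A - A @ J)

# The residual is the O(eps^2) tail of the expansion
err = np.linalg.norm(exact - first_order)
print(err < 1e-6)  # True
```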

    2. Is the expression
    [tex]\delta (\mu(Y,Z)) =(\delta\mu) (Y,Z) + \mu(\delta Y, Z) + \mu(Y,\delta Z)[/tex]
    rigorous? What about terms like [itex]\mu(\delta Y, \delta Z)[/itex]? Or are those second order?
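    One way I can see to probe this numerically (my own sketch, assuming numpy): with [itex]\delta Y = \epsilon [J,Y][/itex], the cross term [itex]\mu(\delta Y,\delta Z)[/itex] carries an explicit [itex]\epsilon^2[/itex], so its norm divided by [itex]\epsilon^2[/itex] should be constant as [itex]\epsilon\to 0[/itex].

```python
import numpy as np

rng = np.random.default_rng(2)

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

J, Y, Z = (rng.standard_normal((4, 4)) for _ in range(3))

ratios = []
for eps in (1e-2, 1e-3, 1e-4):
    dY, dZ = eps * comm(J, Y), eps * comm(J, Z)  # delta Y = eps [J, Y]
    # mu(dY, dZ) = [dY, dZ] scales as eps^2
    ratios.append(np.linalg.norm(comm(dY, dZ)) / eps**2)
print(np.allclose(ratios, ratios[0]))  # True
```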

    Any help would be great,

  3. Apr 22, 2012 #2



    Re: "Proving" the Jacobi identity from invariance

    Can you please clarify the following?
    What is [itex]T[/itex]? Can you give me a specific example of [itex]\phi (T)[/itex]?
    Why is it that [itex]\delta[/itex] acts on both [itex]\phi[/itex] and its “argument” [itex]T[/itex]? It looks like you defined [itex]\phi[/itex] to be a Lie algebra representation or a Lie algebra-valued operator! So, if [itex]\exp (\epsilon \phi)[/itex] is not the identity, what does it mean to set [itex]\delta \phi = 0[/itex]? The same goes for [itex]\mu[/itex]; I can take it to be the linear map [itex](\mu(X))(Y) = [X,Y][/itex], defined on the Lie algebra such that (representing the Lie algebra on itself, i.e. the adjoint representation)
    [tex]\mu([X,Y]) = [\mu(X),\mu(Y)].[/tex]
    You can check that such a map guarantees the Jacobi identity; expand both sides of
    [tex]\mu([X,Y])(Z) = [\mu(X),\mu(Y)](Z).[/tex]
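    Writing out that expansion explicitly (with [itex]\mu(X) = [X,\cdot\,][/itex]), the two sides are
    [tex]\mu([X,Y])(Z) = [[X,Y],Z], \qquad [\mu(X),\mu(Y)](Z) = [X,[Y,Z]] - [Y,[X,Z]],[/tex]
    and equating them gives
    [tex][[X,Y],Z] = [X,[Y,Z]] - [Y,[X,Z]],[/tex]
    which is a rearrangement of the Jacobi identity.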
    So, what does it mean to set [itex]\delta \mu = 0[/itex]?
    In the same way, we can define the action of [itex]\delta_{J}[/itex] on a Lie algebra element [itex]X[/itex] (or, on a Lie algebra-valued function [itex]f(x;X)[/itex]) by
    [tex]\delta_{J}X = [J,X].[/tex]
    This means that [itex]\delta_{J}[/itex] acts as a derivation, i.e., it obeys the Leibniz rule, which guarantees the Jacobi identity. This is because Lie brackets are derivations (here [itex]XY[/itex] is the ordinary product in the associative algebra):
    [tex]\delta_{J}[X,Y] = \delta_{J}(XY) - \delta_{J}(YX),[/tex]
    This gives the Jacobi identity:
    [tex]\delta_{J}[X,Y] = [J,XY] - [J,YX] = [X,\delta_{J}Y] + [\delta_{J}X, Y].[/tex]

  4. Apr 22, 2012 #3
    Re: "Proving" the Jacobi identity from invariance

    Thanks Sam.

    [itex]T[/itex] is a vector in the vector space of generators of the algebra. An example would be [itex]T=J_i[/itex] or [itex]K_i[/itex] in the Galilean algebra.

    Say, the trivial one: [itex]\phi(T)=T[/itex] or [itex]\phi(T) = c T[/itex] for some constant [itex]c[/itex].

    That is my question 2. I can only reason along the lines of the chain/product rule. A variation in an evaluated function must depend on the variation in the function, and the variation in the thing it is acting on. Formalising this is what I'm looking for.

    [itex]\exp (\epsilon \phi)[/itex] is not the identity. I believe the point is the one I made above --- under a given transformation of the vector space, both the vectors that [itex]\phi[/itex] operates on and the 1--cochain [itex]\phi[/itex] itself may depend on the transformation. In differential geometric language, this would be like saying that both the 1--form and vector bases transform, and so may the "coefficients". Setting [itex]\delta \phi = 0[/itex] means the coefficients do not change. I'm not even 50% sure myself, however.

    I do not know what you mean by representing a Lie algebra in itself, but, as far as I am concerned, the Lie product [itex]\mu[/itex] is a bilinear antisymmetric 2--cochain. Yours is a 1--cochain, it seems...?

    Anyway, it turns out that I wasn't as clued up about Lie algebras etc as I thought. I still am not, but will look again at Kirillov soon. If you've got any comments on this question, however, they'd still be much appreciated.
