
Simple question on Lie algebras

  1. Jul 10, 2008 #1
    Hey there!

    I have a simple question concerning infinitesimal generators: in order to derive properties of the group, one always linearizes the group element ([itex]g = \exp(\mathrm{i}\theta^a T^a) = 1 + \mathrm{i}\theta^a T^a + \dots[/itex]); in this way one can show, say, that the antisymmetric matrices (where both indices transform in the same representation, given by the generators [itex]T^a[/itex]) form an invariant subspace by computing

    [tex]\varphi_{ij} \to \varphi'_{ij} = \varphi_{ij} + \mathrm{i}\,{(\theta^a T^a)_i}^k \varphi_{kj} + \mathrm{i}\,{(\theta^a T^a)_j}^\ell \varphi_{i\ell} + \dots[/tex]

    Why is one allowed to drop all terms of order [itex]\theta^2[/itex] and higher during the computation? Wouldn't that mean that (in the above case) antisymmetry is preserved only to first order in [itex]\theta[/itex]? How do I know that antisymmetry is preserved at higher orders?
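    One way to see (numerically, at least) that antisymmetry survives at all orders: the finite transformation acts as [itex]\varphi' = R\varphi R^T[/itex] with [itex]R = e^{\theta T}[/itex], and [itex](R\varphi R^T)^T = R\varphi^T R^T = -R\varphi R^T[/itex] exactly. A quick sketch with a hand-picked SO(3) generator (my own example, not from the thread):

    ```python
    import numpy as np

    def expm(M, terms=30):
        """Matrix exponential via truncated Taylor series (fine for small matrices)."""
        out = np.eye(M.shape[0])
        term = np.eye(M.shape[0])
        for k in range(1, terms):
            term = term @ M / k
            out = out + term
        return out

    # A generator of SO(3): a real antisymmetric matrix (hand-picked example).
    theta = 0.7  # a finite, NOT infinitesimal, parameter
    T = np.array([[0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    R = expm(theta * T)          # full group element, all orders in theta

    # An antisymmetric rank-2 tensor phi_{ij}; one R acts on each index.
    phi = np.array([[0.0, 2.0, -1.0],
                    [-2.0, 0.0, 3.0],
                    [1.0, -3.0, 0.0]])
    phi_prime = R @ phi @ R.T    # phi'_{ij} = R_i^k R_j^l phi_{kl}

    # Antisymmetry survives to ALL orders in theta, not just the first:
    print(np.max(np.abs(phi_prime + phi_prime.T)))  # numerically zero
    ```
    
    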
  3. Jul 10, 2008 #2
    Well, you need a Lie algebra textbook. I don't remember the name of the theorem. The basic point is that the algebra of generators really tells you everything you need to know about the (identity-connected part of the) group. The main reason is that when you compute the product of two group elements, each represented via the exponential map in terms of generators, the product can also be represented as the exponential of a combination of generators. This is said terribly, but I hope it gives you a rough idea.

    Another way: if [tex]A=e^{X}[/tex] and [tex]B=e^{Y}[/tex], then [tex]\log(AB)=X+Y+\frac{1}{2}[X,Y]+\cdots[/tex], where the dots can again be computed in terms of commutators of [tex]X[/tex] and [tex]Y[/tex]. Applied to the representation of a Lie group in terms of exponentials of its Lie algebra, you can surely see immediately why "the generators won't miss any element of the (identity-connected part of the) group". I'm sure someone knows the name of this theorem; I could not find it on the wiki easily, so I gave up.
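    The identity can be checked numerically to the order shown: for small [tex]X[/tex] and [tex]Y[/tex], [tex]\log(e^X e^Y)[/tex] agrees with [tex]X+Y+\frac{1}{2}[X,Y][/tex] up to third-order terms. A minimal sketch (my own example; the series-based expm/logm helpers are only valid near the identity):

    ```python
    import numpy as np

    def expm(M, terms=30):
        """Matrix exponential via truncated Taylor series."""
        out = np.eye(M.shape[0]); term = np.eye(M.shape[0])
        for k in range(1, terms):
            term = term @ M / k
            out = out + term
        return out

    def logm_near_identity(P, terms=30):
        """log(P) via the Mercator series, valid when P is close to the identity."""
        M = P - np.eye(P.shape[0])
        out = np.zeros_like(M); power = np.eye(P.shape[0])
        for k in range(1, terms):
            power = power @ M
            out = out + ((-1) ** (k + 1)) * power / k
        return out

    t = 1e-3  # small parameter, so higher BCH terms are negligible
    X = t * np.array([[0.0, 1.0], [0.0, 0.0]])
    Y = t * np.array([[0.0, 0.0], [1.0, 0.0]])

    Z = logm_near_identity(expm(X) @ expm(Y))
    bch2 = X + Y + 0.5 * (X @ Y - Y @ X)   # BCH truncated at the commutator term

    # The mismatch is O(t^3): exactly the higher nested commutators BCH predicts.
    print(np.max(np.abs(Z - bch2)))
    ```
    
    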
  4. Jul 10, 2008 #3
    You mean the Baker-Campbell-Hausdorff identity, right?
  5. Jul 10, 2008 #4
    Yes, that's the name :smile:

    That is the reason we can get away with only a first-order expansion, i.e. infinitesimal rotations, in Lie groups to calculate full rotations: along the way we will not "forget" anything beyond what is contained in the generators' commutators. Does it make sense?
  6. Jul 10, 2008 #5
    Because any property satisfied to first order in the generators will hold for the exponential. This is a general property of all one-parameter subgroups such as those you've described. Look at the exponential this way:
    [tex]e^{\lambda T}=\lim_{N\rightarrow\infty}\left(1+\frac{\lambda}{N}T\right)^{N}=\lim_{N\rightarrow\infty}\left(1+\delta\lambda\, T\right)^{N}[/tex]

    This makes explicit the fact that advancing along the orbit of the group is given by [tex]N[/tex] actions of [tex](1+\delta\lambda\, T)[/tex]. So an equation valid for this linear operator remains valid after [tex]N[/tex] applications of the operator, that is, it remains valid for the exponential of the operator [tex]T[/tex]. (You can also see that this is ONLY a property of exponentiated operators.)
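    This limit is easy to verify numerically for SO(2), where [tex]e^{\lambda T}[/tex] is known in closed form. A small sketch (my own example, not from the thread):

    ```python
    import numpy as np

    # The SO(2) generator as a real antisymmetric matrix; T^2 = -1.
    T = np.array([[0.0, 1.0], [-1.0, 0.0]])
    lam = 1.0
    exact = np.array([[np.cos(lam), np.sin(lam)],
                      [-np.sin(lam), np.cos(lam)]])   # e^{lam T} in closed form

    N = 1_000_000
    step = np.eye(2) + (lam / N) * T                  # the "infinitesimal" element 1 + dlam T
    approx = np.linalg.matrix_power(step, N)          # N applications of the step

    print(np.max(np.abs(approx - exact)))  # shrinks like 1/N
    ```
    
    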
    Last edited: Jul 10, 2008
  7. Jul 11, 2008 #6
    Okay, thanks to you all! My question is answered! :)
  8. Jul 28, 2008 #7
    I am trying to obtain the same result using only the definition of the Taylor series, stating (but not proving) the generator of [tex] \mbox{SO}(2)[/tex].

    Begin with the stipulation that we are dealing with elements of the identity-connected component of a Lie group, so that 1 always refers to the identity.

    [tex] g(\theta)=\sum_{n=0}^\infty \frac{1}{n!}\partial^n_{\theta} g(a)(\theta-a)^n[/tex]

    [tex] g(\theta)= g(a) + \sum_{n=1}^\infty \frac{1}{n!}\partial^n_{\theta} g(a) (\theta-a)^n[/tex]

    So, choosing an infinitesimal change about [tex]a=0[/tex],

    [tex] g(\delta \theta)= g(0) + \sum_{n=1}^\infty \frac{1}{n!}\,\partial^n_{\theta} g(0)\,(\delta \theta)^n[/tex]

    By convention, [tex]g(0)=I[/tex] refers to the identity, as per the above stipulation:

    [tex] g(\delta \theta)= I + \sum_{n=1}^\infty \frac{1}{n!}\, \partial^n_{\theta} g(0)\, (\delta \theta)^n[/tex]

    Now, for simplicity, dealing specifically with [tex] \mbox{SO}(2)[/tex], we can write any [tex]g \in \mbox{SO}(2)[/tex] as:

    [tex]g(\theta) = e^{i \theta \tau}[/tex] where [tex]\tau = \sigma_y[/tex]

    Or, expanded as a series,

    [tex]g(\theta) = e^{i \theta \tau}=g(0) + \sum_{n=1}^\infty \frac{1}{n!}(i \theta \tau)^n[/tex]

    So, for an infinitesimal change,

    [tex]g(\delta \theta) = e^{i \delta \theta \tau}=g(0) + \sum_{n=1}^\infty \frac{1}{n!}(i \delta\theta \tau)^n[/tex]

    Equating this with the expression from the derivation above and matching terms (say at [tex]n=1[/tex]), we get

    [tex]\tau = -i \partial_\theta g(0)[/tex]

    Is this oh so very very wrong?
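    For what it's worth, the final formula can be checked numerically: with [tex]g(\theta) = e^{i\theta\sigma_y}[/tex] written in closed form as the real rotation matrix, a finite-difference derivative at [tex]\theta = 0[/tex] recovers [tex]\tau = \sigma_y[/tex]. A quick sketch (my own check, not from the thread):

    ```python
    import numpy as np

    def g(theta):
        """The SO(2) element g(theta) = exp(i theta sigma_y) in closed form."""
        return np.array([[np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]], dtype=complex)

    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

    h = 1e-6
    # Central finite difference for dg/dtheta at theta = 0, then tau = -i dg/dtheta|_0.
    tau = -1.0j * (g(h) - g(-h)) / (2 * h)

    print(np.max(np.abs(tau - sigma_y)))  # small: the O(h^2) finite-difference error
    ```
    
    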
  9. Jul 29, 2008 #8
    Just so you know, the reason I am asking is that the n! is causing grief in the following form given for the element [tex]g(\delta \theta)[/tex]:

    [tex]g(\delta\theta) = 1 + i \sum^{n}_{i=1}\tau^i \delta\theta^i[/tex]

    The issues I have with this are:

    a) He just stated that this is the form that [tex]g(\delta\theta)[/tex] takes.
    b) Is there not an n! missing?
    c) Should the i [tex](=\sqrt{-1})[/tex] not also be raised to the power i (the index)?

    Or can somebody justify this form of [tex]g(\delta \theta)[/tex]?

    I mean, if I used this and went down the same route as above to find the form of [tex]\tau[/tex], I wouldn't get the same answer: it would have an n! in it, since the Taylor series has one.
    Last edited: Jul 29, 2008
  10. Jul 29, 2008 #9
    All of this is answered by dropping all second-order terms. A conclusion I didn't want to arrive at.
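    A side note on the quoted form: the superscripts on [tex]\tau[/tex] and [tex]\delta\theta[/tex] are usually labels running over the n independent generators (summed over), not powers, which is why no n! appears: only the first-order term of the exponential survives once [tex]O(\delta\theta^2)[/tex] is dropped. A small numerical sketch of that last point (my own example, not from the thread), comparing the exact SO(2) element with its first-order truncation:

    ```python
    import numpy as np

    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

    def g_exact(dtheta):
        # e^{i dtheta sigma_y} in closed form (sigma_y squares to the identity)
        return np.cos(dtheta) * np.eye(2) + 1.0j * np.sin(dtheta) * sigma_y

    for dtheta in (1e-1, 1e-2, 1e-3):
        first_order = np.eye(2) + 1.0j * dtheta * sigma_y   # the stated form, no n!
        err = np.max(np.abs(g_exact(dtheta) - first_order))
        print(dtheta, err)   # err shrinks like dtheta**2 / 2
    ```
    
    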