
SU(2) generators

  1. Jul 28, 2017 #1
    Hello! I am reading some Lie Algebra and at a point the author says that for a vector with 3 cartesian components ##V_i## i =1,2,3 the commutation relations with the generators of rotation are: ##[J_i,V_j]=i\epsilon_{ijk}V_k##. Can someone explain this to me? I am confused as ##V_j## is a number while ##J_i## is a 3x3 matrix. So the commutator shouldn't be 0? Thank you!
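    The relation in question can be checked numerically. A quick sketch (not from the book; it assumes numpy, uses the spin-1 matrices ##(J_i)_{jk} = -i\epsilon_{ijk}## as a concrete choice of generators, and takes ##V = J## itself, since the angular momentum is a vector operator):

    ```python
    import numpy as np

    # Levi-Civita symbol
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0

    # Spin-1 rotation generators: (J_i)_{jk} = -i * eps_{ijk}
    J = [-1j * eps[i] for i in range(3)]

    # Take V = J itself; then [J_i, V_j] = i * eps_{ijk} V_k should hold.
    for i in range(3):
        for j in range(3):
            comm = J[i] @ J[j] - J[j] @ J[i]
            rhs = sum(1j * eps[i, j, k] * J[k] for k in range(3))
            assert np.allclose(comm, rhs)
    print("vector-operator commutation relations verified")
    ```

    The point the check illustrates: both sides of the commutator are matrices of the same size, so the bracket is not trivially zero.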
  3. Jul 28, 2017 #2


    Science Advisor, Gold Member

    Before you define a matrix representation, the J's have a Lie product (written as a commutator) which will then, within a representation, correspond to an actual commutator. You can represent these generators as 2x2 complex matrices, as 3x3 matrices, as larger matrices in a tensor representation, or as elements of some other algebra such as a Clifford algebra. The indices, however, indicate which matrix and hence which generator in the Lie algebra is meant.

    I understand your confusion as to the Cartesian components of a vector. It seems to me that (assuming you're transcribing correctly) the author is defining the representation of the generators on vectors by defining a specific action on components. This is not unprecedented. You can for example define a position vector as a function of coordinates [itex] \mathbf{r} = x\mathbf{i} + y\mathbf{j}+z\mathbf{k}[/itex] and then, rather than defining rotations as actions on the basis vectors, define them as actions on the components.
    [tex] iJ_1 = y\frac{\partial}{\partial z} - z\frac{\partial}{ \partial y}[/tex]
    and likewise for cyclic permutations. In this way, though the variables are "just numbers" in one sense they are, as variables, functions of the set of independent variables and as such can be acted upon non-trivially by operators.
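    This differential-operator realization can be checked symbolically. A small sketch (assuming sympy; the operator definitions follow the convention above, i.e. ##iJ_1 = y\,\partial_z - z\,\partial_y## and cyclic permutations):

    ```python
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = sp.Function('f')(x, y, z)

    # J_k = -i * (cyclic first-order operator), so that iJ_1 = y d/dz - z d/dy
    def J1(g): return -sp.I * (y * sp.diff(g, z) - z * sp.diff(g, y))
    def J2(g): return -sp.I * (z * sp.diff(g, x) - x * sp.diff(g, z))
    def J3(g): return -sp.I * (x * sp.diff(g, y) - y * sp.diff(g, x))

    # The commutator [J1, J2] applied to an arbitrary f should equal i*J3 f:
    # second derivatives cancel, leaving a first-order operator.
    residual = sp.simplify(J1(J2(f)) - J2(J1(f)) - sp.I * J3(f))
    assert residual == 0
    print("[J1, J2] = i J3 as differential operators")
    ```

    So even though the components are "just numbers", the operators act non-trivially on them as functions of the coordinates.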

    However, this is an awkward way to go about it for general vector components, and my first instinct is to ask whether you possibly misunderstood the author. A close second is the notion that the author and/or textbook editor has been unforgivably sloppy. Would you please give the reference to the author and text where this occurs?
  4. Jul 28, 2017 #3
    Thank you for your reply. The book is called Lie Groups and Lie Algebras - A Physicist's Perspective by Adam M. Bincer and this appears in chapter 4 (page 37). However, reading further, I think that by vector he means a vector of operators: the angular momentum vector is a vector ##(J_1,J_2,J_3)## whose components ##J_i## are not numbers but matrices. In this way what he describes makes sense, as both ##J_i## and ##V_i## are matrices of the same dimensionality.
  5. Jul 28, 2017 #4


    Staff: Mentor

    As you seem to know, may I take the opportunity and ask what is meant by "generator"? We have a group ##SU(2)\,,## its adjoint representation ##\operatorname{Ad}(SU(2)) \subseteq GL(\mathfrak{su}(2))##, the Lie algebras ##\mathfrak{su}(2)## and its adjoint representation ##\operatorname{ad}(\mathfrak{su}(2)) \subseteq \mathfrak{gl}(\mathfrak{su}(2))##. Which of these elements are meant by "generators"?

    I assumed the ##V_i## in the OP to be the Pauli matrices or their ##i##-multiples. But what is the rotation ##J_k## then? As the commutator relation suggests, they all have to be either in ##\mathfrak{su}(2)## or ##\operatorname{ad}(\mathfrak{su}(2))##. But those are three-dimensional, which makes it somewhat unusual to name all ##V_i\; , \;J_k## as elements of the same algebra.
  6. Jul 28, 2017 #5


    Science Advisor, Gold Member

    That makes sense but still sounds a bit sloppy mathematically (which is a bad habit of physicists complementary to the tendency of mathematicians to be too pedantic at times.)

    I would red flag the author for calling them components, even in the stated usage. My suggestion is to call them "entries", as in entries in a row matrix or column matrix. I've seen the textbook but not read it.

    I would add that the three angular momentum generators do form a vector of sorts, but are actually a "bi-vector" = rank 2 antisymmetric tensor, which can map to a vector in 3 dimensions under something called Hodge duality. This is what they are in terms of how they transform under the general linear group, in which one naturally embeds the orthogonal group of rotations (and reflections) via the principal vector representation.

    The vectors [itex]\mathbf{V}_k[/itex] will transform as true vectors, so they are different elements of the matrix (or other) representation algebra. Otherwise the author would have used the same label [itex]\mathbf{V}=\mathbf{J}[/itex]. But one needs to know this beforehand to better understand the author's meaning, so that's the author's "bad", since this business of different representations is something that comes later.
  7. Jul 28, 2017 #6


    Science Advisor, Gold Member

    The elements of a Lie algebra are generators for the elements of a Lie group. The process is the exponential map within any algebraic representation.
    For the example of rotations in the plane (about the origin), you have the generator:
    [tex] \eta = \left( \begin{array}{cc} 0 & -1\\ 1 & 0\end{array}\right)[/tex]
    where we represent position vectors as 2x1 matrices (column vectors).
    This is the single basis element of the [itex]\mathfrak{so}(2)\simeq \mathfrak{u}(1)[/itex] Lie algebra. It generates the Lie group [itex]SO(2) \simeq U(1)[/itex] via the exponential map:
    [tex] R_\theta = \left(\begin{array}{cc} \cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{array}\right) =
    \exp\left( \theta \cdot \eta \right)[/tex]
    You will also note that algebraically [itex]\eta[/itex] behaves just like the imaginary unit [itex]i[/itex] as it squares to minus the multiplicative identity in its algebra.
    This isomorphism is the [itex] SO(2)\simeq U(1)[/itex] isomorphism.
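    For readers who want to see this concretely, here is a small numerical check (assuming numpy; the truncated power series stands in for the matrix exponential):

    ```python
    import math
    import numpy as np

    eta = np.array([[0.0, -1.0], [1.0, 0.0]])

    # eta behaves like the imaginary unit: eta^2 = -(identity)
    assert np.allclose(eta @ eta, -np.eye(2))

    def mat_exp(A, terms=25):
        """Matrix exponential via its defining (truncated) power series."""
        return sum(np.linalg.matrix_power(A, n) / math.factorial(n)
                   for n in range(terms))

    theta = 0.7
    R = mat_exp(theta * eta)
    R_expected = np.array([[math.cos(theta), -math.sin(theta)],
                           [math.sin(theta),  math.cos(theta)]])
    assert np.allclose(R, R_expected)
    print("exp(theta * eta) reproduces the rotation matrix R_theta")
    ```

    The two assertions together are exactly the two claims above: eta squares to minus the identity, and its exponential generates the rotation group.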

    But in their definitions [itex]SO(2)[/itex] and [itex]\mathfrak{so}(2)[/itex] are not matrices or complex numbers; they are purely algebraic objects defined wholly by themselves and relative to each other. The complex number or 2x2 matrix or other formal representation is a RE-presentation, a homomorphism of the Lie group/Lie algebra into another algebra.
  8. Jul 28, 2017 #7


    Staff: Mentor

    So let me make sure I understand. Generators are essentially the vectors of the corresponding Lie algebra, or geometrically the tangents of curves through ##1##. Are only the vectors of a certain basis called generators, or any vector?

    And what is meant by "generator of a representation", and especially of the adjoint representation?
    Are they ##\operatorname{Ad}_g \in GL(\mathfrak{g})## or ##\operatorname{ad}X \in \mathfrak{gl}(\mathfrak{g})\,##? Consequently it would have to be the latter as the former isn't a Lie algebra (although embedded in one). However, I sometimes got the impression it is ##\operatorname{Ad}_g\,##.

    And finally, where do the ##J_k## and ##V_i## in the example above live? I don't expect the commutator to be the one in the group.
  9. Jul 28, 2017 #8


    Science Advisor, Gold Member

    "Generator" can refer to any element of the Lie algebra, or to the image of that element under the representation map. Hence when one speaks of the "generator of a representation" of a Lie group, one is referring to the "representation of a generator in its Lie algebra".

    In my so(2)/SO(2) example I use equality in defining the matrices, but really the rotations are the actions on the points in the space, and these matrices are just the representation based on a certain coordinate system. So my [itex] \eta[/itex] is the "generator of the representation" of the rotation group since it is the "representation of the generator" in the Lie algebra.

    In your example using the adjoint representation one is mapping from the Lie group and Lie algebra into the Operator algebra on the Lie algebra:
    [tex] \operatorname{Ad}_G \subseteq GL(\mathfrak{g}) \subset \operatorname{Op}(\mathfrak{g})[/tex]
    We need the operator algebra with both multiplication and addition to exponentiate the Lie algebra elements (generators) since that is defined via the power series expansion.

    Note that the representation of the Lie algebra within an associative algebra maps the Lie product to the commutator. The associative algebra is itself not a Lie algebra, but it becomes one with the definition of the commutator product. So let's take the SO(3)/so(3) example, and I'm going to use upper/lower case instead of frak for the Lie algebra to save me time typing.

    BEGIN DIGRESSION: I should add something from earlier. In the pure math the Lie algebra elements are the "generators", while in physics applications, especially QM, the generators are the corresponding observables. In the classical canonical algebra (Lie product equals the Poisson bracket) this is not a distinction. However, in QM the "generators" are the Hermitian observables, which are identified with their anti-hermitian counterparts (obtained by multiplication by ##i##), which are the actual Lie algebra elements. Thus QM Physicist's generator * i = Mathematician's generator in the Lie algebra (modulo a possible +/-1 depending on convention). END DIGRESSION

    SO(3) has Lie algebra so(3) which is a 3 dimensional vector space with Lie product we'll denote as [itex]\triangle[/itex]. Now this means we can say [itex]so(3) = (\mathbb{R}^3,\triangle)[/itex]
    The underlying space being [itex]\mathbb{R}^3[/itex], and we have the full matrix/operator algebra over it, [itex]\mathbb{R}(3)=Op(\mathbb{R}^3)[/itex]. Taking the invertible elements within [itex]\mathbb{R}(3)[/itex] defines [itex]GL(3;\mathbb{R})[/itex]. Note that taking the whole matrix algebra as a linear space with the commutator product then defines the Lie algebra [itex]gl(3;\mathbb{R})[/itex].

    The adjoint action of an element [itex]\Gamma[/itex] of the Lie algebra is the linear mapping [itex]\Gamma : \Gamma' \mapsto (\Gamma\triangle \Gamma')[/itex] as a linear mapping it will correspond to a matrix in [itex]\mathbb{R}(3)[/itex]. Let's call that matrix [itex]Ad(\Gamma)[/itex]. It is then these matrices whose commutator product will correspond to the Lie product:
    [itex] [Ad(\Gamma_1),Ad(\Gamma_2)] = Ad(\Gamma_1)\circ Ad( \Gamma_2) -Ad(\Gamma_2)\circ Ad(\Gamma_1) [/itex] will equal [itex]Ad(\Gamma_1\triangle\Gamma_2)[/itex].
    This is saying the adjoint map is indeed a homomorphism of the Lie algebra and thus is a representation.
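    This homomorphism property is easy to check numerically for so(3), where the Lie product can be realized as the cross product on ##\mathbb{R}^3##. A sketch (assuming numpy; the "hat" map below is the standard matrix form of the adjoint action, not notation from this thread):

    ```python
    import numpy as np

    def hat(v):
        """Adjoint matrix of v in so(3) = (R^3, cross product): hat(v) @ w = v x w."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=3), rng.normal(size=3)

    # Homomorphism property: [Ad(a), Ad(b)] = Ad(a triangle b),
    # with the Lie product "triangle" realized here as the cross product.
    lhs = hat(a) @ hat(b) - hat(b) @ hat(a)
    rhs = hat(np.cross(a, b))
    assert np.allclose(lhs, rhs)
    print("adjoint map sends the Lie product to the matrix commutator")
    ```

    This is exactly the statement that the matrix commutator of the Ad-matrices corresponds to the Lie product of the underlying elements.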

    Then the corresponding adjoint mapping of the 1-parameter subgroup elements generated by [itex]\Gamma[/itex] are the elements (parameterized by [itex]t[/itex]):
    [itex] Ad(g(t)) = \exp(t\cdot Ad(\Gamma)) =\boldsymbol{1} + \frac{t}{1!} Ad(\Gamma) + \frac{t^2}{2!}Ad(\Gamma)^2 + \cdots[/itex]

    Taking a basis of the Lie algebra [itex]\Gamma_1, \Gamma_2, \Gamma_3[/itex] (such as [itex]iJ_1, iJ_2, iJ_3[/itex]) you can then generate the whole SO(3) adjoint representation via:
    [itex] Ad( g(t_1,t_2,t_3) ) = \exp\left( t_1 Ad(\Gamma_1) + t_2 Ad(\Gamma_2) + t_3 Ad(\Gamma_3) \right) = \exp(\sum_k t_k Ad(\Gamma_k))[/itex]

    In short (the adjoint rep of) each group element is the exponential of some (not unique)(adj rep of) element of the Lie algebra (within the associative representation algebra).
  10. Jul 28, 2017 #9


    Science Advisor, Gold Member

    I'll add: I'm trying to be careful here to distinguish the Lie product from the commutator of the representation product, as these are technically different objects in different algebras. However, as you can see, this gets a bit awkward, which is why we (especially physicists, who can get away with it more often) typically simplify the exposition for specific cases by straight out identifying the groups and algebras with one of their representations. Then the commutator becomes the Lie product itself instead of just a representation. We often trade between being concise and being precise.
  11. Jul 28, 2017 #10


    Staff: Mentor

    @Silviu Sorry if you should interpret this as a thread hijack. But this difference in language between physics and mathematics, especially in Lie theory, is so fundamental that I think your confusion might in part be caused by it. (None of my books on Lie theory ever uses the term generator, I think.)

    @jambaugh Thanks for the explanation, esp. the digression part. This generator terminology drives me crazy. I never know what is meant. And then physicists usually talk about the adjoint representation but rarely tell which one! Each time I read questions, even the purely algebraic ones, the hardest part is not to answer the question, but to figure out what is meant.
    ... which means ##\mathfrak{g}## and ##\mathfrak{gl}(V)##. The latter of which I suppose is what you mean by the operator algebra, and the algebra where the "generator of a representation" lives.
    ... where I assume the associative structure isn't important here, only to define the Lie multiplication.
    Yes. However, this isn't what causes my confusion. It is the mixture of "representation" and/or "adjoint". ##\;G \longrightarrow GL(V)## is completely different from ##\mathfrak{g} \longrightarrow \mathfrak{gl}(V)##, and ##\operatorname{Ad}## different from ##\operatorname{ad}##. The fact they are related isn't helpful. I know they are related via the exponential map and the differentiation process, but it's usually hard to tell on which side the discussion takes place. Of course it's important to distinguish the various multiplications, but even more important to distinguish the manifold from its tangent space and which one's representation is meant (IMO). I always thought physicists mean the adjoint representation of the group, but the way you explained "generator" I now think it's the algebra ##\operatorname{ad}(\mathfrak{g})##. This would make sense, as the representations of, e.g., ##\mathfrak{su}(2)## are well-known, and I suspect them to be the cause of half-spin particles.

    I wish there was a simple dictionary which translates the usage of "infinitesimal" (##\mathfrak{g}## or ##\mathfrak{gl}(\mathfrak{g})##), "generator" (groups have generators, vectors spaces have vectors), "representation" (##GL(V)## or ##\mathfrak{gl}(V)##) and "adjoint" (##\operatorname{Ad}## or ##\operatorname{ad}##) by simply naming the spaces. There are only a few: ##G\; , \;V\; , \;GL(V)\; , \;\mathfrak{g}\; , \;GL(\mathfrak{g})\; , \;\mathfrak{gl}(V)\, , \,\mathfrak{gl}(\mathfrak{g})##. The math is easy compared to the ambiguity of language here.

    I bookmarked this page as it is apparently the closest I'm able to get to such a dictionary, although especially the usage of adjoint is still a mystery to me, since both operate on the same vector space, and you additionally spoke of the inclusions ##\operatorname{Ad}G \subseteq GL(\mathfrak{g}) \subseteq \mathfrak{gl}(\mathfrak{g})## and ended up with a "generator", although the group representation is a group homomorphism, and not a Lie algebra homomorphism, as "generator" or "infinitesimal" would suggest.
  12. Jul 28, 2017 #11


    Staff: Mentor

    And back to the example of ##SU(2)##:

    For a matrix
    $$g = \begin{bmatrix} z &-\bar{w} \\ w & \bar{z} \end{bmatrix} \in SU(2,\mathbb{C})$$
    (i.e. ##|z|^2+|w|^2=1##) and with the Pauli matrices
    $$\sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\, , \,\sigma_2 = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\, , \,\sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$
    we can define a basis ##\mathfrak{B} = \{\mathfrak{e}_1,\mathfrak{e}_2,\mathfrak{e}_3\}## of ##\mathfrak{su}_\mathbb{R}(2,\mathbb{C})## by ##\mathfrak{e}_k= i\cdot \sigma_k## and write
    $$\operatorname{Ad}(g) = \begin{bmatrix}
    \mathfrak{Re}(z^2-w^2)&\mathfrak{Im}(z^2-w^2)&2\; \mathfrak{Re}(w \cdot \bar{z})\\[6pt]
    -\, \mathfrak{Im}(z^2+w^2)&\mathfrak{Re}(z^2+w^2)&2\; \mathfrak{Im}(w \cdot \bar{z})\\[6pt]
    -2\; \mathfrak{Re}(w \cdot z)&-2\; \mathfrak{Im}(w \cdot z)& |z|^2-|w|^2
    \end{bmatrix}$$

    In this special case, the matrices of ##\mathfrak{B}## belong to the group ##SU(2,\mathbb{C})## as well as to its Lie algebra ##\mathfrak{su}(2,\mathbb{C})##, which is not true anymore for the Gell-Mann matrices and ##SU(3,\mathbb{C})##. If we scale ##\mathfrak{B}## by setting
    $$U = \frac{1}{2} \mathfrak{e}_1\, , \,V = \frac{1}{2} \mathfrak{e}_2\, , \,W = -\frac{1}{2} \mathfrak{e}_3$$
    we get an especially easy to remember Lie multiplication
    $$[U,V]=W\, , \,[V,W]=U\, , \,[W,U]=V$$
    because all products are achieved by a cyclic shift to the left, and this defines the operation of ##\operatorname{ad}##.
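    These bracket relations can be checked directly. A quick sketch (assuming numpy; ##U, V, W## are the rescaled ##i\sigma_k## from the post, including the minus sign on ##W##):

    ```python
    import numpy as np

    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    # Basis e_k = i * sigma_k of su(2), rescaled as in the post:
    # U = e_1/2, V = e_2/2, W = -e_3/2
    U, V, W = 0.5j * s1, 0.5j * s2, -0.5j * s3

    def comm(A, B):
        return A @ B - B @ A

    assert np.allclose(comm(U, V), W)
    assert np.allclose(comm(V, W), U)
    assert np.allclose(comm(W, U), V)
    print("[U,V]=W, [V,W]=U, [W,U]=V verified")
    ```

    Note the minus sign in ##W = -\frac{1}{2}\mathfrak{e}_3## is what makes all three brackets come out with coefficient +1.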

    With this mathematical point of view, what are the ##J_k## and the ##V_i##?
    I assume ##\{U,V,W\} = \{V_1,V_2,V_3\}## but what are the rotations ##J## here?
  13. Jul 28, 2017 #12


    Science Advisor, Gold Member

    With respect to [itex]Ad[/itex] vs [itex]ad[/itex]: yes, I was describing the adjoint representation of the Lie algebra and its induced representation on the Lie group. There is, of course, the adjoint representation of the Lie group itself, wherein [itex] g: h \mapsto g^{-1}h g[/itex], which by itself does not exist within a linear algebra. It starts as a point mapping on the Lie group as a manifold and, being a continuous mapping, will be an element of the diffeomorphism group of the manifold as a Lie group. One then has the Lie algebra as a tangent space and the induced representation mapping elements of the Lie algebra to a vector field over the Lie group, thus identifying a given Lie algebra element with a Lie derivative on the manifold. One can generally tell from context which adjoint representation is meant, in that if there is any assertion of a linear representation, there is an implication that one has started with the adjoint representation of the Lie algebra.

    This is also the one referred to as "the adjoint representation" when one is enumerating the linear representations of the A-series classical Lie groups/Lie algebras and their central extensions. It is the one corresponding to the (1, n-1) partition (Young) diagram. (Or is it the (2,1,1,...,1) partition?)
  14. Jul 28, 2017 #13


    Staff: Mentor

    I always thought it's the other way around: conjugation ##\longrightarrow## ##\operatorname{Ad}_G## ##\longrightarrow## ##\operatorname{ad}_\mathfrak{g}##. Anyway, the difficulty is that both adjoints operate on the same vector space ##\mathfrak{g}##, but it does make a difference whether I represent a Lie group or a Lie algebra. The adjective "linear" isn't helpful either. Does it refer to the general linear group ##GL(V)##, in which case it is a group representation, or to the general linear algebra ##\mathfrak{gl}(V)##, in which case it is a Lie algebra representation? You see, I'm very confused. And it's not the mathematical part that confuses me. I can learn theorems and definitions, but I have the greatest difficulties understanding the language used by physicists. If they used terms like vector fields, or at least told me WHAT they consider to be represented, this would be of great help. But to speak of, e.g., the "infinitesimal generator of the adjoint representation" is so hopelessly underdetermined in various aspects that I usually stop reading. And some authors think it becomes clear if they mention the Pauli matrices - the next joke. Maybe it's a kind of physicists' humor I cannot relate to. I only want to know which group exactly, or which algebra exactly, they are speaking about. Obviously it is a coding which is either historically motivated, sloppy, or meant to demonstrate expertness. I would prefer exactness. And no, I do not mean ##SO(3) \longrightarrow SU(2) \longrightarrow SO(4)##, just to complete the confusion which "rotation" brings with it.
  15. Jul 28, 2017 #14


    Science Advisor, Gold Member

    I may be contributing to your confusion by being a bit unclear and sloppy myself.
    Every group has an adjoint representation as a function mapping the group to itself. That, even for Lie groups, has no direct "linear structure". It is a point mapping. For Lie groups it has differential geometric structure. I think that is what is usually denoted by [itex]\operatorname{Ad}[/itex].

    Now there is the adjoint representation of a Lie algebra: [itex]\operatorname{ad}_\mathfrak{g}[/itex]. When physicists say it is "linear" they mean several equivalent things.

    Firstly, every Lie algebra can be embedded in its universal covering algebra, which is an associative algebra within which the Lie product is a commutator of the associative product. The adjoint representation is linear in the sense that it extends to a representation of the universal covering algebra within the operator algebra over some linear space (= vector space). Most every time a physicist says "linear" they imply an embedding into a vector space or, if multiplication is occurring, an embedding into the operator algebra on some space. In this adjoint representation it is the operator algebra over the Lie algebra as a linear space.
    (I like to say linear space rather than vector space since in many applications there is an implied metric structure on the vector spaces and "linear space" implies less.)

    In the operator algebra [itex]\operatorname{Op}(\mathbf{V})= \mathbf{V}\otimes\mathbf{V}^*[/itex] for some vector space [itex]\mathbf{V}[/itex], the set of invertible operators defines the Lie group [itex]\operatorname{GL}(\mathbf{V})[/itex]. There is also, in the same operator algebra via the commutator product, the corresponding general linear Lie algebra. Each is [itex]n^2[/itex]-dimensional ([itex]n = \mathop{dim}(\mathbf{V})[/itex]).

    Thus for the Lie algebra [itex]\mathfrak{g} = (\mathbf{V},\triangle)[/itex] ( = vector space with a Lie product):
    [tex] \operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathbf{V}) = (\operatorname{Op}(\mathbf{V}) , [\cdot,\cdot] ) [/tex]
    (This is the "secondly" part, mapping into the general linear Lie algebra.)

    The "thirdly" part refers to the induced group representation which I will here denote as [itex]\operatorname{ad}_G[/itex]. It is the extension of this representation of the Lie algebra (within the associative operator algebra) via the exponential mapping. Note that all of these representations are acting on the vector space [itex]\mathbf{V}[/itex]. Group products are the operator product, Lie products are the commutator product.

    Here is the commutative diagram (attached as an image):

    This picture is valid for any linear representation if you relabel the rep mappings and generalize [itex] \mathbf{V}[/itex]. [See Footnote]

    Now on top of all this, we can treat the operator algebra as a vector space and thus as a manifold. The general linear group is a sub-manifold and the group adjoint mapping is a continuous point map. The first adjoint mapping I mentioned in this post, where the group acts on itself as a point mapping is here extended to the action of the group on the whole algebra as a linear space via the adjoint action:
    [itex]g: X\mapsto \operatorname{ad}_G(g) X \operatorname{ad}_G(g^{-1})[/itex]
    for [itex]X[/itex] some element in the operator algebra.

    When restricting this adjoint action to act only on representatives of the group, it expresses the adjoint rep [itex]\operatorname{Ad}_G[/itex] embedded in a larger linear representation [itex]\operatorname{GL}(\mathbf{V}\otimes\mathbf{V}^*)[/itex]. In most physics applications the speaker/author will specify this representation by using the term adjoint action. The point here is that, though we can construct a linear "re-representation" of the [itex]\operatorname{Ad}[/itex]joint representation, it is not, in-and-of-itself, a linear representation.

    A final comment about the "Ad" adjoint representation of the Lie group. Since the adjoint action of any one element on the whole of the Lie group is a Lie group automorphism, this big A adjoint representation is a homomorphic mapping from the group into its automorphism group [itex]\operatorname{Ad}: G \to \mathbf{\mathop{Aut}}(G)[/itex]. These actions are thus called inner automorphisms and that's the best way to think about this big A adjoint rep. There may also be additional (outer) automorphisms (though not for simple groups) and that's a fun topic for another thread.

    I hope my exposition has been clear enough to help resolve your issues. Let me know if/where I've fallen short on that.

    [Footnote] There is a slight caveat when generalizing, though, as the induced adjoint map on the group may be a projective representation having multiple representatives of the identity element. The obvious example being spin representations of the orthogonal groups, which are, technically speaking, not quite representations of the Lie groups.

  16. Jul 28, 2017 #15


    Staff: Mentor

    This is a point where I respectfully have to disagree. Every group has a representation on itself via conjugation: ##g \mapsto (x \mapsto gxg^{-1})##. But this is not the adjoint representation. The adjoint representation of Lie groups is induced by this conjugation, but it is given by ##g \mapsto (X \mapsto gXg^{-1})##, i.e. ##G## operates on its Lie algebra ##\mathfrak{g}##, not on itself. So it is in a sense also a conjugation, but on the vector space ##\mathfrak{g}##. And it is linear in ##X##.

    Yes, but ##\operatorname{Ad}(g)(X)=gXg^{-1}## is the operation on ##\mathfrak{g}##, not the inner automorphism of ##G##. The conjugation induces the adjoint representation ##\operatorname{Ad}##, which is a bit of work to show for left-invariant vector fields in general and usually easy to see for given matrix groups. Here is the beginning of all the mess. We have a representation ##G \longrightarrow GL(G)## by conjugation and induced by this a representation ##G \longrightarrow GL(\mathfrak{g})## which is called adjoint ##\operatorname{Ad}##.

    Yes, it is basically the left multiplication in the Lie algebra and related to the adjoint representation of the Lie group by ##\exp \operatorname{ad} =\operatorname{Ad}\exp##. The mappings ##\operatorname{ad}X## are also called inner derivations for they resulted from inner automorphisms. In the cases that are physically interesting, we have in addition ##\mathfrak{g}\cong \operatorname{ad}(\mathfrak{g})## which doesn't make it easier to distinguish everything.
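    The relation ##\exp \operatorname{ad} =\operatorname{Ad}\exp## can be sanity-checked numerically for ##\mathfrak{so}(3)##. A sketch under the standard identification of ##x \in \mathbb{R}^3## with an antisymmetric "hat" matrix, in which coordinates ##\operatorname{ad}_X## has matrix ##\hat{x}## (assuming numpy; this illustration is mine, not from the posts above):

    ```python
    import math
    import numpy as np

    def hat(v):
        # so(3) element as an antisymmetric matrix; ad_v = hat(v) in these coordinates
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def mat_exp(A, terms=30):
        return sum(np.linalg.matrix_power(A, n) / math.factorial(n)
                   for n in range(terms))

    x = np.array([0.3, -0.2, 0.5])
    y = np.array([1.0, 2.0, -1.0])

    g = mat_exp(hat(x))            # group element exp(X) in SO(3)
    # Ad(exp X) acting on Y by conjugation in the matrix group ...
    lhs = g @ hat(y) @ g.T
    # ... versus exp(ad X) acting on the coordinates of Y
    rhs = hat(mat_exp(hat(x)) @ y)
    assert np.allclose(lhs, rhs)
    print("Ad(exp X) = exp(ad X) verified on so(3)")
    ```

    Here ##g^{-1} = g^T## since ##g \in SO(3)##, which is why the conjugation appears as ##g \hat{y} g^T##.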
    O.k. this might make sense, as I've recently read here "a vector space has an inner product but not necessarily a norm". I only thought: well, maybe for a physicist. The name I know for what you call universal covering algebra is universal enveloping algebra. Does this play a role to clarify what generators are? Is it ever used outside a proof for PBW? As we started with matrix groups anyway, I think we don't need it here.

    Yes, and ##V = \mathfrak{g}## in case of the adjoint representation of the Lie algebra, because it is simply the Lie multiplication.
    What you call ##\operatorname{ad}_G## is known to me as ##\operatorname{Ad}_G## and the conjugation has no reserved notation; often the elements are written as ##\iota_g##.
    Beside our discrepancy on conjugations, I think I'm a bit lost with the operator algebra. As I understood it, you mean the associative algebra of linear mappings on a vector space ##V##, which also carries a Lie structure by the commutator. This is the Lie algebra of ##GL(V)##, the regular linear mappings, which is a Lie group and therefore a manifold.

    Unfortunately yes. I certainly don't want to argue about whether inner automorphisms should be named adjoint or not, although I've never heard or read it, and it would make the matter even more confusing. But I still don't know if ## \operatorname{Ad}_g \, : \,X \mapsto gXg^{-1}\; , \;g\in G, X \in \mathfrak{g}## is called "generator of the adjoint representation", or if it is ##\operatorname{ad}_\mathfrak{g}X \, : \, Y \mapsto [X,Y]\; , \;X,Y \in \mathfrak{g}##. And similar, if we drop the requirement "adjoint", whether a generator of a representation ##(V,\varphi)## is understood as an element of ##\varphi(g) \in GL(V)## for a group representation or of ##\varphi(X) \in \mathfrak{gl}(V)## of a Lie algebra representation. And at last, if we also drop the word representation, whether we can talk about generators at all and simply mean the Lie algebra vectors.
    As I first understood you, it is always the second case: generators have to be vectors.
  17. Jul 29, 2017 #16


    Science Advisor, Gold Member

    That is correct.

    Wherever you read that, don't read any more. A vector space is just a linear space: any set closed under the act of taking linear combinations and with the usual algebraic properties (associativity, distributive laws, etc.). To say it has a norm we call it a "normed space"; to say it has an inner product we say it is an "inner product space". But in most usages Vector Space = Linear Space. My use of "Linear" is just to emphasize that one should not assume any prior norms or inner products. But I use "Linear Space" and "Vector Space" interchangeably.

    Right, my bad. The correct term is "enveloping" and I was mis-associating this idea with something called the "universal covering group" which is something totally different.

    You should first and foremost dispel the belief that there is a single universal convention of notation and terminology to which everyone either correctly adheres or are mistaken in deviating. This is not to say I couldn't be misrepresenting a well established convention. I always keep in mind the operational meaning and then, when communicating I will define my terms if I feel they are not universally accepted. Likewise when reading another's work I will double check that when they say "apple" I understand how they're using the term.
    When I was typing the previous post I consulted the wiki page on the adjoint representation of a Lie group: [itex] G \to \mathop{Aut}(G)[/itex] via the "adjoint action" or conjugate action. That is also what I recall as the adjoint representation of a finite group.

    The term "adjoint" is quite polymorphic in mathematics and mathematical physics, so when it is used you should double-check the text/author for how they mean it. I agree it can be confusing, but the first level of confusion is quickly dispelled by NOT assuming words have fixed meaning, even in the rigors of mathematics. However, the variations in meaning are connected, which I tried to convey by their all being embedded in an operator algebra.

    Let me say that in terms of the use of the word "generator", generators generate. So in pure mathematics you sometimes have, for example, a sub-set of algebraic elements out of which all other elements can be constructed by application of operations. Thus there are the generators of a group... and that tells you how many symbols are needed to express all group elements. These elements "generate" the group.

    In the context of Lie algebras and Lie groups, each element of the Lie algebra "generates" a 1 parameter subgroup of the corresponding Lie group via its exponentiation (within any representation). This is why it is referred to as a generator, especially by the physicists who are concerned with e.g. continuous symmetries and the conserved Noether charges associated with them.

    A final comment. I tried to point out that by looking at the associative algebra you will find the "adjoint action" is applicable on the algebra as a whole, on those elements we identify with representatives of the group's Lie algebra, and on those representatives of the Lie groups elements. But to my mind this is not where the Adjoint=[itex]G\to\mathop{Aut}(G)[/itex] representation lives. Rather that is again a re-representation. But given it is a nice one for working with concrete forms (matrices) I can see where the early student might "put the cart before the horse" and identify here in the matrix algebra what is really a shadow of the named rep.

    Keep in mind there are three levels of hierarchy here:
    • Algebraic structures which can be expressed in terms of an operator algebra (thus they have linear representations).
    • Operator algebras which are defined to be the endomorphisms of a linear (vector) space, (and more generally spaces of linear mappings between two spaces).
    • Matrix algebra which, while an algebra in its own right, is utilized to "re-represent" in a basis specific way the elements of a vector space, its operator algebra, (and general linear mappings between vector spaces) in terms of components.
    The matrices are not (necessarily) themselves the operator algebras they represent. The operator algebras are not (necessarily) the algebraic structures they are used to represent.

    There can be a slight tendency of the student to remain on, or revert to, the matrix level. This is understandable because it is where one typically works with concrete examples. Thus, for example, while an early student's definition of GL(n) would be the group of invertible n by n matrices, the "truer" definition is that it is the group of automorphisms of an n-dimensional real vector space (all spaces of the same dimension being isomorphic), sans metric (= inner product <--> norm). The group itself is defined independently of matrices. Likewise O(n) is the automorphism group of a real n-dimensional inner product space, and U(n) is the group of automorphisms of an n-dimensional Hilbert space.
  18. Jul 29, 2017 #17


    Staff: Mentor

    From your Wikipedia quotation about the adjoint representation:
    This means it is the representation on ##\mathfrak{g}##, and not, as you repeatedly and in my opinion wrongly claim, the group conjugation, because the group, even as a matrix group, isn't a vector space. As I said, I don't want to argue, for I'm convinced that the statement "group conjugation is called adjoint representation" is not true. Perhaps you call it that, but you're the first I've ever seen do so. I only tried to figure out the meaning of the terminology physicists use. On the mathematical side I still prefer Humphreys and Varadarajan over the internet.
  19. Jul 30, 2017 #18


    User Avatar
    Science Advisor
    Gold Member

    I agree, and I see now that I misread the wiki and other sources, missing the fact that it is the differential of the "representation" to which I referred, which defines ##\mathop{Ad}##.
    Yes, thank you for your persistence. I have to admit my error on this, which I believe is an improper extrapolation of the "adjoint action" which I've grown used to using for the conjugate action.
    I agree totally... (although had I read more carefully ...)

    A few final points though. I've been using "representation" in a more generic sense that some refer to as a "realization" and thence "linear representation" is not redundant in my mind but is rather a qualifier that that representation maps to the automorphism group of a vector space. I know I am not alone in this usage but it is not the majority convention.