
How to Tell Operations, Operators, Functionals and Representations Apart

All these concepts belong to the toolbox of physicists. I read them quite often on our forum, and their usage is sometimes a bit confused. Physicists learn how to apply them, but occasionally I get the impression that the concepts behind them have been forgotten. So what are they? Especially when it comes to the adjoint representation in quantum field theory, it is rarely made clear which one is meant: that of the Lie group or that of the Lie algebra. Both have one, and they are not the same! But why do they carry an identical name? And what does all this have to do with operations? They are used as if they were basic arithmetic. However, there is more to them than arithmetic alone. Because this terminology is so fundamental to physics and of enormous importance, I have provided a list of textbooks at the end of this article. These books can serve as good companions on these subjects throughout a lifetime, and you will more than once pick them from the shelf to read a chapter or look up an important definition or theorem. 1)

Arithmetic Operations and the division by zero

Everybody knows the basic arithmetic operations: addition, subtraction, multiplication and division. They combine two numbers into a new one. Let’s consider the real numbers. We basically have two groups, one written with a ##+## sign and one with a ##\cdot## sign. Group here means we have associativity, a neutral element and inverse elements. The multiplicative group does not contain ##0##, so the question of how to divide by zero doesn’t even arise at this level. Let’s write these two groups ##G_+## and ##G_*##. Now we all know how to mix these two operations. \begin{equation}\label{AO-I}
\begin{aligned}
G_* \times G_+ &\longrightarrow G_+ \\
(r,p) &\longmapsto r\cdot p
\end{aligned}
\end{equation}
There is a certain property which tells us how ##G_*## has to handle the group structure of ##G_+##:
$$
r\cdot (p+q) = r\cdot p + r \cdot q
$$
We call it the distributive law. It is actually an example of an operation of ##G_*## on ##G_+\,##. Here we have to define how elements ##r \in G_*## deal with ##0 \in G_+##. The distributive law forces us to define ##r\cdot 0 = 0\,##, since ##r\cdot 0 = r\cdot(0+0) = r\cdot 0 + r\cdot 0## leaves no other choice. The interesting point is: there is still no need to even think about division by zero. It simply does not occur in this concept. Strictly speaking, even ##0 \cdot 0## doesn’t occur. And if we’re honest, it isn’t really needed at all. But ##0 \cdot 0 = 0## is forced by the distributive law, too. Of course this is a rather algebraic point of view, but don’t we call these operations on real numbers algebra?

Other functions like $$(b,a) \longmapsto \log_b a\; , \; (a,n) \longmapsto a^n\; , \; (n,r)\longmapsto \sqrt[n]{r}\; , \;(n,k)\longmapsto \binom{n}{k}$$
can also be viewed as operations with certain rules.
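
To stress that an operation is nothing but a function of a pair, here is a small Python sketch of mine (the dictionary and its keys are my own choices for illustration, not standard terminology):

```python
import math

# Each operation is just a map (pair of numbers) -> number.
operations = {
    "log":      lambda b, a: math.log(a, b),    # (b, a) -> log_b(a)
    "power":    lambda a, n: a ** n,            # (a, n) -> a^n
    "root":     lambda n, r: r ** (1.0 / n),    # (n, r) -> n-th root of r
    "binomial": lambda n, k: math.comb(n, k),   # (n, k) -> n choose k
}

print(operations["log"](2, 8))        # 3.0
print(operations["power"](3, 4))      # 81
print(operations["root"](3, 27.0))    # 3.0
print(operations["binomial"](5, 2))   # 10
```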

Linear and Non-Linear Operators, Functionals and Hilbert spaces

Operators are the subject of functional analysis 2). The terminology is a bit confusing:

Operators are functions, functionals are certain operators, and functional analysis deals with operators on spaces whose elements are functions.

O.k. so far? Let’s get some order in this mess. We start with the linear case.

Linear means we consider vector spaces over fields. Historically, and from the point of view of applications, those fields are in general ##\mathbb{R}## or ##\mathbb{C}##. These are the most important ones and the ones best suited to describe physical processes. The basic difference to linear algebra is that our vectors are functions themselves: continuous functions, differentiable functions, bounded functions or sometimes sequences. It’s obvious that those vector spaces are generally not finite dimensional. However, they are so important that we gave them names:

A real or complex vector space with a dot product ##\langle x,y \rangle## (scalar product, inner product), which induces a norm by ##||x|| = \sqrt{\langle x,x \rangle}##, is called a pre-Hilbert space, and a Hilbert space if it is complete, i.e. all Cauchy sequences converge. The latter means: if the elements of a sequence get closer and closer to each other, then we can find a limit. This is the difference between the rational and the real numbers: we can find sequences of rationals which get closer and closer to ##\sqrt{2}##, but this point doesn’t belong to the rational numbers. In the real numbers, it does. Therefore we call the real numbers complete and the rationals not. If we drop the requirement of a dot product and only consider a complete real or complex vector space with a norm, then we call it a Banach space. Important examples are the Lebesgue spaces, which deal with measurable subsets of ##\mathbb{R}^n##. 3)  4)
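
To see completeness fail concretely, here is a small Python sketch of mine (not from the cited sources): Newton's method produces a Cauchy sequence of rational numbers approaching ##\sqrt{2}##, whose limit exists in ##\mathbb{R}## but not in ##\mathbb{Q}##:

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x) / 2, carried out with exact rationals.
# The iterates form a Cauchy sequence in Q, but their limit sqrt(2)
# is irrational: Q is not complete, R is.
x = Fraction(2)
for n in range(6):
    x = (x + 2 / x) / 2
    print(n, x, float(x))

print(float(x) ** 2)   # approx. 2.0 -- the limit exists only in R
```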

Now a linear operator ##L## is simply a linear map between vector spaces: ##L : V \rightarrow W##. The set of elements of ##V## on which ##L## is defined is the domain of ##L##, and its image, i.e. the set of elements of ##W## which are hit by ##L##, is the range or image of ##L## in its codomain ##W##. The only difference to linear algebra is that when we say operator, we usually mean (infinite dimensional) pre-Hilbert, Hilbert, or Banach spaces. 5)  Certain classes of functions, such as those mentioned above, form these vector spaces. If ##W## happens to be ##\mathbb{R}## or ##\mathbb{C}##, then our operator ##L## is often called a functional. So in contrast to operations, where we have a pair of input parameters, we here simply have functions between vector spaces. The generally infinite dimension of the vector spaces, however, can make quite a difference for the theorems and methods used in comparison to basic linear algebra. Also, the scalar fields considered here are the real or complex numbers, which is again a difference to a purely algebraic point of view, where even finite fields may occur.
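
As a finite dimensional toy model (a sketch of mine; genuine function spaces are infinite dimensional), take polynomials of degree less than four: differentiation is then a linear operator, and evaluation at a point is a linear functional:

```python
import numpy as np

# Polynomials of degree < 4 as coefficient vectors (c0, c1, c2, c3).
# The differential operator d/dx is linear; in this basis it is a matrix.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

p = np.array([5., 0., 1., 2.])   # p(x) = 5 + x^2 + 2x^3
q = np.array([1., 3., 0., 0.])   # q(x) = 1 + 3x

# Operator: maps a "function" to a "function".
print(D @ p)                     # p'(x) = 2x + 6x^2  ->  [0. 2. 6. 0.]

# Functional: maps a "function" to a scalar, here evaluation at x = 1,
# i.e. the linear map c -> c0 + c1 + c2 + c3.
ev1 = np.ones(4)
print(ev1 @ p)                   # p(1) = 8.0

# Both are linear:
assert np.allclose(D @ (2*p + 3*q), 2*(D @ p) + 3*(D @ q))
assert np.isclose(ev1 @ (2*p + 3*q), 2*(ev1 @ p) + 3*(ev1 @ q))
```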

The main development of the algebra of the infinite was achieved in the 19th and early 20th century: “And, however unbelievable this may seem to us, it took quite a long time until it became clear to mathematicians that what the algebraists write as ##(I-\lambda L)^{-1}## for a matrix ##L## is essentially the same as what the analysts represent by ##I+\lambda L + \lambda^2 L^2 + \ldots## for a linear operator ##L##.” 6)
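
We can check this identity numerically. The following numpy sketch (mine) assumes ##||\lambda L|| < 1## so that the series, the Neumann series, converges:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))
lam = 0.1 / np.linalg.norm(L, 2)      # ensure ||lam * L|| < 1

I = np.eye(4)
inverse = np.linalg.inv(I - lam * L)  # the algebraist's (I - lam*L)^{-1}

series = np.zeros((4, 4))             # the analyst's I + lam*L + (lam*L)^2 + ...
term = np.eye(4)
for _ in range(50):
    series += term
    term = term @ (lam * L)

print(np.max(np.abs(inverse - series)))   # ~1e-16: they agree
```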

In the non-linear case 7), operators are (non-linear) functions between normed (in general infinite dimensional) vector spaces (Banach spaces), and (non-linear) functionals are those which map to ##\mathbb{R}## or ##\mathbb{C}##.

Important operators in physics are (among others) the density operator, the position operator, the momentum operator, the Hamiltonian (Hamilton operator), or simply the differential operator ##\frac{d}{dx}##, the Volterra operator ##\int_0^x dt##, or the gradient ##\nabla##. Sometimes entire classes of operators are considered, e.g. compact operators, which map bounded sets to sets whose closure is compact.
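
As a toy model of the Volterra operator (my own sketch, using a crude Riemann-sum discretization), integration up to ##x## becomes a lower-triangular matrix acting on samples of a function:

```python
import numpy as np

n = 1000
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Discretized Volterra operator: (Vf)(x_i) ~ sum_{j <= i} f(x_j) * h,
# a lower-triangular matrix -- the linear-algebra shadow of integration.
V = h * np.tril(np.ones((n, n)))

f = np.ones(n)                                   # f(t) = 1
print(np.max(np.abs(V @ f - x)))                 # O(h), since int_0^x 1 dt = x

g = np.sin(x)
print(np.max(np.abs(V @ g - (1 - np.cos(x)))))   # O(h) as well
```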

Operations and Representations

The adjoint case

Here we are back at operations proper, where the elements of one object (e.g. groups, rings, fields or algebras) transform the elements of another object (e.g. sets, vector spaces, modules). This means we have an operation
\begin{equation}\label{OR-I}
\begin{aligned}
G \times V &\longrightarrow V \\
(g,v) &\longmapsto g.v
\end{aligned}
\end{equation}
A common example are matrix groups ##G## operating on vector spaces ##V## 8), where the operation is the application of the transformation represented by the matrix. An orthogonal matrix (representing a rotation)
\begin{equation}\label{OR-II}
\begin{aligned}
g = \begin{bmatrix}
\cos \varphi & -\sin \varphi \\
\sin \varphi & \cos \varphi
\end{bmatrix}
\end{aligned}
\end{equation}
transforms a two dimensional vector into its version rotated by ##\varphi##. Let’s stick – for the sake of simplicity – with this example, i.e. ##G=SO(2,\mathbb{R})## is the group of rotations of the plane ##V=\mathbb{R}^2##. This is the major reason why operations are considered: can we find out something either about ##G## or about ##V## by means of an operation, which we would otherwise have difficulties examining? E.g. the operation of certain matrix groups on vector spaces is used to describe the spin of fermions. Thus, as in our first example, we always have to say how the operating elements ##g \in G## handle the structure of the set they operate on (the ##v \in V##), if there is a structure at all. In our example the operation respects the vector space structure
\begin{equation}\label{OR-III}
\begin{aligned}
g.(\lambda v + \mu w) = \lambda (g.v) + \mu (g.w)
\end{aligned}
\end{equation}
It means it doesn’t matter whether we first form a linear combination of vectors and then rotate the result, or first rotate the vectors and then form the linear combination. The operation respects linearity and is called a linear operation. We will see that something similar holds if ##V## carries other structures, e.g. a Lie algebra structure. Usually we require the operation to have the properties suggested by the nature of the objects we deal with: linear operations on vector spaces, isometric operations on geometric objects, continuous (or differentiable) operations on continuous (or differentiable) functions, smooth operations on smooth manifolds and so on. But in any case we have to define how the objects, and especially their structure, are to be handled.
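
In Python (a sketch of mine) this linearity is a one-line check:

```python
import numpy as np

def rot(phi):
    """Element g of SO(2): rotation of the plane by the angle phi."""
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

g = rot(np.pi / 6)
v = np.array([1.0, -2.0])
w = np.array([1.0,  1.0])
lam, mu = 2.0, -3.0

# The operation g.v is matrix-vector multiplication ...
print(g @ v)

# ... and it is linear: g.(lam*v + mu*w) = lam*(g.v) + mu*(g.w)
assert np.allclose(g @ (lam * v + mu * w), lam * (g @ v) + mu * (g @ w))
```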

Important elements of an operation are orbits and stabilizers. The orbit of ##v\in V## is the set $$G.v = \{g.v \in V \,\vert \,g \in G\}$$
It is the set of all elements of ##V## which can be reached from ##v## by the operation. In the case of groups ##G##, orbits are equivalence classes. If we can reach every point ##w## from any point ##v## by a suitable group element, i.e. ##w \in G.v## for all ##v,w \in V\,,## then the operation is called transitive. Corresponding to orbits in a certain sense are stabilizers: the stabilizer of ##v## consists of all elements of ##G## which leave ##v## unchanged, i.e. $$G_v = \{g \in G\,\vert \, g.v=v\}$$
In the case of groups, if only the neutral element stabilizes elements, i.e. ##G_v=\{e\}## for all ##v\in V##, the operation is called free: ##G## operates freely on ##V##. If the neutral element ##e## is the only element which fixes all ##v## simultaneously, the operation is called faithful or injective. Free operations on non-empty sets are faithful.

Let us consider our example and take ##v=(1,-2)## and ##w=(1,1)##. Then the orbit of ##v## is the circle of radius ##\sqrt{5}## around the origin, and ##w##, which has length ##\sqrt{2}##, cannot be reached from ##v## by a rotation. This means our operation is not transitive. We also see that an orbit doesn’t need to be a subspace. However, the concentric circles are equivalence classes.

For the stabilizer, which is always a subgroup, we have to be careful with the definition of ##G##. If we restrict ourselves to angles ##\varphi \in [0,2 \pi)##, then we get a free operation away from the origin (which is fixed by every rotation), as only the rotation by ##0## stabilizes elements. But if we allow any real number as an angle, then ##G_v= 2\pi \mathbb{Z}##.
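
The orbit and stabilizer of our example can also be sampled numerically (again a sketch of mine):

```python
import numpy as np

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

v = np.array([1.0, -2.0])
w = np.array([1.0,  1.0])

# Sample the orbit G.v: every point has the same norm sqrt(5),
# so the orbit is the circle of radius sqrt(5) around the origin.
orbit = [rot(phi) @ v for phi in np.linspace(0.0, 2*np.pi, 100)]
print(sorted({round(float(np.linalg.norm(p)), 10) for p in orbit}))  # [2.2360679775]

# w has norm sqrt(2) != sqrt(5): it lies on a different orbit,
# hence the operation is not transitive.
print(float(np.linalg.norm(w)))

# Stabilizer: within [0, 2*pi) only phi = 0 fixes v (free away from
# the origin); over all real angles the stabilizer is 2*pi*Z.
assert np.allclose(rot(2*np.pi) @ v, v)
```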
If we generally consider a group ##G## which operates on a vector space ##V##, we usually require the property of being a group to be respected. This means ##e.v=v##, ##g.(h.v) =(gh).v## and hence ##g^{-1}.(g.v)=v##. Now these properties can be summarized by saying
\begin{equation}\label{OR-IV}
\begin{aligned}
\varphi \, : \, G &\longrightarrow GL(V) \\
\varphi \, : \, g &\longmapsto (v \longmapsto g.v)
\end{aligned}
\end{equation}
is a group homomorphism, where ##GL(V)## is the general linear group of ##V##, the group of all regular (invertible) linear functions ##V \longrightarrow V##. Homomorphism means that ##\varphi## maps group to group with ##\varphi(g\cdot h)=\varphi(g) \cdot \varphi(h)##. In this case ##V## is called a representation space and ##\varphi## a representation of ##G##. Thus an operation and a representation are the same thing; there are only different ways to look at it: one emphasizes the group side, the other the vector space side. 9) 10)
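
For our rotation example, the homomorphism property is just the angle-addition theorem, which we can verify numerically (my sketch):

```python
import numpy as np

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

# varphi: G -> GL(V); composing rotations means adding angles.
a, b = 0.7, 1.9
assert np.allclose(rot(a) @ rot(b), rot(a + b))      # phi(g)phi(h) = phi(gh)
assert np.allclose(rot(0.0), np.eye(2))              # phi(e) = id
assert np.allclose(np.linalg.inv(rot(a)), rot(-a))   # phi(g)^{-1} = phi(g^{-1})
```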

Let me finish with three important examples of representations which play a crucial role in the standard model of particle physics. To this end, let ##G## be a matrix group, e.g. a Lie group like the unitary group, and let ##V## be a vector space on which this group acts, e.g. ##\mathbb{C}^n##.

The first example is a pure operation of the group on itself
\begin{equation}\label{V}
\begin{aligned}
G \times G &\longrightarrow G\\
(g,h) &\longmapsto ghg^{-1}
\end{aligned}
\end{equation}
Here ##G## operates not on a vector space, but on itself. A group element ##g## defines a bijective map from ##G## to ##G##, which is called conjugation or an inner automorphism. If there is a Lie algebra ##\mathfrak{g}## associated with ##G## (as for matrix groups; in the case of the unitary group it consists of the skew-Hermitian matrices), we get from this conjugation a naturally induced map ##(g\in G\, , \,X\in \mathfrak{g})##
\begin{equation}\label{VI}
\begin{aligned}
\operatorname{Ad}\, : \,G &\longrightarrow GL(\mathfrak{g})\\
\operatorname{Ad}(g)(X)&=gXg^{-1}
\end{aligned}
\end{equation}
which is called the adjoint representation of ##G##. 11)
Its representation space is now the Lie algebra ##\mathfrak{g}\,##, which is the tangent space of ##G## at the neutral element, the identity matrix, and as such a vector space. It further means that ##\operatorname{Ad}## is a group homomorphism of matrix groups (Lie groups). Therefore ##GL(\mathfrak{g})## also has an associated Lie algebra, called the general linear Lie algebra ##\mathfrak{gl(g)}\,##, which is basically nothing else than all square (not necessarily regular) matrices of size ##\operatorname{dim}\mathfrak{g}\,##. The Lie algebra multiplication is given by the commutator
$$
[X,Y] = X \cdot Y - Y \cdot X
$$
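As a quick numerical illustration of ##\operatorname{Ad}## (a sketch of mine, taking ##G = SO(3)## with its Lie algebra of skew-symmetric matrices for concreteness), conjugation by a group element maps the Lie algebra to itself:

```python
import numpy as np
from scipy.linalg import expm

# Two elements of the Lie algebra so(3): skew-symmetric 3x3 matrices.
X = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])    # generates rotations about the z-axis

Y = np.array([[0., 0., 1.],
              [0., 0., 0.],
              [-1., 0., 0.]])    # generates rotations about the y-axis

g = expm(0.3 * X)                # a group element of SO(3)

AdY = g @ Y @ np.linalg.inv(g)   # Ad(g)(Y) = g Y g^{-1}
print(np.allclose(AdY, -AdY.T))  # True: Ad(g)(Y) is again skew-symmetric
```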
The left multiplication in ##\mathfrak{g}## gives rise to a Lie algebra operation
\begin{equation}\label{VII}
\begin{aligned}
\varphi\, : \,\mathfrak{g} \times \mathfrak{g} & \longrightarrow \mathfrak{g}\\
(X,Y) &\longmapsto [X,Y]
\end{aligned}
\end{equation}
Remember that we said an operation is required to respect the structure. Here it means that the map ##X \longmapsto \varphi(X,\,\cdot\,)## respects the bracket, i.e. it is a Lie algebra homomorphism. Writing ##\varphi(X)## for ##\varphi(X,\,\cdot\,)##, this reads
$$
\varphi([X,Y])(Z) = [\varphi(X),\varphi(Y)](Z)=\varphi(X)\varphi(Y)(Z)-\varphi(Y)\varphi(X)(Z)
$$
which is nothing else than the Jacobi identity. Furthermore, since an operation is always also a representation, we get a representation
\begin{equation}\label{VIII}
\begin{aligned}
\operatorname{ad}\, : \,\mathfrak{g}&\longrightarrow \mathfrak{gl(g)}\\
\operatorname{ad}(X)(Y)& = \varphi(X,Y) = [X,Y]
\end{aligned}
\end{equation}
which is called the adjoint representation of ##\mathfrak{g}\,##. 12) It is simply the left multiplication in the Lie algebra. The Jacobi identity, which gave us the homomorphism property above, also says that each ##\operatorname{ad}(X)## satisfies the defining property of a derivation:
$$
\operatorname{ad}(X)([Y,Z])= [\operatorname{ad}(X)(Y),Z]+[Y,\operatorname{ad}(X)(Z)]
$$
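
Both properties, homomorphism and derivation, can be checked numerically for matrix Lie algebras (my sketch, using random matrices with the commutator as bracket):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

def br(A, B):
    """Lie bracket on matrices: the commutator [A, B]."""
    return A @ B - B @ A

def ad(A):
    """ad(A) as a map: B -> [A, B]."""
    return lambda B: br(A, B)

# Derivation property: ad(X)[Y,Z] = [ad(X)Y, Z] + [Y, ad(X)Z]
print(np.allclose(ad(X)(br(Y, Z)),
                  br(ad(X)(Y), Z) + br(Y, ad(X)(Z))))   # True

# Homomorphism property: ad([X,Y]) = ad(X)ad(Y) - ad(Y)ad(X)
print(np.allclose(ad(br(X, Y))(Z),
                  ad(X)(ad(Y)(Z)) - ad(Y)(ad(X)(Z))))   # True
```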
Therefore ##\operatorname{ad}(X)## is also called an inner derivation of ##\mathfrak{g}\,##, just as conjugation by ##g## is called an inner automorphism of ##G##. Both adjoint representations, the one of ##G## and the one of ##\mathfrak{g}##, are related by the following formula ##(X \in \mathfrak{g})##
\begin{equation}\label{IX}
\begin{aligned}
\operatorname{Ad}(\exp(X)) = \exp(\operatorname{ad}(X))
\end{aligned}
\end{equation}
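
Here is a sketch of mine that verifies this formula for ##\mathfrak{g} = \mathfrak{so}(3)## by expressing both ##\operatorname{Ad}(\exp(X))## and ##\operatorname{ad}(X)## as ##3\times 3## matrices in a basis of ##\mathfrak{g}## (the basis is my choice for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Standard basis of so(3), the skew-symmetric 3x3 matrices.
E = [np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]]),
     np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]]),
     np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])]

def coords(M):
    """Coordinates of a skew-symmetric matrix M in the basis E."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

X = 0.4 * E[0] + 0.2 * E[1] - 0.7 * E[2]

# Matrix of ad(X): the j-th column holds the coordinates of [X, E_j].
ad_X = np.column_stack([coords(X @ Ej - Ej @ X) for Ej in E])

# Matrix of Ad(exp(X)): the j-th column holds the coordinates of g E_j g^{-1}.
g = expm(X)                      # g in SO(3), so g^{-1} = g.T
Ad_g = np.column_stack([coords(g @ Ej @ g.T) for Ej in E])

print(np.allclose(Ad_g, expm(ad_X)))   # True: Ad(exp(X)) = exp(ad(X))
```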

The defining property of a derivation is just the Leibniz rule of differentiation (the product rule). Closely related are also the Lie derivative and the Levi-Civita connection. In the end, all of them are versions of the Leibniz rule we learned at school.

Summary

  • Arithmetic operations: ##+\; , \;-\; , \;\cdot \; , \; :##
  • (Linear) Operators: ##L : (V,||.||_V) \longrightarrow (W,||.||_W)##
  • (Linear) Functionals: ##L : (V,||.||_V) \longrightarrow (\mathbb{R},|.|_\mathbb{R}) ## or ##(\mathbb{C},|.|_\mathbb{C}) ##
  • Operation in general: ##\text{Operator}\,.\,\text{Object}_{\text{old}} = \text{Object}_{\text{new}}##
  • Group Operations: ##G \times V \longrightarrow V##
  • Group Representation: ##G \longrightarrow GL(V)##
  • Conjugation in groups: ##g.h = ghg^{-1}##
  • Adjoint representation for Lie groups: ##\operatorname{Ad}(g)(X) = gXg^{-1}##
  • Adjoint representation for Lie algebras: ##\operatorname{ad}(X)(Y) = [X,Y]##
  • ##\operatorname{Ad}\circ \exp = \exp \circ \operatorname{ad}##

Sources

 1) I know, if you bought all the books recommended here, it would cost a small fortune. However, all of them are valuable sources which can serve as the foundation of a good personal library. Especially Humphreys and Zeidler are well suited for beginners.

 2) E. Zeidler: Applied Functional Analysis: Main Principles and Their Applications, Springer, 1995, AMS 109
https://www.amazon.de/Applied-Functional-Analysis-Applications-Mathematical/dp/0387944222

 3) E. Hewitt, K. Stromberg: Real and Abstract Analysis, Springer, 1965, GTM 25
https://www.amazon.com/Abstract-Analysis-Graduate-Texts-Mathematics/dp/0387901388/

 4) J. Weidmann: Linear Operators in Hilbert Spaces, Springer, 1980, GTM 68
https://www.amazon.com/Linear-Operators-Hilbert-Graduate-Mathematics/dp/0387904271/

 5) M. Reed, B. Simon: Functional Analysis, Academic Press, 1981, Methods of Modern Mathematical Physics, Volume 1
https://www.amazon.de/Methods-modern-mathematical-physics-Functional/dp/0125850506

 6) J. Dieudonné: Geschichte der Mathematik 1700-1900 (History of Mathematics 1700-1900), Vieweg, 1985

 7) E. Zeidler: Nonlinear Functional Analysis and its Applications, Springer, 1985-1990 (multiple volumes)

 8) J.E. Humphreys: Linear Algebraic Groups, Springer, 1981, GTM 21
https://www.amazon.com/Linear-Algebraic-Groups-Graduate-Mathematics/dp/0387901086/

 9) B.L. van der Waerden: Algebra Volume I, Springer, 1991
https://www.amazon.com/Algebra-I-B-L-van-Waerden/dp/0387406247

 10) R. Lidl, G. Pilz: Applied Abstract Algebra, Springer, 1998, UTM
https://www.amazon.com/Applied-Abstract-Algebra-Undergraduate-Mathematics/dp/0387982906/

 11) V.S. Varadarajan: Lie Groups, Lie Algebras, and Their Representations, Springer, 1984, GTM 102
https://www.amazon.com/Groups-Algebras-Representation-Graduate-Mathematics/dp/0387909699

 12) J.E. Humphreys: Introduction to Lie Algebras and Representation Theory, Springer, 1972, GTM 9
https://www.amazon.com/Introduction-Algebras-Representation-Graduate-Mathematics/dp/0387900535

 13) https://www.physicsforums.com/insights/representations-precision-important/


 

Comments
  1. samalkhaiat says:

    Quoting fresh_42: “… And this is the point where this ‘infinitesimal generator’ terminology loses me:”

    In physics, an infinitesimal generator (mostly of a symmetry group) is related to what mathematicians call a derivation or vector field. We always speak of infinitesimal generators of a group and its representations. So, in this sense ##\mathfrak{g}## is the infinitesimal generator of ##G##, ##\operatorname{ad}## is the infinitesimal generator of ##\operatorname{Ad}##, and more generally, for a simply connected Lie group ##G##, ##d\pi## is the infinitesimal generator of ##G## in the representation ##\pi##.

    The group of automorphisms (##t \to \alpha_{t}##) of a ##C^{*}##-algebra ##\mathcal{A}## has an infinitesimal generator given by $$\delta(A) = \lim_{t \to 0} \frac{\alpha_{t}(A) - A}{t},$$ with natural domain $$D(\delta) = \left\{ A \in \mathcal{A} \ \Big| \ \lim_{t \to 0} t^{-1}(\alpha_{t}(A) - A) \text{ exists} \right\}.$$

    Clearly ##\delta(A)## is a *-derivation, i.e., a linear operation which commutes with complex conjugation/adjoint, ##\left(\delta(A)\right)^{*} = \delta(A^{*})##, and satisfies the Leibniz rule. Indeed, writing $$\frac{1}{t}\left( \alpha_{t}(AB) - AB\right) \equiv \frac{1}{t}\left( \alpha_{t}(A) - A\right) \cdot B + \alpha_{t}(A) \cdot \frac{1}{t} \left( \alpha_{t}(B) - B\right),$$ and taking the limit, we get $$\delta(AB) = \delta(A) \cdot B + A \cdot \delta(B).$$

    In QM, the algebra ##\mathcal{A}## is given by ##B(\mathcal{H})##, the algebra of bounded operators on a separable Hilbert space. In this case one can show that for every 1-parameter group of automorphisms of ##B(\mathcal{H})## (continuous in the weak operator topology), there exists a 1-parameter group of unitary operators ##U(t) = e^{iHt}## (continuous in the weak topology) such that $$\alpha_{t}(A) \equiv A(t) = e^{iHt} A e^{-iHt}.$$ Here, we have the so-called inner derivation $$\delta_{H}(A) = i[H, A],$$ with domain $$D(\delta_{H}) = \{ A \in B(\mathcal{H}) \ | \ [H,A] \text{ is densely defined} \}.$$ When the derivation ##\delta## is an inner derivation ##\delta_{X}##, we call ##X## the infinitesimal generator. The same extends to every representation ##\rho## of a ##C^{*}##-algebra on the Hilbert space ##\mathcal{H}_{\rho}##: $$\rho(\delta_{H}(A)) = i [H_{\rho}, \rho(A)], \qquad \rho(A) \in D(\delta).$$ In this case, we call ##H_{\rho}## the infinitesimal generator of the group of automorphisms in the representation ##\rho##.

  2. fresh_42 says:

    Quoting samalkhaiat: “In physics, an infinitesimal generator (mostly of a symmetry group) is related to what mathematicians call a derivation or vector field. We always speak of infinitesimal generators of a group and its representations. …”

    Thank you. It drove me nuts what exactly a generator is meant to be. On all occasions I have seen it here, nobody ever explained it, and in 90% of the cases I got the impression that the poster himself wasn't sure. I have a 400-page book about Lie groups, but Varadarajan doesn't mention "generator" at all; maybe in the context of one-parameter subgroups, I'm not quite sure. I bookmarked your post so I can look up the details the next time I try to understand someone speaking of generators, or quote it as a reference here. But I think I've understood the point. I'm coming from the algebraic side, and in this context generators are simply vectors; all that matters is in which vector space, algebra, vector field or bundle they live, an information which is basically never given explicitly.

    Maybe I'll also ask you about the ##i## which mathematically turns a homomorphism into an anti-homomorphism, but nobody ever cares. I understand that it doesn't change representations qualitatively; nevertheless I find it a bit sloppy, e.g. when Pauli matrices are used.

  3. samalkhaiat says:

    Quoting fresh_42: “Maybe I'll also ask you about the ##i## which mathematically turns a homomorphism into an anti-homomorphism, but nobody ever cares. I understand that it doesn't change representations qualitatively; nevertheless I find it a bit sloppy, e.g. when Pauli matrices are used.”

    When ##G## is a (Lie) symmetry group of the physical system, the dynamical variables of the system transform in some unitary representation ##\rho## of ##G##, and the “infinitesimal generators” in the representation ##\rho## (either directly or indirectly via Noether's theorem) represent the observables of the system, i.e., the real quantities (numbers) that we measure in the lab. In quantum mechanics, observables are represented by Hermitian operators/matrices. So, we need those generators to be Hermitian, not the anti-Hermitian ones used in the mathematical literature: in some coordinates ##\alpha^{a}## on the group manifold we may use the exponential map and write $$\rho(g) = e^{\alpha^{a} \left( d\rho(X_{a})\right)}.$$ But ##d\rho(X_{a})## is not good for us, so we instead define the Hermitian generators by $$i \, d\rho(X_{a}) = J_{a},$$ and rewrite $$\rho(g) = e^{-i \alpha^{a} J_{a}}.$$ In field theory, Noether's theorem dresses these ##J_{a}##'s with the dynamical pair ##(\varphi^{r}, \pi_{r})##, giving us the so-called Noether charges. These are Lorentz-scalar and time-independent (i.e., conserved) objects given by $$Q_{a} = \int_{\mathbb{R}^{3}} d^{3}x \ \pi_{r}(x) \, J_{a} \, \varphi^{r}(x).$$ In this case, and to annoy you more, we actually call the Noether charges the generators of the symmetry group of transformations in the unitary representation ##\rho(G)##. The term infinitesimal generator is natural in this case because these ##Q_{a}##'s do actually generate the correct infinitesimal transformations of the dynamical variables, $$\delta_{a}\varphi^{r}(x) = [i Q_{a}, \varphi^{r}(x)],$$ and satisfy the Lie algebra of the group ##G##: $$[Q_{a}, Q_{b}] = i C_{ab}{}^{c} Q_{c}.$$
