# How to Tell Operations, Operators, Functionals and Representations Apart

All these concepts belong to the toolbox of physicists. I read them quite often on our forum, and their usage is sometimes a bit confused. Physicists learn how to apply them, but occasionally I get the impression that the concepts behind them are forgotten. So what are they? Especially when it comes to the adjoint representation in quantum field theory, it is rarely made clear which one is meant: that of the Lie group or that of the Lie algebra. Both have one, and they are not the same! But why do they carry the same name? And what do they have to do with operations? They are used the way basic arithmetic is used. However, there is more to them than mere arithmetic. Because this terminology is so fundamental to physics and of such enormous importance, I have provided a list of textbooks at the end of this article. These books can serve as good companions on these subjects throughout a lifetime, and you will more than once pick them from the shelf to read a chapter or look up important definitions and theorems. 1)

### Arithmetic Operations and the division by zero

Everybody knows the basic arithmetic operators: addition, subtraction, multiplication and division. They connect two numbers and form a new one. Let's consider the real numbers. We basically have two groups, one which we write with a ##+## sign and one with a ##\cdot## sign. Group here means we have associativity, a neutral element and inverse elements. The multiplicative group does not contain ##0##, so the question of how to divide by zero doesn't even arise at this level. Let's write these two groups as ##G_+## and ##G_*##. Now we all know how to mix these two operations.
\begin{aligned}
G_* \times G_+ &\longrightarrow G_+ \\
(r,p) &\longmapsto r\cdot p
\end{aligned}

There is a certain property which tells us how ##G_*## has to respect the group structure of ##G_+##:
$$r\cdot (p+q) = r\cdot p + r \cdot q$$
We call it the distributive law. It is actually an example of an operation of ##G_*## on ##G_+\,##. Here we have to define how elements ##r \in G_*## deal with ##0 \in G_+##. The distributive law forces us to define ##r\cdot 0 = 0\,##. The interesting point is: there is still no need to even think about division by zero. It simply does not occur in this concept. Strictly speaking, even ##0 \cdot 0## doesn't occur, and if we're honest, it isn't really needed at all. But ##0 \cdot 0 = 0## is forced by the distributive law, too. Of course this is a rather algebraic point of view, but don't we call these operations on real numbers algebra?

Other functions like $$(b,a) \longmapsto \log_b a\; , \; (a,n) \longmapsto a^n\; , \; (n,r)\longmapsto \sqrt[n]{r}\; , \;(n,k)\longmapsto \binom{n}{k}$$
can also be viewed as operations with certain rules.
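In code, each of these maps is just a function of a pair. The following minimal sketch makes that explicit; the function names are my own, purely for illustration:

```python
import math

# Each "operation" above is just a function taking a pair and returning a number.
# The function names are illustrative, not standard conventions.
def log_base(b, a):      # (b, a) -> log_b(a)
    return math.log(a, b)

def power(a, n):         # (a, n) -> a^n
    return a ** n

def nth_root(n, r):      # (n, r) -> n-th root of r
    return r ** (1.0 / n)

def binomial(n, k):      # (n, k) -> n choose k
    return math.comb(n, k)

assert math.isclose(log_base(2, 8), 3.0)
assert power(3, 4) == 81
assert math.isclose(nth_root(3, 27), 3.0)
assert binomial(5, 2) == 10
```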

### Linear and Non-Linear Operators, Functionals and Hilbert spaces

Operators are the subject of functional analysis 2) . The terms are a bit confusing:

Operators are functions, functionals are certain operators, and functional analysis deals with operators on spaces, where the elements are functions.

O.k. so far? Let’s get some order in this mess. We start with the linear case.

Linear means we consider vector spaces over fields. Historically, and from the point of view of applications, these fields are in general ##\mathbb{R}## or ##\mathbb{C}##. These are the most important ones and those best suited to describe physical processes. The basic difference to linear algebra is that our vectors are functions themselves: continuous functions, differentiable functions, bounded functions, or sometimes sequences. It's obvious that those vector spaces are in general not finite dimensional. However, they are so important that we have given them names:

A real or complex vector space with a dot product ##\langle x,y \rangle## (scalar product, inner product), which induces a norm by ##||x|| = \sqrt{\langle x,x \rangle}##, is called a Pre-Hilbert space, and a Hilbert space if it is complete, i.e. if all Cauchy sequences converge. The latter means: if the elements of a sequence get closer and closer to each other, then we can find a limit. This is the difference between the rational and the real numbers: we can find sequences of rationals which get closer and closer to ##\sqrt{2}##, but this point doesn't belong to the rational numbers. In the real numbers, it does. Therefore we call the real numbers complete and the rationals not. If we drop the requirement of a dot product and only consider a complete real or complex vector space with a norm, then we call it a Banach space. Important examples are Lebesgue spaces, which deal with measurable subsets of ##\mathbb{R}^n##. 3)  4)

Now a linear operator ##L## is simply a linear map between vector spaces: ##L : V \rightarrow W##. The set of elements of ##V## on which ##L## is defined is the domain of ##L##, and its image, i.e. the set of elements of ##W## which are hit by ##L##, is the range or image of ##L## in its codomain ##W##. The only difference to linear algebra is that if we say operator, then we usually mean (infinite dimensional) Pre-Hilbert, Hilbert, or Banach spaces. 5)  Certain classes of functions, such as those mentioned above, form these vector spaces. If ##W## happens to be ##\mathbb{R}## or ##\mathbb{C}##, then our operator ##L## is often called a functional. So in contrast to operations, where we have a pair of input parameters, we simply have functions between vector spaces. The generally infinite dimension of the vector spaces, however, can make quite a difference for the theorems and methods used in comparison to basic linear algebra. Also, the scalar fields considered here are the real or complex numbers, which is another difference to a purely algebraic point of view, where the fields may even be finite.
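As a toy illustration (an assumed example, not from the article), we can model a finite dimensional stand-in: the vector space of polynomials of degree less than four, stored as coefficient vectors, with differentiation as a linear operator and point evaluation as a linear functional:

```python
import numpy as np

# V = polynomials of degree < 4, stored as coefficients [a0, a1, a2, a3]
# for a0 + a1*x + a2*x^2 + a3*x^3. (Illustrative model, my own choice.)

def D(p):
    """Linear operator L = d/dx on coefficient vectors: V -> V."""
    return np.array([k * p[k] for k in range(1, len(p))] + [0.0])

def eval_at_one(p):
    """A linear functional V -> R: p |-> p(1), the sum of the coefficients."""
    return float(np.sum(p))

p = np.array([1.0, 2.0, 3.0, 0.0])   # 1 + 2x + 3x^2
q = np.array([0.0, 1.0, 0.0, 1.0])   # x + x^3

# Linearity: L(2p + 3q) == 2*L(p) + 3*L(q)
assert np.allclose(D(2*p + 3*q), 2*D(p) + 3*D(q))
assert np.allclose(D(p), [2.0, 6.0, 0.0, 0.0])   # d/dx (1 + 2x + 3x^2) = 2 + 6x
assert eval_at_one(p) == 6.0                      # p(1) = 1 + 2 + 3
```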

The main development of the algebra of the infinite was achieved in the 19th and early 20th century: “And, however unbelievable this may seem to us, it took quite a long time until it has been clear to mathematicians, that what the algebraists write as ##(I-\lambda L)^{-1}## for a matrix ##L##, is essentially the same as the analysts represent by ##I+\lambda L + \lambda^2 L^2 + \ldots ## for a linear operator ##L##.” 6)

In the non-linear case 7) , operators are (non-linear) functions between normed (in general infinite dimensional) vector spaces (Banach spaces), and (non-linear) functionals are those which map to ##\mathbb{R}## or ##\mathbb{C}##.

Important operators in physics are (among others) the density operator, the position operator, the momentum operator, the Hamiltonian (Hamilton operator), or simply the differential ##\frac{d}{dx}##, the Volterra operator ##\int_0^x dt##, or the gradient ##\nabla##. Sometimes entire classes of operators are considered, e.g. compact operators, which map bounded sets to sets with compact closure.
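Two of these operators can be sketched numerically, assuming we represent functions by their sampled values on a grid (a discretization of my own choosing, not from the text): the differential ##\frac{d}{dx}## via finite differences, and the Volterra operator via a cumulative trapezoid rule.

```python
import numpy as np

# Numeric sketch: the differential d/dx via finite differences, and the
# Volterra operator (Vf)(x) = integral_0^x f(t) dt via cumulative trapezoids,
# both applied to f(x) = x^2 on [0, 1].
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
f = x**2

df = np.gradient(f, x)  # approximates d/dx x^2 = 2x
Vf = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0) * h))  # ~ x^3/3

assert np.allclose(df, 2 * x, atol=1e-2)     # differential operator
assert np.allclose(Vf, x**3 / 3, atol=1e-3)  # Volterra operator
```

Both maps are linear in `f`, which is exactly what makes them linear operators on the function space.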

### Operations and Representations

Here we return to operations proper, where the elements of one object (e.g. groups, rings, fields or algebras) transform the elements of another object (e.g. sets, vector spaces, modules). This means we have an operation
\begin{aligned}
G \times V &\longrightarrow V \\
(g,v) &\longmapsto g.v
\end{aligned}

A common example is given by matrix groups ##G## and vector spaces ##V## 8)  where the operation is the application of the transformation represented by the matrix. An orthogonal matrix (representing a rotation)
\begin{aligned}
g = \begin{bmatrix}
\cos \varphi & -\sin \varphi \\
\sin \varphi & \cos \varphi
\end{bmatrix}
\end{aligned}

transforms a two-dimensional vector into its version rotated by ##\varphi##. Let's consider, for the sake of simplicity, this example, i.e. ##G=SO(2,\mathbb{R})## is the group of rotations in the plane ##V=\mathbb{R}^2##. This is the major reason why operations are considered: can we find out something about ##G## or about ##V## by means of an operation which we would otherwise have difficulties examining? E.g. the operation of certain matrices on vector spaces is used to describe the spin of fermions. Thus, as in our first example, we always have to specify how the operating elements (##g \in G##) handle the structure of the set they operate on (##v \in V##), provided there is a structure at all. In our example the operation respects the vector space structure
\begin{aligned}
g.(\lambda v + \mu w) = \lambda (g.v) + \mu (g.w)
\end{aligned}

It means that it doesn't matter whether we rotate a linear combination of vectors, or form the linear combination after rotating the individual vectors. The operation respects linearity and is called a linear operation. We will see that something similar holds if ##V## carries other structures, e.g. a Lie algebra structure. Usually we require the operation to have the properties suggested by the nature of the objects we deal with: linear operations on vector spaces, isometric operations on geometric objects, continuous (or differentiable) operations on continuous (or differentiable) functions, smooth operations on smooth manifolds, and so on. But in any case we have to define how the objects, and especially their structure, are to be handled.
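The linearity of the rotation example can be checked numerically; the sketch below (angle and coefficients chosen arbitrarily) verifies ##g.(\lambda v + \mu w) = \lambda (g.v) + \mu (g.w)##:

```python
import numpy as np

# The SO(2) example, numerically: g rotates vectors in R^2, and the
# operation respects the vector space structure (linearity).
def g(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

v = np.array([1.0, -2.0])
w = np.array([1.0,  1.0])
phi = 0.7
lam, mu = 2.0, -3.0

# g.(lam*v + mu*w) == lam*(g.v) + mu*(g.w)
assert np.allclose(g(phi) @ (lam*v + mu*w), lam*(g(phi) @ v) + mu*(g(phi) @ w))
```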

Important elements of an operation are orbits and stabilizers. An orbit of ##v\in V## is the set $$G.v = \{g.v \in V \,\vert \,g \in G\}$$
It is the set of all elements of ##V## which can be reached from ##v## by the operation. In the case of groups ##G##, orbits are equivalence classes. If we can reach every point ##w## from any point ##v## by some group element, i.e. ##w \in G.v## for all ##v,w \in V\,,## then the operation is called transitive. Corresponding in a sense to orbits are stabilizers, which consist of all elements of ##G## that leave a given element of ##V## unchanged, i.e. $$G_v = \{g \in G\,\vert \, g.v=v\}$$
In the case of groups, if only the neutral element stabilizes any element, i.e. ##G_v=\{e\}## for all ##v\in V##, the operation is called free: ##G## operates freely on ##V##. If the neutral element ##e## is the only element that fixes all ##v##, the operation is called faithful or injective. Free operations on non-empty sets are faithful.

Let us consider our example and take ##v=(1,-2)## and ##w=(1,1)##. Then the orbit of ##v## is a circle with radius ##\sqrt{5}## and ##w## cannot be reached from ##v## by rotation, which means our operation is not transitive. We also see, that an orbit doesn’t need to be a subspace. However, concentric circles are equivalence classes.
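This orbit claim is easy to confirm numerically; the sketch below samples the orbit of ##v=(1,-2)## and checks that every point on it has norm ##\sqrt{5}##, so ##w=(1,1)## with norm ##\sqrt{2}## is unreachable:

```python
import numpy as np

# Every point in the orbit of v = (1, -2) under SO(2) has norm sqrt(5),
# so w = (1, 1) lies on a different orbit: the operation is not transitive.
def g(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

v = np.array([1.0, -2.0])
orbit = [g(phi) @ v for phi in np.linspace(0, 2*np.pi, 100)]

assert all(np.isclose(np.linalg.norm(p), np.sqrt(5)) for p in orbit)
assert not np.isclose(np.linalg.norm([1.0, 1.0]), np.sqrt(5))   # w is unreachable
```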

For the stabilizer, which is always a subgroup, we have to be careful with the definition of ##G##. If we restrict ourselves to angles ##\varphi \in [0,2 \pi)##, then we get a free operation, as only the rotation by ##0## stabilizes elements. But if we allow any real number as an angle, then ##G_v= 2\pi \mathbb{Z}\,##.
If we generally consider a group ##G## which operates on a vector space ##V##, we usually require its structure (in this case the group structure) to be respected. This means ##e.v=v##, ##g.(h.v) =(gh).v## and ##g^{-1}.(g.v)=v##. Now these properties can be summarized by saying that
\begin{aligned}
\varphi \, : \, G &\longrightarrow GL(V) \\
\varphi \, : \, g &\longmapsto (v \longmapsto g.v)
\end{aligned}

is a group homomorphism, where ##GL(V)## is the general linear group of ##V##, the group of all regular (invertible) linear maps ##V \longrightarrow V##. Homomorphism means ##\varphi## maps group to group and ##\varphi(g\cdot h)=\varphi(g) \cdot \varphi(h)##. In this case ##V## is called the representation space and ##\varphi## a representation of ##G##. Thus an operation and a representation are the same thing: it's only a different way to look at it. One emphasizes the group side of it, the other the vector space side. 9) 10)
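For the rotation example, the homomorphism property is just the familiar angle-addition rule; a quick numeric check (angles chosen arbitrarily):

```python
import numpy as np

# phi: g |-> (v |-> g.v) is a homomorphism: rotating by alpha and then by
# beta equals rotating by alpha + beta, i.e. phi(g*h) = phi(g) phi(h).
def g(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

alpha, beta = 0.4, 1.1
assert np.allclose(g(alpha) @ g(beta), g(alpha + beta))   # homomorphism property
assert np.allclose(g(0.0), np.eye(2))                     # identity maps to identity
```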

Let me finish with three important examples of representations which play a crucial role in the standard model of particle physics. To this end, let ##G## be a matrix group, e.g. a Lie group like the unitary group, and ##V## a vector space on which this group acts, e.g. ##\mathbb{C}^n##.

The first example is a pure group operation on itself
\begin{aligned}
G \times G &\longrightarrow G\\
(g,h) &\longmapsto ghg^{-1}
\end{aligned}

Here ##G## operates not on a vector space, but on itself instead. A group element ##g## defines a bijective map from ##G## to ##G##, which is called conjugation or inner automorphism. If there is a Lie algebra ##\mathfrak{g}## associated with ##G##, as for matrix groups (in the case of the unitary group these are the skew-Hermitian matrices), we get from this conjugation a naturally induced map ##(g\in G\, , \,X\in \mathfrak{g})##
\begin{aligned}
\operatorname{Ad}\, : \, G &\longrightarrow GL(\mathfrak{g}) \\
\operatorname{Ad}(g)(X) &= g X g^{-1}
\end{aligned}

which is called the adjoint representation of ##G##. 11)
Its representation space is now the Lie algebra ##\mathfrak{g}\,##, which is the tangent space of ##G## at the neutral element, the identity matrix, and as such a vector space. It further means that ##\operatorname{Ad}## is a homomorphism of matrix groups (Lie groups). Hence ##GL(\mathfrak{g})## also has an associated Lie algebra, called the general linear Lie algebra ##\mathfrak{gl(g)}\,##, which is basically nothing else than all square (not necessarily regular) matrices of size ##\operatorname{dim}\mathfrak{g}\,##. The Lie algebra multiplication is given by the commutator
$$[X,Y] = X \cdot Y - Y \cdot X$$
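A minimal numeric sketch (using ##SO(3)## and its Lie algebra of skew-symmetric matrices as an assumed concrete example) showing that conjugation maps the Lie algebra to itself, and that the commutator of two of its elements stays inside it as well:

```python
import numpy as np

# Assumed example: G = SO(3), g = rotation about the z-axis; its Lie algebra
# so(3) consists of the skew-symmetric 3x3 matrices. The conjugation
# Ad(g)X = g X g^{-1} maps so(3) to itself, and so does the commutator.
theta = 0.5
g = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

X = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])   # z-rotation generator
Y = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])   # x-rotation generator

AdX = g @ X @ np.linalg.inv(g)
commutator = X @ Y - Y @ X

assert np.allclose(AdX, -AdX.T)                  # Ad(g)X is again skew-symmetric
assert np.allclose(commutator, -commutator.T)    # [X, Y] lies in so(3) too
```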
The left multiplication in ##\mathfrak{g}## gives rise to a Lie algebra operation
\begin{aligned}
\varphi\, : \,\mathfrak{g} \times \mathfrak{g} & \longrightarrow \mathfrak{g}\\
(X,Y) &\longmapsto [X,Y]
\end{aligned}

Remember that we said an operation is required to respect the structure. Here it means that ##\varphi##, read as the map ##X \longmapsto \varphi(X,\cdot\,) = [X,\cdot\,]##, is a Lie algebra homomorphism
$$\varphi([X,Y])(Z) = [\varphi(X),\varphi(Y)](Z)=\varphi(X)\varphi(Y)(Z)-\varphi(Y)\varphi(X)(Z)$$
which is nothing else than the Jacobi identity. Furthermore as an operation is always a representation, we get a representation
\begin{aligned}
\operatorname{ad}\, : \,\mathfrak{g} &\longrightarrow \mathfrak{gl(g)} \\
\operatorname{ad}(X)(Y) &= [X,Y]
\end{aligned}

which is called the adjoint representation of ##\mathfrak{g}\,##. 12) It's simply the left-multiplication in the Lie algebra. The homomorphism property, which is the Jacobi identity, is also the defining property of a derivation:
$$\operatorname{ad}(Z)([X,Y])= [\operatorname{ad}(Z)(X),Y]+[X,\operatorname{ad}(Z)(Y)]$$
Therefore each ##\operatorname{ad}(X)## is also called an inner derivation of ##\mathfrak{g}\,##, just as a conjugation is called an inner automorphism of ##G##. Both adjoint representations, the one of ##G## and the one of ##\mathfrak{g}##, are related by the following formula ##(X \in \mathfrak{g})##
\begin{aligned}
\operatorname{Ad}(\exp X) = \exp(\operatorname{ad} X)
\end{aligned}

The defining property for a derivation is just the Leibniz rule of differentiation (product rule). Also closely related are the Lie derivative and the Levi-Civita connection. In the end all of them are just versions of the Leibniz rule we learned at school.
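The relation ##\operatorname{Ad}(\exp X)=\exp(\operatorname{ad}X)## between the two adjoint representations, and the Jacobi identity, can both be checked numerically. The sketch below uses ##\mathfrak{so}(3)## as an assumed example and a hand-rolled truncated-series matrix exponential:

```python
import numpy as np

# Numeric check of Ad(exp X) = exp(ad X) and of the Jacobi identity, using
# so(3) as the Lie algebra (an assumed example; the article treats general
# matrix groups).

def expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for small A)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def bracket(A, B):
    return A @ B - B @ A

# Standard basis of so(3)
L = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)]

def coords(S):
    """Coordinates of a skew-symmetric matrix S in the basis L."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

X = 0.3 * L[0] + 0.7 * L[2]

# ad(X) as a 3x3 matrix: its j-th column holds the coordinates of [X, L_j]
adX = np.column_stack([coords(bracket(X, Lj)) for Lj in L])

# Ad(exp X) as a 3x3 matrix acting on coordinates
g = expm(X)
Ad_g = np.column_stack([coords(g @ Lj @ np.linalg.inv(g)) for Lj in L])

assert np.allclose(Ad_g, expm(adX), atol=1e-8)   # Ad(exp X) = exp(ad X)

# Jacobi identity: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0
Y, Z = L[0], L[1]
jac = bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X)) + bracket(Z, bracket(X, Y))
assert np.allclose(jac, 0)
```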

### Summary

• Arithmetic operations: ##+\; , \;-\; , \;\cdot \; , \; :##
• (Linear) Operators: ##L : (V,||.||_V) \longrightarrow (W,||.||_W)##
• (Linear) Functionals: ##L : (V,||.||_V) \longrightarrow (\mathbb{R},|.|_\mathbb{R}) ## or ##(\mathbb{C},|.|_\mathbb{C}) ##
• Operation in general: ##Operator\,.\,Object_{old} = Object_{new}##
• Group Operations: ##G \times V \longrightarrow V##
• Group Representation: ##G \longrightarrow GL(V)##
• Conjugation in groups: ##g.h = ghg^{-1}##

24 replies
1. samalkhaiat says:
fresh_42

…..
And this is the point where this "infinitesimal generator" terminology loses me:

In physics, an infinitesimal generator (mostly of a symmetry group) is related to what mathematicians call a derivation or vector field. We always speak of infinitesimal generators of a group and its representations. So, in this sense ##\mathfrak{g}## is the infinitesimal generator of ##G##, ##\operatorname{ad}## is the infinitesimal generator of ##\operatorname{Ad}##, and more generally, for a simply connected Lie group ##G##, ##d\pi## is the infinitesimal generator of ##G## in the representation ##\pi##.

The group of automorphisms (##t \to \alpha_{t}##) of a ##C^{*}##-algebra ##\mathcal{A}## has an infinitesimal generator given by $$\delta (A) = \lim_{t \to 0} \frac{\alpha_{t}(A) - A}{t} ,$$ with natural domain $$D(\delta) = \left\{ A \in \mathcal{A} \,\middle|\, \lim_{t \to 0} t^{-1}(\alpha_{t}(A) - A) \text{ exists} \right\} .$$

Clearly ##\delta## is a ##*##-derivation, i.e., a linear operation which commutes with complex conjugation/adjoint, ##\left(\delta (A) \right)^{*} = \delta ( A^{*})##, and satisfies the Leibniz rule. Indeed, writing $$\frac{1}{t}\left( \alpha_{t}(AB) - AB\right) \equiv \frac{1}{t}\left( \alpha_{t}(A) - A\right) \cdot B + \alpha_{t}(A) \cdot \frac{1}{t} \left( \alpha_{t}(B) - B\right) ,$$ and taking the limit, we get $$\delta(AB) = \delta (A) \cdot B + A \cdot \delta (B) .$$

In QM, the algebra ##\mathcal{A}## is given by ##B(\mathcal{H})##, the algebra of bounded operators on a separable Hilbert space. In this case one can show that for every 1-parameter group of automorphisms of ##B(\mathcal{H})## (continuous in the weak operator topology), there exists a 1-parameter group of unitary operators ##U(t) = e^{iHt}## (continuous in the weak topology) such that $$\alpha_{t}(A) \equiv A(t) = e^{iHt} A e^{-iHt} .$$ Here, we have the so-called inner derivation $$\delta_{H}(A) = i[H , A],$$ with domain $$D(\delta_{H}) = \left\{ A \in B(\mathcal{H}) \,\middle|\, [H,A] \text{ is densely defined} \right\} .$$ When the derivation ##\delta## is an inner derivation ##\delta_{X}##, we call ##X## the infinitesimal generator. The same extends to every representation ##\rho## of a ##C^{*}##-algebra on the Hilbert space ##\mathcal{H}_{\rho}##: $$\rho(\delta_{H}(A)) = i [H_{\rho} , \rho (A)], \quad \rho (A) \in D(\delta) .$$ In this case, we call ##H_{\rho}## the infinitesimal generator of the group of automorphisms in the representation ##\rho##.

2. fresh_42 says:
samalkhaiat

In physics, an infinitesimal generator (mostly of a symmetry group) is related to what mathematicians call a derivation or vector field. We always speak of infinitesimal generators of a group and its representations….

Thank you. It drove me nuts what exactly a generator is meant to be. On all the occasions I have seen it here, nobody ever explained it, and in 90% of the cases I got the impression that the poster himself wasn't sure. I have a 400-page book about Lie groups, but Varadarajan doesn't mention "generator" at all; maybe in the context of one-parameter subgroups, I am not quite sure. I have bookmarked your post so I can look up the details next time I try to understand someone speaking of generators, or quote it as a reference here. But I think I've understood the point. I'm coming from the algebraic side, and in this context they are simply vectors; all that matters is which vector space, algebra, vector field or bundle they live in, an information which is basically never given explicitly.

Maybe I'll also ask you about the ##i## which mathematically turns a homomorphism into an anti-homomorphism but nobody ever cares. I understand that it doesn't change representations qualitatively, nevertheless I find it a bit sloppy, e.g. when Pauli matrices are used.

3. samalkhaiat says:
fresh_42

Maybe I'll also ask you about the ##i## which mathematically turns a homomorphism into an anti-homomorphism but nobody ever cares. I understand that it doesn't change representations qualitatively, nevertheless I find it a bit sloppy, e.g. when Pauli matrices are used.

When ##G## is a (Lie) symmetry group of the physical system, the dynamical variables of the system transform in some unitary representation ##\rho## of ##G##, and the "infinitesimal generators" in the representation ##\rho## (either directly or indirectly via Noether's theorem) represent the observables of the system, i.e., the real quantities (numbers) that we measure in the lab. In quantum mechanics, observables are represented by Hermitian operators/matrices. So, we need those generators to be Hermitian, not the anti-Hermitian ones used in the mathematical literature: In some coordinates ##\alpha^{a}## on the group manifold we may use the exponential map and write $$\rho(g) = e^{\alpha^{a} \left( d \rho (X_{a})\right)} .$$ But ##d \rho (X_{a})## is not good for us, so we instead define the Hermitian generators by $$i \, d \rho (X_{a}) = J_{a} ,$$ and rewrite $$\rho (g) = e^{- i \alpha^{a} J_{a}} .$$ In field theory, Noether's theorem dresses these ##J_{a}##'s with the dynamical pair ##(\varphi^{r} , \pi_{r})##, giving us the so-called Noether charges. These are Lorentz-scalar and time-independent (i.e., conserved) objects given by $$Q_{a} = \int_{\mathbb{R}^{3}} d^{3}x \, \pi_{r} (x) J_{a} \varphi^{r}(x) .$$ In this case, and to annoy you more, we actually call the Noether charges the generators of the symmetry group of transformations in the unitary representation ##\rho (G)##. The term infinitesimal generators is natural in this case because these ##Q_{a}##'s do actually generate the correct infinitesimal transformations of the dynamical variables $$\delta_{a}\varphi^{r} (x) = [i Q_{a} , \varphi^{r}(x)] ,$$ and satisfy the Lie algebra of the group ##G##: $$[Q_{a} , Q_{b}] = i C_{ab}{}^{c} Q_{c} .$$