
A Journey to The Manifold SU(2): Representations


Part 1

 

Representations

Image source: [23]

 

6. Some useful bases of ##\mathfrak{su}(2,\mathbb{C})##

Notation differs from author to author: the numbering of the Pauli matrices ##(\text{I 4}), (\ref{Pauli-I})##, the linear combinations of them used to define the basis vectors ##\mathfrak{B}## of ##\mathfrak{su}(2,\mathbb{C}) \; (\text{I 5}), (\ref{Pauli-II}), (\ref{Pauli-III})##, the embedding of the orthogonal groups ##(\text{I 1})## with a new first instead of a new last dimension, the choice of ##(z,w)##, resp. their signs, in the representation as ##SU(2,\mathbb{C})## matrices ##(\text{I 3})##, etc. All of these affect the coordinates of vectors and matrices with respect to these bases, so, as is always the case when bases are chosen, make sure you know which choices have been made.

6.1. The Pauli Matrices

I’ve already mentioned the Pauli matrices
\begin{equation}\label{Pauli-I}
\sigma_1=\begin{bmatrix}
0&1\\1&0
\end{bmatrix}\, , \,
\sigma_2=\begin{bmatrix}
0&-i\\i&0
\end{bmatrix}
\, , \,
\sigma_3=\begin{bmatrix}
1&0\\0&-1
\end{bmatrix}
\end{equation}
which are not a basis of ##\mathfrak{su}(2,\mathbb{C})##. They fulfill the condition of a vanishing trace, but are not skew-Hermitian. Therefore they have to be multiplied by ##i## to achieve a basis ##\mathfrak{B}##
\begin{equation}\label{Pauli-II}
\mathfrak{e}_1=i \cdot \sigma_1=\begin{bmatrix}
0&i\\i&0
\end{bmatrix}\, , \,
\mathfrak{e}_2=i \cdot \sigma_2=\begin{bmatrix}
0&1\\-1&0
\end{bmatrix}  \, , \\
\mathfrak{e}_3=i \cdot \sigma_3=\begin{bmatrix}
i&0\\0&-i
\end{bmatrix}
\end{equation}
In this special case, the matrices of ##\mathfrak{B}## belong to the group ##SU(2,\mathbb{C})## as well as to its Lie algebra ##\mathfrak{su}(2,\mathbb{C})##, which is not true anymore for the Gell-Mann matrices and ##SU(3,\mathbb{C})##. If we scale ##\mathfrak{B}## by setting
\begin{equation}\label{Pauli-III}
U = \frac{1}{2} \mathfrak{e}_1\, , \,V = \frac{1}{2} \mathfrak{e}_2\, , \,W = -\frac{1}{2} \mathfrak{e}_3
\end{equation}
we get an especially easy to remember Lie multiplication
\begin{equation}\label{Pauli-IV}
[U,V]=W\, , \,[V,W]=U\, , \,[W,U]=V
\end{equation}
because all products are achieved by a cyclic shift to the left.
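These relations are quick to confirm numerically. The following sketch (my own addition, not part of the original text; NumPy assumed) builds ##U,V,W## from the Pauli matrices and checks the cyclic multiplication table (\ref{Pauli-IV}):

```python
import numpy as np

# Pauli matrices as in (Pauli-I)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# scaled su(2) basis (Pauli-III): U = e1/2, V = e2/2, W = -e3/2 with e_k = i*sigma_k
U, V, W = 0.5j * s1, 0.5j * s2, -0.5j * s3

def brk(A, B):
    """Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

assert np.allclose(brk(U, V), W)   # [U, V] = W
assert np.allclose(brk(V, W), U)   # [V, W] = U
assert np.allclose(brk(W, U), V)   # [W, U] = V
```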

6.2. Lie Algebra of type ##A_1##

The Lie algebra ##\mathfrak{su}(2,\mathbb{C})## is a simple Lie algebra, which means it has no ideals other than ##\{0\}## and itself. It is the smallest of all simple Lie algebras and its Dynkin diagram is just a single dot ##\boxdot##, the type ##A_1##. Its maximal toral subalgebra, the nilpotent and self-normalizing Cartan subalgebra, is one-dimensional with basis ##\{\mathfrak{e}_3\}##. Toral means that its elements allow a simultaneous diagonalization of the left multiplication, which is the adjoint representation ##\mathfrak{ad}## of the Lie algebra. Now if we look it up in the classification table of simple Lie algebras, we will find ##\mathfrak{sl}(2,\mathbb{C})## associated with type ##A_1##, the Lie algebra of all linear transformations with vanishing trace. The fact that both Lie algebras are basically the same is not immediately obvious, but it has the advantage that the representations of ##\mathfrak{sl}(2,\mathbb{C})## are well-known and we can use them. The basis transformation between the two is defined by
\begin{equation}\label{XII}
\begin{aligned}
H &= i \cdot \mathfrak{e}_3 = - \sigma_3 = -2iW \\[6pt]
X &= \frac{1}{2}\,\mathfrak{e}_1 - \frac{1}{2}\,i\,\mathfrak{e}_2 = \frac{1}{2}\,i\, \sigma_1 + \frac{1}{2}\,\sigma_2=U-iV\\[6pt]
Y &= -\frac{1}{2}\,\mathfrak{e}_1 - \frac{1}{2}\,i\,\mathfrak{e}_2 = -\frac{1}{2}\,i\, \sigma_1 + \frac{1}{2}\,\sigma_2=-U-iV
\end{aligned}
\end{equation}
The basis vector ##X## is the nilpotent operator that climbs the ladder up, and the basis vector ##Y## is the nilpotent operator that climbs the ladder down. They span the eigenspaces of the semisimple, toral, i.e. diagonalizable transformation ##H## with the eigenvalues ##\pm 2##, i.e. the Lie multiplication is
\begin{equation}\label{XIII}
[H,X]=2\cdot X\; , \; [H,Y]=-2\cdot Y\; , \; [X,Y]=H
\end{equation}
We note that ##H## spans the Cartan subalgebra, which determines the root structure and therewith the representations. The notion of diagonalization is a bit sloppy here, because it uses the important result that the Jordan-Chevalley decomposition of the matrices equals the abstract Jordan-Chevalley decomposition of the operators ##\mathfrak{ad}(X)\, , \,\mathfrak{ad}(Y)## and ##\mathfrak{ad}(H) \,[6]##.

As an example of actual vector fields, which can be viewed as elements of a Lie algebra, we can think of the following operators on smooth, real-valued functions ##\mathcal{C}^\infty(\mathbb{R})## in one variable ##x##
$$
H=2x \cdot \frac{d}{dx} \, , \, X=x^2\cdot \frac{d}{dx}\, , \, Y=- \frac{d}{dx}
$$
or in terms of Pauli matrices
$$
\sigma_1 \triangleq -ix^2\frac{d}{dx}-i\frac{d}{dx}\, , \,\sigma_2 \triangleq x^2\frac{d}{dx} - \frac{d}{dx} \, , \, \sigma_3 \triangleq -2x\frac{d}{dx}
$$
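These operator identities are quick to confirm symbolically, e.g. with SymPy (a sketch of my own, not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# the vector fields as operators on smooth functions
H = lambda g: 2*x*sp.diff(g, x)
X = lambda g: x**2*sp.diff(g, x)
Y = lambda g: -sp.diff(g, x)

def brk(A, B, g):
    """Commutator of differential operators, applied to g."""
    return sp.expand(A(B(g)) - B(A(g)))

assert sp.simplify(brk(H, X, f) - 2*X(f)) == 0   # [H, X] = 2X
assert sp.simplify(brk(H, Y, f) + 2*Y(f)) == 0   # [H, Y] = -2Y
assert sp.simplify(brk(X, Y, f) - H(f)) == 0     # [X, Y] = H
```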
The Lie algebra elements ##H,X,Y## are expressed by the matrices
$$
H=\begin{bmatrix}
-1&0\\0&1
\end{bmatrix} \, , \,
X=\begin{bmatrix}
\,0&0\, \\ \,i&0\,
\end{bmatrix} \, , \,
Y=\begin{bmatrix}
\,0&-i\, \\ \,0&0\,
\end{bmatrix}
$$
according to the same basis of ##\mathbb{C}^2## as used above. However, they can also be represented by the real matrices
\begin{equation}\label{XIV}
H=\begin{bmatrix}
1&0\\0&-1
\end{bmatrix} \, , \,
X=\begin{bmatrix}
\,0&1\, \\ \,0&0\,
\end{bmatrix} \, , \,
Y=\begin{bmatrix}
\,0&0\, \\ \,1&0\,
\end{bmatrix}
\end{equation}
which also shows the “ladder climbing” and their semisimple and nilpotent nature as operators. Especially when it comes to exponentiation ##Z \mapsto \exp(Z)##, the matrix representation (\ref{XIV}) may be easier to deal with.
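The last remark is easy to make concrete (my own sketch, NumPy assumed): ##X## and ##Y## in (\ref{XIV}) are nilpotent, so their exponential series terminates after the linear term, while ##\exp(tH)## is just ##\operatorname{diag}(e^{t}, e^{-t})##.

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])

def expm_series(A, terms=30):
    """Matrix exponential by truncated power series (fine for 2x2)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 0.7
assert np.allclose(X @ X, 0)                               # X is nilpotent
assert np.allclose(expm_series(t * X), np.eye(2) + t * X)  # exp(tX) = 1 + tX
assert np.allclose(expm_series(t * H), np.diag([np.exp(t), np.exp(-t)]))
```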

7. Representations of ##\mathfrak{su}(2,\mathbb{C})##

Up to isomorphism there is only one three-dimensional (semi-)simple complex Lie algebra, which we presented in various ways: by its Dynkin diagram ##A_1##, as the Lie algebra ##\mathfrak{sl}(2,\mathbb{C})## of the group ##SL(2,\mathbb{C})## of matrices with determinant ##1##, or as the Lie algebra of the group of isometries ##SU(2,\mathbb{C})##. The Lie algebra isomorphism we used was
\begin{equation}\label{XV}
\begin{aligned}
\varphi\, : \,&\;\mathfrak{sl}(2,\mathbb{C}) &\longrightarrow &&\mathfrak{su}(2,\mathbb{C})\\
&\;\operatorname{span}_\mathbb{C} \{H,X,Y\} &\longrightarrow&&\operatorname{span}_\mathbb{C} \{U,V,W \}
\end{aligned}
\end{equation}
with
$$
\varphi = \begin{bmatrix}
0&1&-1 \\ 0&-i&-i \\ -2i&0&0
\end{bmatrix}
$$

A representation ##(V,f)## of a Lie algebra ##\mathfrak{g}## is a vector space ##V## together with a Lie algebra homomorphism ##f\, : \,\mathfrak{g}\longrightarrow \mathfrak{gl}(V)##, i.e. ##f## is linear and $$f([A,B])(v)=[f(A),f(B)](v)=f(A)(f(B)(v))-f(B)(f(A)(v))$$
or, in short, without explicitly mentioning the operation ##f##
\begin{equation}\label{XXV}
[A,B].v=A.B.v -B.A.v
\end{equation}

Now if ##(V,g)## is a representation of ##\mathfrak{su}(2,\mathbb{C})##, then ##(V,f)## with ##f := g \circ \varphi## defines a representation of ##\mathfrak{sl}(2,\mathbb{C})##, which in turn means we don’t have to bother which version of the three-dimensional, simple Lie algebra we take if we are interested in its representations. Thus we will consider w.l.o.g. the representations of ##\mathfrak{sl}(2,\mathbb{C})## with the more convenient basis ##(6)## and ##(7)## instead of the representations of ##\mathfrak{su}(2,\mathbb{C})##, and among those the finite dimensional, irreducible representations (irreps), which means ##\operatorname{dim}V < \infty##, and ##\{0\}## and ##V## are the only subspaces ##U## of ##V## with ##f(\mathfrak{sl}(2,\mathbb{C}))(U)\subseteq U##.

7.1. Representations of ##\mathfrak{sl}(2,\mathbb{C})##, weights and CSA

Let ##(V,f)## be a finite dimensional representation of ##\mathfrak{sl}(2,\mathbb{C})##, or equivalently a finite dimensional vector space on which ##\mathfrak{sl}(2,\mathbb{C})## operates, or equivalently a finite dimensional ##\mathfrak{sl}(2,\mathbb{C})## module. All these wordings simply mean equation (\ref{XXV}). The maximal toral subalgebra of ##\mathfrak{sl}(2,\mathbb{C})## is ##\mathfrak{h} = \mathbb{C}\cdot H##. It is the Cartan subalgebra (CSA) of ##\mathfrak{sl}(2,\mathbb{C})##. So ##\mathfrak{h}## acts semisimply on ##V##, i.e. the action of ##\mathfrak{h}## on ##V## can be diagonalized and ##V## can be written as a direct sum of subspaces ##V_\lambda = \{v\in V\, : \, f(H)(v)=H.v=\lambda  v \}##. Here we use that ##\mathbb{C}## is algebraically closed, which guarantees that all eigenvalues ##\lambda## exist in the scalar field ##\mathbb{C}##.
\begin{equation}\label{XXVI}
V=\bigoplus_{\lambda \in \mathbb{C}}\;V_\lambda = \bigoplus_{\lambda\in \mathbb{C}}\; \{ v\in V\, : \,H.v=\lambda v \}
\end{equation}
If ##\lambda## isn’t an eigenvalue, then ##V_\lambda = \{0\}.## The subspaces ##V_\lambda## are called weight spaces and ##\lambda## a weight of ##f(H)## on ##V##. In general, i.e. for other semisimple Lie algebras such as ##\mathfrak{su}(3,\mathbb{C})##, the weights ##\lambda## are linear forms on ##\mathfrak{h}##, but as the Cartan subalgebra is one-dimensional in our case, the linear span of ##H##, the weights ##\lambda## are simply complex numbers here. Indeed they are integers.

Theorem: (Classification of ##\mathfrak{sl} (2,\mathbb{C})## representations)

A proof can be found in ##[6]##. For the situation of ##\mathfrak{su}(2,\mathbb{C})## in particular see ##[8]##.

Let ##(V,f)## be an irreducible, ##(m+1)## dimensional representation of

$$\mathfrak{sl}(2,\mathbb{C})= \operatorname{span}_\mathbb{C}\{H,X,Y\, : \,[H,X]=2X,\,[H,Y]=-2Y,\,[X,Y]=H\}$$

  1. All weights ##\lambda##, i.e. the eigenvalues of the semisimple (diagonalizable) operation of ##H## on ##V##, are integers and the weight spaces (eigenspaces) ##V_\lambda## of this operation are one-dimensional. The highest (maximal) weight is ##m## and a vector ##v_m \in V_m## is called a maximal vector.
  2. ##\displaystyle V = \bigoplus_{\stackrel{k=0}{\lambda=-m+2k}} ^{m}V_{\lambda} = \bigoplus_{\stackrel{k=0}{\lambda=-m+2k}}^{m}\;\{ v\in V\, : \,H.v=\lambda \cdot v \}##
  3. There is up to isomorphisms only one unique finite dimensional, irreducible representation of ##\mathfrak{sl}(2,\mathbb{C})##, resp. ##\mathfrak{su}(2,\mathbb{C})## per dimension of the representation space ##V##.
  4. Let ##v_m## be a maximal vector. Then for ##k=0,\ldots , m## we define
    $$
    v_{m-2k-2} := \frac{1}{(k+1)!}\;Y^{k+1}.v_m\; \text{ and } \;v_{-m-2}=v_{m+2}=0
    $$
    and get the following operation rules
    \begin{equation*}
    \begin{array}{ccc}
    H.v_{m-2k}&=&(m-2k)\;v_{m-2k}\\
    X.v_{m-2k}&=&(m-k+1)\;v_{m-2k+2}\\
    Y.v_{m-2k}&=&(k+1)\;v_{m-2k-2}
    \end{array}
    \end{equation*}
  5. If ##V## is any (not necessarily irreducible) finite-dimensional representation, then the eigenvalues are all integers, and each occurs along with its negative an equal number of times. In any decomposition of ##V## into irreducible submodules, the number of summands is precisely ##\operatorname{dim}V_0 + \operatorname{dim} V_1\,##.
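The operation rules in item 4 determine the matrices of ##H,X,Y## completely. As a sketch (the helper name `irrep` is my own, NumPy assumed), one can build the ##(m+1)##-dimensional irrep in the basis ##v_m, v_{m-2},\ldots,v_{-m}## and check the defining relations:

```python
import numpy as np

def irrep(m):
    """Matrices of H, X, Y on the (m+1)-dim irrep, basis v_m, v_{m-2}, ..., v_{-m}."""
    n = m + 1
    H = np.diag([float(m - 2*k) for k in range(n)])
    X, Y = np.zeros((n, n)), np.zeros((n, n))
    for k in range(n):
        if k >= 1:          # X.v_{m-2k} = (m-k+1) v_{m-2k+2}
            X[k-1, k] = m - k + 1
        if k <= m - 1:      # Y.v_{m-2k} = (k+1) v_{m-2k-2}
            Y[k+1, k] = k + 1
    return H, X, Y

def brk(A, B):
    return A @ B - B @ A

for m in range(1, 6):
    H, X, Y = irrep(m)
    assert np.allclose(brk(H, X), 2*X)
    assert np.allclose(brk(H, Y), -2*Y)
    assert np.allclose(brk(X, Y), H)
```

For ##m=1## this reproduces exactly the matrices in (\ref{XIV}).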

7.2. The adjoint representations of ##SU(2,\mathbb{C})## and ##\mathfrak{su}(2,\mathbb{C})##

Let us return to ##SU(2,\mathbb{C})## generated as a group by
\begin{equation}\label{R_I}
\mathfrak{B}=\left\{ \mathfrak{e_1}=i\sigma_1=\begin{bmatrix}
0&i \\ i&0
\end{bmatrix}, \\
\mathfrak{e_2}=i\sigma_2=\begin{bmatrix}
0&1 \\ -1&0
\end{bmatrix}, \\
\mathfrak{e_3}=i\sigma_3=\begin{bmatrix}
i&0 \\ 0&-i
\end{bmatrix}
\right\}
\end{equation}
as defined in section ##\text{I 3}##. It might be a bit confusing that ##\mathfrak{B}## also served as a basis of ##\mathfrak{su}(2,\mathbb{C})##, defined (with a different scaling) in ##(3)## with Lie multiplication ##(4)##, although the term basis doesn’t make sense for a group. The fact that the same matrices are elements of both objects, the Lie group ##SU(2,\mathbb{C})## as well as its Lie algebra ##\mathfrak{su}(2,\mathbb{C})##, can lead to confusion. It is therefore important to distinguish between the two adjoint representations despite their common name and common representation space ##V=\mathfrak{su}(2,\mathbb{C})##. W.r.t. ##\mathfrak{B}## we get for both adjoint representations
\begin{equation}\label{Ad-I}
\begin{aligned}
\operatorname{Ad}&: &SU(2,\mathbb{C}) & \longrightarrow & \;&GL(\mathfrak{su}(2,\mathbb{C})) \\
\operatorname{Ad}&: &g &\longmapsto &\;&\left(X \mapsto gXg^{-1}\right)
\end{aligned}
\end{equation}
with
\begin{equation*}\label{Ad-II}
\operatorname{Ad}(\mathfrak{e}_1) = \begin{bmatrix}
1&0&0\\0&-1&0\\0&0&-1
\end{bmatrix}\; , \;\operatorname{Ad}(\mathfrak{e}_2)= \begin{bmatrix}
-1&0&0\\0&1&0\\0&0&-1
\end{bmatrix}\; ,\\ \;\operatorname{Ad}(\mathfrak{e}_3)= \begin{bmatrix}
-1&0&0\\0&-1&0\\0&0&1
\end{bmatrix}
\end{equation*}
and
\begin{equation}\label{ad-III}
\begin{aligned}
\mathfrak{ad}&: &\mathfrak{su}(2,\mathbb{C}) & \longrightarrow & \; &\mathfrak{gl}(\mathfrak{su}(2,\mathbb{C})) \\
\mathfrak{ad}&: &X &\longmapsto &\;&\left(Y \mapsto [X,Y]=XY-YX\right)
\end{aligned}
\end{equation}
with
\begin{equation*}\label{ad-IV}
\mathfrak{ad}(\mathfrak{e}_1) = \begin{bmatrix}
0&0&0\\0&0&2\\0&-2&0
\end{bmatrix}\; , \;\mathfrak{ad}(\mathfrak{e}_2)= \begin{bmatrix}
0&0&-2\\0&0&0\\2&0&0
\end{bmatrix}\; ,\\  \;\mathfrak{ad}(\mathfrak{e}_3)= \begin{bmatrix}
0&2&0\\-2&0&0\\0&0&0
\end{bmatrix}
\end{equation*}
The matrices of ##\operatorname{Ad}(\mathfrak{e}_i)## w.r.t. ##\mathfrak{B}## are diagonal, so each basis vector of ##\mathfrak{su}(2,\mathbb{C})## spans a subspace that is invariant under these three group elements. On the other hand, the weight spaces ##V_\lambda## of the adjoint representation ##\mathfrak{ad}## introduced in the previous paragraph ##7.1## are not similarly obvious. Although the maximal toral subalgebra, the Cartan subalgebra CSA which is used to find them, is spanned by the basis vector ##\mathfrak{e}_3## of ##\mathfrak{B}##, we need the basis ##\mathfrak{B}'=\{H,X,Y\}## defined in ##(5)## and ##(6)## to see the weight spaces ##V_{-2},V_0,V_2## from our theorem directly, because they lie, in a way, diagonally to the basis vectors ##\mathfrak{e}_1,\mathfrak{e}_2\,##.
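Both families of matrices can be recomputed mechanically. The sketch below (my own code, NumPy assumed; `coords` reads off coordinates w.r.t. ##\mathfrak{B}## via the trace form, for which ##\mathfrak{B}## is orthonormal) builds the matrices of ##\operatorname{Ad}(\mathfrak{e}_i)## and ##\mathfrak{ad}(\mathfrak{e}_i)##:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
E = [1j * s1, 1j * s2, 1j * s3]   # the basis B of su(2)

def coords(M):
    """Coordinates of M w.r.t. B, using <A,B> = -tr(AB)/2,
    for which B is an orthonormal basis."""
    return np.array([(-np.trace(M @ e) / 2).real for e in E])

def Ad(g):
    """Matrix of X -> g X g^{-1} w.r.t. B."""
    return np.column_stack([coords(g @ e @ np.linalg.inv(g)) for e in E])

def ad(X):
    """Matrix of Y -> [X, Y] w.r.t. B."""
    return np.column_stack([coords(X @ e - e @ X) for e in E])

assert np.allclose(Ad(E[0]), np.diag([1, -1, -1]))
assert np.allclose(ad(E[0]), [[0, 0, 0], [0, 0, 2], [0, -2, 0]])
```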

7.3. Generators

Physicists often speak of infinitesimal generators of a matrix group ##G## and a representation ##f : G \longrightarrow GL(V)## when they refer to their tangent spaces. In this sense ##\mathfrak{g}\cong T_e(G)## is the infinitesimal generator of ##G##, ##\mathfrak{ad}## is the infinitesimal generator of ##\operatorname{Ad}##, and more generally, for a simply connected Lie group ##G##, ##df## is the infinitesimal generator of the representation ##f##. It would be better to refer to the tangent spaces directly than to use group-theoretic terms and transport them into the tangent spaces via the code words (infinitesimal) generator. Anyway, the adjoint representations here are the left multiplication ##\mathfrak{ad}## of the Lie algebra ##\mathfrak{g}##, and the conjugation ##\operatorname{Ad}## of Lie algebra elements (vector fields at ##1##) by group elements, cp. section ##\text{I 3}, [2], [19]## and ##[16]##. They are related by
$$
\operatorname{Ad}(\exp X) = e^{\mathfrak{ad} X} \quad (X \in \mathfrak{g})
$$
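This identity can be checked numerically as well (again my own sketch, NumPy assumed; the truncated power series suffices for small matrices):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
E = [1j * s1, 1j * s2, 1j * s3]   # basis B of su(2)

def expm(A, terms=40):
    """Matrix exponential via truncated power series (fine for small matrices)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = out.copy()
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def coords(M):
    # coordinates w.r.t. B, using the trace form <A,B> = -tr(AB)/2
    return np.array([(-np.trace(M @ e) / 2).real for e in E])

def Ad(g):
    return np.column_stack([coords(g @ e @ np.linalg.inv(g)) for e in E])

def ad(X):
    return np.column_stack([coords(X @ e - e @ X) for e in E])

# an arbitrary element of su(2): a real linear combination of B
Xsu = 0.3 * E[0] - 0.7 * E[1] + 0.5 * E[2]

assert np.allclose(Ad(expm(Xsu)), expm(ad(Xsu)))   # Ad(exp X) = e^{ad X}
```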
As an example, let us consider the possible (up to diffeomorphisms) Lie algebras of vector fields on the real line ##[1]##. There are only three generating transformations, i.e. flows on ##\mathbb{R}##, cp. section ##\text{I 5}## and ##[17]##

\begin{equation}\label{XXVIII}
\begin{array}{ccccl}
\psi_x (\varepsilon , x) & : &x & \longmapsto & \frac{x}{1-\varepsilon x}\\[6pt]
\psi_h (\varepsilon , x) & : &x & \longmapsto & \frac{xe^{\varepsilon}}{e^{-\varepsilon}}= x\cdot e^{2 \varepsilon} \\[6pt]
\psi_y (\varepsilon , x)& : &x & \longmapsto & x - \varepsilon
\end{array}
\end{equation}
The conditions for a flow are

$$\psi_a(0,x)=x \quad \text{ and } \quad  \psi_a(\delta, \psi_a(\varepsilon,x))=\psi_a(\delta + \varepsilon , x)$$

which are easy to verify here. The transformations form one-parameter subgroups of ##G##, see sections ##\text{I 1}## and ##\text{I 5}##. All three together generate the transformation group ##G=SL(2,\mathbb{R})## with elements
$$
x \longmapsto \frac{\alpha x +\beta}{\gamma x + \delta} \quad , \quad \alpha \cdot \delta - \beta \cdot \gamma = 1
$$
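The flow conditions can be verified symbolically, e.g. with SymPy (my own sketch; ##\psi_h## is taken in its simplified form ##x\,e^{2\varepsilon}##):

```python
import sympy as sp

x, d, e = sp.symbols('x delta varepsilon')

psi_x = lambda t, p: p / (1 - t * p)    # psi_x from (XXVIII)
psi_h = lambda t, p: p * sp.exp(2 * t)  # psi_h, simplified form
psi_y = lambda t, p: p - t              # psi_y

for psi in (psi_x, psi_h, psi_y):
    assert sp.simplify(psi(0, x) - x) == 0                      # psi(0, x) = x
    assert sp.simplify(psi(d, psi(e, x)) - psi(d + e, x)) == 0  # additivity in the parameter
```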
If we now compute $$A = d\psi_a = \left. \frac{d}{d\varepsilon}\right|_{\varepsilon = 0}\psi_a$$
we get the (one dimensional) real vector fields, resp. infinitesimal generators
\begin{equation}\label{XXIX}
\begin{array}{c}
X= d\psi_x = x^2 \frac{d}{dx}\\[6pt]
H = d\psi_h = 2x \frac{d}{dx} \\[6pt]
Y = d\psi_y = - \frac{d}{dx}
\end{array}
\end{equation}
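The differentiation at ##\varepsilon = 0## can likewise be delegated to SymPy (again a sketch of my own):

```python
import sympy as sp

x, e = sp.symbols('x varepsilon')

flows = {'X': x / (1 - e * x), 'H': x * sp.exp(2 * e), 'Y': x - e}

# coefficient of d/dx of the generator: derivative of the flow at eps = 0
gens = {name: sp.diff(f, e).subs(e, 0) for name, f in flows.items()}

assert gens['X'] == x**2   # X = x^2 d/dx
assert gens['H'] == 2*x    # H = 2x d/dx
assert gens['Y'] == -1     # Y = -d/dx
```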
These vector fields act on smooth real valued functions ##y(x)\, : \,\mathbb{R} \rightarrow \mathbb{R}## and obey
$$[X,Y](y)= (X\circ Y - Y \circ X )(y)=H(y) \,\text{ or }\, [X,Y]=H$$

$$[H,X](y)=(H\circ X - X \circ H )(y)=2X(y) \,\text{ or }\, [H,X]=2X $$

$$[H,Y](y)= (H\circ Y- Y \circ H )(y)=-2Y(y) \,\text{ or }\, [H,Y]=-2Y$$

which is exactly the basis of the three-dimensional real, simple Lie algebra of type ##A_1##, i.e. ##\mathfrak{sl}_\mathbb{R}(2,\mathbb{R})##, that we have seen in section ##6.2##, represented by the matrices in ##(7)##. The other Lie algebras of vector fields on ##\mathbb{R}## are the one-dimensional subalgebras of ##\mathfrak{sl}_\mathbb{R}(2,\mathbb{R})## and the two two-dimensional subalgebras ##\mathbb{R}\cdot H + \mathbb{R}\cdot X## and ##\mathbb{R}\cdot H + \mathbb{R}\cdot Y##; the latter are sometimes called Borel subalgebras ##\mathcal{B}(\mathfrak{sl}_\mathbb{R}(2,\mathbb{R}))##, which means maximal solvable subalgebras.

7.4. ##\mathfrak{sl}(2,\mathbb{R})## as a representation of ##\mathcal{B}(\mathfrak{sl}(2,\mathbb{R}))##

Let’s add another representation, which is admittedly purely mathematical. However, it is a simple example showing that the adjoint representation is not the only possibility. Here ##\mathfrak{sl}(2,\mathbb{R}) \cong \mathfrak{su}(2,\mathbb{C})## serves again as representation space, but this time of the two-dimensional non-Abelian Lie algebra, which is also a Borel subalgebra of ##\mathfrak{sl}(2,\mathbb{R})##.
$$\mathcal{B}(\mathfrak{sl}(2,\mathbb{R}))= \operatorname{span}_\mathbb{R}\{\, H,X\, : \,[H,X]=2X \,\}$$
For this purpose we define for an arbitrary Lie algebra ##\mathfrak{g}## the vector space of antisymmetric linear transformations
\begin{equation}\label{As-I}
\begin{aligned}
\mathfrak{A}(\mathfrak{g}) = \left\{\alpha \in \mathfrak{gl}(\mathfrak{g}) \, : \, [\alpha(X),Y]+ [X,\alpha(Y)]=0 \,\forall \, X,Y \in \mathfrak{g}\right\}
\end{aligned}
\end{equation}
With a few simple calculations, we see that ##\mathfrak{A}(\mathfrak{g})## is again a Lie algebra and
\begin{equation*}
f \, : \,\mathfrak{g} \longrightarrow \mathfrak{gl}(\mathfrak{A}(\mathfrak{g}))
\end{equation*}

defined by

\begin{equation}\label{As-II} f(X)(\alpha)= X.\alpha =[\mathfrak{ad}(X),\alpha]= \mathfrak{ad}(X)\circ \alpha – \alpha \circ \mathfrak{ad}(X)
\end{equation}

is a Lie algebra homomorphism, which means that ##\mathfrak{g}## operates on ##\mathfrak{A}(\mathfrak{g})##, or equivalently that ##\mathfrak{A}(\mathfrak{g})## is a ##\mathfrak{g}##-module. Obviously ##\mathfrak{A}(\mathfrak{g}) \neq \{0\}## for Abelian Lie algebras, but also for any complex solvable Lie algebra: by Lie’s theorem ##[6]## there is an element ##X_0 \neq 0## with ##[X,X_0]=\lambda(X)\cdot X_0##, and thus ##\alpha(X) := \lambda(X)\cdot X_0## defines a non-trivial antisymmetric transformation. For example, antisymmetric transformations obey rules like
\begin{equation}\label{As-III}
[\alpha(X),[Y,Z]]+[\alpha(Y),[Z,X]]+[\alpha(Z),[X,Y]]=0
\end{equation}
Now because our Borel subalgebra ##\mathcal{B}(\mathfrak{sl}(2,\mathbb{R}))## has such a one dimensional ideal ##\mathbb{R}\cdot X## as required, ##\mathfrak{A}(\mathcal{B}(\mathfrak{sl}(2,\mathbb{R})))\neq \{0\}##. A direct calculation shows that

$$\mathfrak{A}(\mathcal{B}(\mathfrak{sl}(2,\mathbb{R}))) \cong \mathfrak{sl}(2,\mathbb{R})\cong \mathfrak{su}(2,\mathbb{C})$$

and ##\mathfrak{sl}(2,\mathbb{R})## becomes a non-trivial ##\mathcal{B}(\mathfrak{sl}(2,\mathbb{R}))##-module, which is the representation ##f## we were looking for. For Borel subalgebras ##\mathcal{B}(\mathfrak{g})## of other simple Lie algebras ##\mathfrak{g} \ncong \mathfrak{sl}(2,\mathbb{R})## the antisymmetric Lie algebra ##\mathfrak{A}(\mathcal{B}(\mathfrak{g}))## is one-dimensional, i.e. the antisymmetric transformation defined above is the only non-trivial one. This shows once more that ##\mathfrak{su}(2,\mathbb{C}) \cong \mathfrak{sl}(2,\mathbb{R})## is a special case among the simple Lie algebras: there are simply not enough roots in ##\mathfrak{sl}(2,\mathbb{R})## to impose stronger conditions (cp. section ##6.2##). I’ve said that this is a purely mathematical example, although it arose in the context of isotropy groups of bilinear algorithms ##[25]##. This is true insofar as indeed ##\mathfrak{A}(\mathfrak{g})=\{0\}## for all semisimple Lie algebras ##\mathfrak{g}## (including ##\mathfrak{sl}(2,\mathbb{R})##), which play a major role in the standard model of quantum field theory. This can be shown with a bit more effort by extensive use of their root systems, simply by solving the corresponding linear equation system. However, Heisenberg algebras and the Poincaré algebra do have non-trivial antisymmetric and invariant Lie algebras of dimension greater than one. In a sense the antisymmetric Lie algebra ##\mathfrak{A}(\mathfrak{g})## is an indicator for the complexity of the Lie algebra structure of ##\mathfrak{g}##: ##\mathfrak{A}(\mathfrak{g})=\{0\}## for semisimple Lie algebras ##\mathfrak{g}##, ##\mathfrak{A}(\mathfrak{g})\cong \mathfrak{g}## for the Lie algebras consisting of only the first matrix row, and ##\mathfrak{A}(\mathfrak{g})=\mathfrak{gl}(\mathfrak{g})## for Abelian Lie algebras ##\mathfrak{g}##.
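The "direct calculation" can be delegated to the computer: the antisymmetry condition is linear in the entries of ##\alpha##, so the dimension of ##\mathfrak{A}(\mathcal{B}(\mathfrak{sl}(2,\mathbb{R})))## is the kernel dimension of a small linear system. A sketch of my own (NumPy assumed):

```python
import numpy as np
from itertools import product

# structure constants of the Borel subalgebra span{H, X} with [H, X] = 2X:
# brk[i, j, k] = coefficient of b_k in [b_i, b_j], basis b_0 = H, b_1 = X
n = 2
brk = np.zeros((n, n, n))
brk[0, 1, 1] = 2.0    # [H, X] = 2X
brk[1, 0, 1] = -2.0   # [X, H] = -2X

# alpha(b_i) = sum_m alpha[m, i] b_m; the condition
# [alpha(b_i), b_j] + [b_i, alpha(b_j)] = 0 gives one linear equation
# per triple (i, j, k) in the n^2 unknowns alpha[m, i]
rows = []
for i, j, k in product(range(n), repeat=3):
    row = np.zeros((n, n))
    for m in range(n):
        row[m, i] += brk[m, j, k]   # from [alpha(b_i), b_j]
        row[m, j] += brk[i, m, k]   # from [b_i, alpha(b_j)]
    rows.append(row.ravel())

A = np.array(rows)
dim_kernel = A.shape[1] - np.linalg.matrix_rank(A)
assert dim_kernel == 3   # A(B(sl(2,R))) is three dimensional
```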

Sources

[1] P.J. Olver: Applications of Lie Groups to Differential Equations

https://www.amazon.com/Applications-Differential-Equations-Graduate-Mathematics/dp/0387950001

[2] V.S. Varadarajan: Lie Groups, Lie Algebras, and Their Representation

https://www.amazon.com/Groups-Algebras-Representation-Graduate-Mathematics/dp/0387909699/

[3] H. Holmann, H. Rummler: Alternierende Differentialformen

https://www.amazon.com/Alternierende-Differentialformen-German-Holmann/dp/3860258613

[4] H. Kraft: Geometrische Methoden in der Invariantentheorie

https://www.amazon.com/Geometrische-Methoden-Invariantentheorie-Aspects-Mathematics/dp/3528085258/

[5] D. Vogan: Classical Groups

http://www-math.mit.edu/~dav/classicalgroups.pdf

[6] J.E. Humphreys: Introduction to Lie Algebras and Representation Theory

https://www.amazon.com/Introduction-Algebras-Representation-Graduate-Mathematics/dp/0387900535/

[7] C. Blair: Representations of ##\mathfrak{su}(2)##

http://www.maths.tcd.ie/~cblair/notes/su2.pdf

[8] D.W. Lyons: An Elementary Introduction to the Hopf Fibration

https://nilesjohnson.net/hopf-articles/Lyons_Elem-intro-Hopf-fibration.pdf

[9] R. Feres: Math 407 – Homework Set 6 – Solutions

http://www.math.wustl.edu/~feres/Math407SP15/Math407SP15HW06Sol.pdf

[10] L. Connellan: Spheres, Hyperspheres and Quaternions

https://www2.warwick.ac.uk/fac/sci/masdoc/people/studentpages/students2014/connellan/spheres.pdf

[11] T. Brzezinski, L. Dabrowski, B. Zielinski: Hopf fibration and monopole connection over the contact quantum spheres

https://arxiv.org/pdf/math/0301123.pdf

[12] T. Neukirchner: Veranschaulichung der Hopf-Faserung $\mathbb{S}^3 / \mathbb{S}^1 \simeq \mathbb{S}^2$

https://www.math.hu-berlin.de/~neukirch/Hopf-Faserung.pdf

[13] I. Markina: Principle bundles and Hopf map for spheres

http://www.uni-math.gwdg.de/amp/Markina3.pdf

[14] K. Schöbel: Faserbündel (Vorlesung)

http://users.minet.uni-jena.de/~schoebel/Faserbündel.pdf

[15] Jean Dieudonné: Geschichte der Mathematik 1700-1900, Vieweg Verlag 1985

[16] Representations and Why Precision is Important

https://www.physicsforums.com/insights/representations-precision-important/

[17] The Pantheon of Derivatives – Part II

https://www.physicsforums.com/insights/pantheon-derivatives-part-ii/

[18] The Pantheon of Derivatives – Part III

https://www.physicsforums.com/insights/pantheon-derivatives-part-iii/

[19] The Pantheon of Derivatives – Part IV

https://www.physicsforums.com/insights/pantheon-derivatives-part-iv/

[20] The nLab

https://ncatlab.org/nlab/show/HomePage

[21] Wikipedia (English)

https://en.wikipedia.org/wiki/Main_Page

[22] Wikipedia (Deutsch)

https://de.wikipedia.org/wiki/Wikipedia:Hauptseite

[23] Niles Johnson (image source)

https://nilesjohnson.net/

https://commons.wikimedia.org/wiki/File%3AHopf_Fibration.png

[24] H.F. de Groote: Lectures on the Complexity of Bilinear Problems

https://www.amazon.com/Lectures-Complexity-Bilinear-Problems-Jan-1987/dp/B010BDZWVC

 

 

3 replies
  1. dextercioby says:

    It would have helped to describe in maximum 2 paragraphs the connection between a local Lie group and a global Lie group and from here the connection between the notions of globally isomorphic Lie groups and locally isomorphic Lie groups. Physicists usually gloss over these important definitions and theorems.
    :)

  2. fresh_42 says:

    Thank you for the detailed review, @lavinia.

    You are absolutely right, that the initial example and the spheres feel distracting. It disturbed me, too. The reason is, that I originally wanted to focus on vector fields instead of the group. I began by noting, that there is this general vision of vectors attached to points on one hand and the abstract formulas on the other. I thought some examples with actual curves (flows, 1-parameter groups), groups and specific functions would be helpful, as they are often banned to exercises or get lost in the "bigger" theory. That's where those two paragraphs came from. As I looked closer into the example of SU(2) I got more and more involved with it instead of my original purpose vector fields.

    So the actual distraction had been SU(2). To be honest, I wanted to understand connections better, esp. Ehresmann and Levi-Civita, and hope to deal with them (on the example of SU(2) again) in a third part. So the two parts so far are more of a "what has happened before" part of the story. But the more I've read about SU(2), the more I found it interesting. I kept the distracting parts, as I recognized that they are a good source to quote or copy & paste from in answers on PF. Up to now, I used the various notations of derivatives as well as the stereographic projection in an answer to a thread here. And as one-parameter groups are essential to the theory, I kept this part. And why not have a list of spheres of small dimensions, when one of them is meant to be the primary example of actual calculations? That's basically the reason for the felt (by you and by me) inhomogeneous structure and why the article is a bit of a collection of formulas.

    So thanks, again, and I'll see if I can add a couple of explanations which you suggested.

  3. lavinia says:

    These notes would be helpful for a student who is learning about Lie groups because it works through a specific example – the example of ##SU(2,C)##.

    The student would have to master the Lie group technology in a different place.

    I especially like the way the Hopf fibration is worked out.

    The introductory section on spheres is not specific to ##SU(2,C)## so for me personally it was distracting. I also found the initial example of a local Lie group distracting.

    The first paragraph of Part 1 says that it hopes to pique interest in Lie group mathematics. For this, some comment on why representations are important/ interesting – in mathematics – would have helped.

    There is a lot of calculation here and some people might like an intuitive beacon to light the way through the journey.
