# I Adjoint representation and the generators

1. Apr 14, 2016

### CAF123

Given that $g T_a g^{-1} = D^b_a T_b$, one can show that the generators of the adjoint representation of a group $G$ are given by the structure constants of the Lie algebra satisfied by the $T_a$.

Write $g$ infinitesimally, so that $g = 1 + \mathrm {i} \alpha^a T_a$ and $D^c_a = \delta^c_a + i \alpha^b (A_b)^c_a\,\,\,(*)$. Then, using $[T_b, T_a] = i c^c_{ba} T_c$, $$gT_ag^{-1} = T_a + i \alpha^b (T_b T_a-T_aT_b) = T_c (\delta^c_a + i \alpha^b\, i c^c_{ba}).$$ Comparing with $(*)$ we see that $(A_b)^c_a = i c^c_{ba}$, as required.
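This derivation can be checked numerically for $SU(2)$, taking $T_a = \tau_a/2$ so that the structure constants are $\epsilon_{abc}$ (a numpy sketch; the particular group element used is an arbitrary illustration):

```python
import numpy as np

# Pauli matrices; fundamental generators T_a = tau_a / 2 satisfy [T_a, T_b] = i eps_{abc} T_c
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
T = tau / 2

# Levi-Civita symbol: the su(2) structure constants
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1

for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        assert np.allclose(comm, 1j * np.einsum('c,cij->ij', eps[a, b], T))

# a finite g = exp(i alpha n.tau/2), written in closed form (n is a unit vector)
alpha, n = 0.7, np.array([0.36, 0.48, 0.8])
g = np.cos(alpha / 2) * np.eye(2) + 1j * np.sin(alpha / 2) * np.einsum('a,aij->ij', n, tau)

# extract D_a^b from g T_a g^{-1} = D_a^b T_b, using tr(T_a T_b) = delta_{ab}/2
D = np.array([[2 * np.trace(g @ T[a] @ g.conj().T @ T[b]).real
               for b in range(3)] for a in range(3)])

for a in range(3):
    assert np.allclose(g @ T[a] @ g.conj().T, np.einsum('b,bij->ij', D[a], T))
assert np.allclose(D @ D.T, np.eye(3))   # D is a real orthogonal (rotation) matrix
```

The last assertion shows concretely that the adjoint matrix $D$ is real even though $g$ and the $T_a$ are complex.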

While the manipulations are clear, I am a bit confused as to what all the components mean -

1) Is $g$ here a representation of a group element $g \in G$, or a group element itself? Since it is expanded in terms of the Lie algebra, I think it could be either, with the generators $T_a$ taken in the appropriate representation?

2) $D^c_a$ is the adjoint representation of a group element, so for $SU(N)$, for example, it would be an $(N^2-1) \times (N^2-1)$ matrix. On the rhs of my equation, $g T_a g^{-1} = D^b_a T_b$, this acts on $T_b$, so would this not mean that $T_b$ is also in the adjoint representation to make the matrix multiplication make sense, and therefore that $g$ itself is also in the adjoint representation of $G$?

What is the fault in this reasoning?

Thanks!

2. Apr 14, 2016

### samalkhaiat

In general, we take $g$ to be an arbitrary group element. But you can also choose it in some representation.

Take the matrix element of both sides in some matrix representation:
$$(gT_{a}g^{-1})_{ij} = D_{a}{}^{b} (T_{b})_{ij}$$

3. Apr 15, 2016

### CAF123

Hi Sam,
I see. For groups whose elements are matrices (e.g. $O(N)$, $SO(N)$, $SU(N)$), can these matrices correspond identically to the representation of the group elements themselves? I suppose by definition of a representation they can.

Ah right - so on the rhs the contraction is over the index $b$, which always makes sense because $D_a^b$ labels the components of the $\text{dim}\, G \times \text{dim}\, G$ matrices of the adjoint representation, and $b$ runs over the generators, of which there are $\text{dim}\, G$.

4. Apr 15, 2016

### samalkhaiat

If there exists a homomorphism from an abstract Lie group $G$ onto a group of $n \times n$ matrices $M(g)$, then this group of matrices forms an $n$-dimensional matrix representation of $G$. The group composition law is then realized by matrix multiplication: $M(g)M(h) = M(g \cdot h)$.
If, however, the mapping $g \to M(g)$ is 1-to-1 (i.e., an isomorphism), then the matrix representation is faithful, and we speak of a matrix Lie group. In this case, you can forget about the abstract nature of the group and its elements.
For Lie algebras, you are in better shape because every finite-dimensional Lie algebra has a faithful matrix representation. So, apart from global topological issues, a matrix Lie group can be derived by exponentiating the matrix representation of its Lie algebra.
It is better to understand where the matrix $D(g)$ came from and what it does.
Any group has a natural action on itself by conjugation. The conjugation of the group $G$ by a single element $g$ is the map
$$h \to \bar{h} = \pi_{g}(h) = g \cdot h \cdot g^{-1}, \ \ \ \forall h \in G .$$ Clearly, conjugation preserves the group composition law
$$\pi_{g}(h_{1} \cdot h_{2}) = \pi_{g}(h_{1}) \cdot \pi_{g}(h_{2}) .$$
Lie groups are manifolds with local coordinates (the group parameters). In a connected Lie group, the group elements are represented by the curve $g(\alpha)$ with $g(0) = e$ being the origin of the coordinates. For such groups the operation of conjugation preserves the neighbourhood of the identity: $h(\alpha) \to e$ implies, by continuity, that $g h(\alpha) g^{-1} \to e$. This means that, close to the origin, conjugation corresponds to a linear transformation $\bar{\epsilon}^{a} = \epsilon^{b}D_{b}{}^{a}(g)$ in the parameter space: $$h(\epsilon) \to \bar{h}(\bar{\epsilon}) = h \left( D(g) \epsilon \right) .$$
Now, if we use the exponential map $h(\alpha) = \exp (i \alpha^{a} T_{a})$ and the identity
$$g \ e^{X} \ g^{-1} = e^{g X g^{-1}},$$ we find $$\exp \left( i \epsilon^{a} D_{a}{}^{b}(g) T_{b}\right) = \exp \left( i \epsilon^{a} g T_{a} g^{-1}\right) .$$ Clearly, this leads to $$g \ T_{a} \ g^{-1} = D_{a}{}^{b}(g) \ T_{b} . \ \ \ \ \ (1)$$ Using this you can easily show that $D(g)$ is a representation of $G$: $$D_{a}{}^{b}(e) = \delta_{a}^{b} , \ \ \ D_{a}{}^{b}(g_{1}g_{2}) = D_{a}{}^{c}(g_{2}) \ D_{c}{}^{b}(g_{1}) . \ \ \ \ (2)$$ (Note that, with this placement of the indices, the $D$ matrices compose in reversed order; equivalently, $g \to D^{T}(g)$ is a matrix homomorphism, i.e., $D$ acts naturally on the row vector $\epsilon^{a}$.) Thus, as is clear from (2) and (1), the map $g \to D(g) \equiv \mbox{Ad}(g)$ gives a representation of $G$ on its Lie algebra $\mathcal{L}(G) \cong T_{e}G$.
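Properties (1) and (2) can be illustrated numerically for $SU(2)$ (a numpy sketch; the two group elements are arbitrary choices). Note that, with the row index placed as in (1), the extracted matrices compose in reversed order, $D(g_1 g_2) = D(g_2)\,D(g_1)$:

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
T = tau / 2

def su2(alpha, n):
    """g = exp(i alpha n.tau/2) via the closed 2x2 form; n is normalized here."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    return np.cos(alpha / 2) * np.eye(2) + 1j * np.sin(alpha / 2) * np.einsum('a,aij->ij', n, tau)

def Ad(g):
    """D_a^b(g) defined by g T_a g^{-1} = D_a^b(g) T_b, using tr(T_a T_b) = delta_{ab}/2."""
    return np.array([[2 * np.trace(g @ T[a] @ g.conj().T @ T[b]).real
                      for b in range(3)] for a in range(3)])

g1, g2 = su2(0.9, [1, 2, 2]), su2(1.3, [0, 3, 4])

assert np.allclose(Ad(np.eye(2)), np.eye(3))          # D(e) = identity
# with row index a, the matrices compose in reversed order:
assert np.allclose(Ad(g1 @ g2), Ad(g2) @ Ad(g1))
# equivalently, g -> D(g)^T is a matrix homomorphism:
assert np.allclose(Ad(g1 @ g2).T, Ad(g1).T @ Ad(g2).T)
```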

5. Apr 18, 2016

### CAF123

Ok thanks for the explanation!

While we are discussing the adjoint representation, I posted another thread in the homework help forums which received no attention: https://www.physicsforums.com/threa...trix-with-pauli-matrices.866523/#post-5445090

I have however made more progress with it - but again I just wanted to clarify a few conceptual things. Given the transformation for $\vec \phi = (\phi_1, \phi_2, \phi_3)$, we see that it transforms under the adjoint representation. Does this tell us anything about what the $\phi_i$ are? The question says that $\sigma$ is hermitian and traceless - the Pauli matrices satisfy these criteria, but what about $\phi$? What happens if the $\phi_i$ are complex?

Also, in showing that $\text{det}\, \sigma' = \text{det}\, \sigma$, I wrote $$\text{det} \sigma' = \text{det} \sigma - \text{det} (\vec \alpha \cdot ( \vec \tau \times \vec \phi)).$$ Why is it obvious that the last term is zero?

Thanks!

6. Apr 18, 2016

### samalkhaiat

Maybe the question was too easy for homework helpers!! Anyway, you should have no problem solving it. You have 3 fields $\varphi^{a}$ transforming in the adjoint representation $$\bar{\varphi}^{a} = \varphi^{b} D_{b}{}^{a}(g(\alpha)) .$$ Contracting both sides with $\tau_{a}$ gives you $$\bar{\varphi}^{a} \tau_{a}= \varphi^{b} \ D_{b}{}^{a} (g(\alpha)) \tau_{a} .$$ Now, if you use the definition of the adjoint representation (which we have been talking about in this thread)
$$U(g) \ \tau_{b} \ U^{-1}(g) = D_{b}{}^{a}(g(\alpha) ) \ \tau_{a} ,$$ you obtain $$\bar{\varphi}^{a}\tau_{a} = \varphi^{b} \left( U \tau_{b} U^{-1} \right) = U \left( \varphi^{b}\tau_{b}\right) U^{-1} .$$ This tells you that the (hermitian) matrix $\Sigma \equiv \varphi^{a}\tau_{a}$ also transforms in the adjoint representation $$\bar{\Sigma} = U(g) \ \Sigma \ U^{\dagger}(g) , \ \ \ \ U^{-1} = U^{\dagger} .$$

Yes, it tells you that $\varphi_{a}$ represents 3 real fields.

If $\varphi_{a}$ were complex fields, the matrix $\Sigma = \vec{\varphi} \cdot \vec{\tau}$ would not be hermitian, and 3 complex fields are far too many for the adjoint representation of $SU(2)$. The appropriate fields for the adjoint representation of $SU(n)$ are either $n^{2}-1$ real fields or $n \times n$ hermitian-matrix-field. This should be clear to you from $\{n\} \otimes \{n\}^{*} = \{n^{2} - 1\} \oplus \{1\} \Rightarrow \{n^{2}-1\}^{*} = \{n^{2}-1\}$.
The determinant is not a linear map $\mbox{Det} (A + B) \neq \mbox{Det}(A) + \mbox{Det}(B)$. If you use $\mbox{Det}(AB) = \mbox{Det}(A) \ \mbox{Det}(B)$, you get
$$\mbox{Det}(\bar{\Sigma}) = \mbox{Det}(U) \ \mbox{Det}(U^{-1}) \ \mbox{Det}(\Sigma) = \mbox{Det}(\Sigma) .$$ Now, for $SU(2)$ $$- \mbox{Det}(\Sigma) = \vec{\varphi} \cdot \vec{\varphi} = (\varphi^{1})^{2} + (\varphi^{2})^{2} + (\varphi^{3})^{2} .$$ Thus, the invariance of the determinant means that the transformation $\varphi^{a} \to \bar{\varphi}^{a}$ leaves the length $|\vec{\varphi}|$ unchanged. Thus, it must be a rotation. This is of course expected from the relation $SO(3) \cong SU(2) / Z_{2}$, which means that there is a one-to-one correspondence between the adjoint representation of $SU(2)$ and the fundamental vector representation of $SO(3)$.
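Both statements are easy to verify numerically (a numpy sketch; the field values and the $SU(2)$ element are arbitrary illustrations):

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

phi = np.array([0.3, -1.2, 0.5])               # three real fields (arbitrary values)
Sigma = np.einsum('a,aij->ij', phi, tau)       # Sigma = phi . tau: hermitian, traceless

# -Det(Sigma) = phi . phi
assert np.isclose(-np.linalg.det(Sigma).real, phi @ phi)

# conjugate by an arbitrary U in SU(2): the determinant, hence |phi|, is unchanged
theta, n = 1.1, np.array([2.0, 1.0, 2.0]) / 3.0
U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * np.einsum('a,aij->ij', n, tau)
Sigma_bar = U @ Sigma @ U.conj().T

assert np.isclose(np.linalg.det(Sigma_bar), np.linalg.det(Sigma))

# Sigma_bar is again phi_bar . tau with phi_bar real and |phi_bar| = |phi|: a rotation
phi_bar = np.array([0.5 * np.trace(Sigma_bar @ tau[a]) for a in range(3)])
assert np.allclose(phi_bar.imag, 0)
assert np.isclose(phi_bar.real @ phi_bar.real, phi @ phi)
```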

7. Apr 19, 2016

### CAF123

There is an SU(2) triplet consisting of the pi mesons that transforms under the adjoint representation, namely $(\pi^+, \pi^0, \pi^-)$. The components are all scalars, but $\pi^+$ and $\pi^-$ are complex scalars - so the triplet consists of two complex fields and one real one. Under hermitian conjugation, is it right to say that this triplet transforms from $(\pi^+, \pi^0, \pi^-) \rightarrow (\pi^-, \pi^0, \pi^+)^T$, which consists of the same components but with a different placing of them within the multiplet?

So this shows that the adjoint representation is always real - but in showing this, did you assume, say for $n=3$, that $(\mathbf 3 \otimes \mathbf 3^*)^* = (\mathbf 3^* \otimes \mathbf 3) \overset{!}{=} \mathbf 3 \otimes \mathbf 3^*$?

I see, I think I proved the latter statement by writing out the infinitesimal form of the right hand side of $U \Sigma U^{\dagger}$ and seeing that it implies that the difference in $\phi$ and $\phi'$ is a cross product which geometrically represents an infinitesimal rotation.

Thanks!

8. Apr 19, 2016

### samalkhaiat

Let me repeat what I said. Fields transforming in the adjoint representation of $SU(n)$ consist of either $n^{2}-1$ real fields, $\varphi^{a}$, or (equivalently) $n \times n$ traceless hermitian matrix formed from them, $T_{a}\varphi^{a}$. This should make it clear to you that the Pions are just the normalized matrix elements of $\Sigma$: $\Pi_{\alpha}{}^{\beta} \equiv \frac{1}{\sqrt{2}} \Sigma_{\alpha}{}^{\beta} = \frac{1}{\sqrt{2}} (\tau_{a})_{\alpha}{}^{\beta} \varphi^{a}$. Explicitly,
$$\Pi_{\alpha}{}^{\beta} = \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi^{3} & \varphi^{1} - i \varphi^{2} \\ \varphi^{1} + i \varphi^{2} & - \varphi^{3} \end{pmatrix} \equiv \begin{pmatrix} \pi^{0}/ \sqrt{2} & \pi^{+} \\ \pi^{-} & - \pi^{0}/ \sqrt{2} \end{pmatrix} ,$$ with $$\Pi_{\alpha}{}^{\beta} \Pi_{\beta}{}^{\alpha} = \varphi^{a}\varphi_{a} , \ \ a = 1,2,3 . \ \ \alpha , \beta = 1,2 .$$ And, in terms of the quark doublet $q_{\alpha} \sim [2]$: $q_{1} = u$, $q_{2}=d$, you can write
$$\Pi_{\alpha}{}^{\beta} = q_{\alpha}\bar{q}^{\beta} - \frac{1}{2}\delta^{\beta}_{\alpha} (q_{\eta}\bar{q}^{\eta}) .$$ And, furthermore, if you introduce the ladder matrices, $$I^{\pm} = \frac{1}{2} (\tau_{1} \pm i \tau_{2}) , \ \ \ I^{3} = \frac{1}{\sqrt{2}} \tau_{3} ,$$ you can rewrite the Pion matrix as $\Pi = I^{+}\pi^{+} + I^{3}\pi^{0} + I^{-}\pi^{-}$. You should familiarise yourself with these expressions.
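These expressions can be verified numerically (a numpy sketch; the field values are arbitrary, and $I^{\pm} = (\tau_1 \pm i\tau_2)/2$ as above):

```python
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

phi = np.array([0.3, -1.2, 0.5])               # arbitrary real field values
pi_p = (phi[0] - 1j * phi[1]) / np.sqrt(2)     # pi+
pi_m = (phi[0] + 1j * phi[1]) / np.sqrt(2)     # pi-
pi_0 = phi[2]                                  # pi0

# ladder matrices I^± = (tau1 ± i tau2)/2 and I^3 = tau3/sqrt(2)
I_p, I_m, I_3 = (tau1 + 1j * tau2) / 2, (tau1 - 1j * tau2) / 2, tau3 / np.sqrt(2)

Pi = I_p * pi_p + I_3 * pi_0 + I_m * pi_m
Pi_explicit = np.array([[pi_0 / np.sqrt(2), pi_p],
                        [pi_m, -pi_0 / np.sqrt(2)]])
assert np.allclose(Pi, Pi_explicit)

# Pi is the normalized Sigma, and tr(Pi Pi) = phi . phi
Sigma = phi[0] * tau1 + phi[1] * tau2 + phi[2] * tau3
assert np.allclose(Pi, Sigma / np.sqrt(2))
assert np.isclose(np.trace(Pi @ Pi).real, phi @ phi)
```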
Yes, it is. If $R$ is a real representation, what does that imply for the corresponding generators $T^{(R)}_{a}$?
Why is there a question mark? That is an equation between vector spaces.

9. Apr 20, 2016

### CAF123

I see, thanks. Why is it that fields transforming in the adjoint representation of $SU(n)$ consist of either $n^{2}-1$ real fields $\varphi^{a}$, or (equivalently) an $n \times n$ traceless hermitian matrix formed from them? Why are these equivalent, for example?

What I'm also not understanding at the moment is the difference between the SU(2) isospin triplet $(\pi^+, \pi^0, \pi^-)$ and the matrix $\Pi$ written out explicitly with the pi's as the matrix elements. From what I understood before, this triplet transforms under the $\mathbf 3$ rep of SU(2) (i.e. the adjoint representation) because the triplet could be thought of as a 3x1 vector acted on naturally by a 3x3 adjoint representation matrix? But the triplet does not consist of 3 real fields $(\phi_1, \phi_2, \phi_3)$ in the case of the pions; instead the fields $\phi_i$ can form a 2x2 matrix. Is it obvious that $T_{a}\varphi^{a}$ had to give this 2x2 matrix form identifiable with the pions? I guess I'm just trying to connect it all together.

Let $R$ be a real representation. Then, infinitesimally, $R = \text{Id} + i\alpha^a T^{(R)}_a$. $R$ is real, so it is equal to its conjugate, therefore $$(1+i\alpha^a T^{(R)}_a) = (1-i\alpha^a T^{(R)\,*}_a).$$ This implies $T^{(R)\,*}_a = -T^{(R)}_a$.
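This reality condition can be seen concretely in the adjoint representation of $su(2)$, where $(T_a)_{bc} = -i\epsilon_{abc}$: the generators are purely imaginary, so the exponentiated group elements are real matrices (a numpy sketch; the parameters $\alpha^a$ are arbitrary):

```python
import numpy as np

# adjoint generators of su(2): (T_a)_{bc} = -i eps_{abc}, purely imaginary matrices
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1
T = -1j * eps

# the reality condition T_a* = -T_a derived above
assert np.allclose(np.conj(T), -T)

# consequently exp(i alpha^a T_a) is a real (orthogonal) matrix
alpha = np.array([0.4, -0.2, 0.9])
X = 1j * np.einsum('a,abc->bc', alpha, T)      # i alpha.T is real and antisymmetric
g, term = np.eye(3, dtype=complex), np.eye(3, dtype=complex)
for k in range(1, 30):                         # plain Taylor series for exp(X)
    term = term @ X / k
    g += term

assert np.allclose(g.imag, 0)                  # real group element
assert np.allclose(g @ g.T, np.eye(3))         # and orthogonal: a rotation
```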

10. Apr 20, 2016

### samalkhaiat

Your question contains the answer. So, I’m going to repeat, for the third time, what I have said: The adjoint representation is a vector space whose real dimension is $n^{2}-1$. So, in the canonical (Cartesian) bases, you need $n^{2}-1$ real coordinates to specify the elements of this vector space. This can be achieved either by an iso-vector $\vec{\varphi}$ with $n^{2}-1$ real (Cartesian) components, or equivalently by an $n \times n$ traceless hermitian matrix. How many independent real parameters does such a matrix have?
Your homework question was meant to teach you this fact. They both transform in the adjoint representation. They are, therefore, members of the same vector space. And they both contain $n^{2}-1$ real independent parameters.
Didn’t I write $\Pi = I^{+} \pi^{+} + I^{3}\pi^{0} + I^{-}\pi^{-}$ for you? This tells you that $\pi^{\pm , 0}$ are the components of the adjoint “vector” $\Pi$ in the complexified bases $I^{\pm , 3}$.
Similar to the above $\Pi = I^{+} \pi^{+} + I^{3}\pi^{0} + I^{-}\pi^{-}$, you can think of $\pi^{\pm , 0}$ as being the spherical (harmonic) coordinates of the iso-vector $\vec{\varphi}$ in the bases $$I_{x} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} , \ \ I_{y} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix} ,$$ and $$I_{z} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} , \ \ \ I_{x}^{2} + I_{y}^{2} + I_{z}^{2} = 1(1+1) \ \mbox{id} ,$$ while $\varphi_{a} , \ a = 1,2,3$ are the Cartesian coordinates of the iso-vector $\vec{\varphi}$ in the canonical bases $(T_{a})_{bc} = - i \epsilon_{abc}$: $$T_{1} = i \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} , \ \ T_{2} = i \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} ,$$ and $$T_{3} = i \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} , \ \ \ T_{1}^{2} + T_{2}^{2} + T_{3}^{2} = 1(1+1) \ \mbox{id} .$$ Both sets of bases satisfy the Lie algebra of $SU(2)$ in the Iso-spin = 1 representation. So both sets of components must be related by a unitary transformation $$\begin{pmatrix} \pi^{+} \\ \pi^{0} \\ \pi^{-} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -i & 0 \\ 0 & 0 & \sqrt{2} \\ 1 & i & 0 \end{pmatrix} \begin{pmatrix} \varphi^{1} \\ \varphi^{2} \\ \varphi^{3} \end{pmatrix} .$$ Compare this with the relation between the $l = 1$ harmonics $Y_{1m}$ and the Cartesian coordinates in $\mathbb{R}^{3}$
$$\begin{pmatrix} Y_{1,1} \\ Y_{1,0} \\ Y_{1,-1} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -i & 0 \\ 0 & 0 & \sqrt{2} \\ 1 & i & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} .$$
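Both claims - that the two sets of matrices satisfy the isospin-1 algebra with Casimir $1(1+1)$, and that the change of components is unitary - can be checked directly (a numpy sketch using the standard spin-1 forms of $I_{x,y,z}$; the field values are arbitrary):

```python
import numpy as np

# spherical-basis isospin-1 matrices (standard spin-1 forms)
Ix = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2) + 0j
Iy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Iz = np.diag([1.0, 0.0, -1.0]) + 0j

# canonical (Cartesian) basis: (T_a)_{bc} = -i eps_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1
T = -1j * eps

for A1, A2, A3 in [(Ix, Iy, Iz), (T[0], T[1], T[2])]:
    assert np.allclose(A1 @ A2 - A2 @ A1, 1j * A3)                   # [I_1, I_2] = i I_3
    assert np.allclose(A1 @ A1 + A2 @ A2 + A3 @ A3, 2 * np.eye(3))   # Casimir 1(1+1)

# the change from Cartesian phi^a to (pi+, pi0, pi-) is unitary
V = np.array([[1, -1j, 0], [0, 0, np.sqrt(2)], [1, 1j, 0]]) / np.sqrt(2)
assert np.allclose(V @ V.conj().T, np.eye(3))

phi = np.array([0.3, -1.2, 0.5])
pi_p, pi_0, pi_m = V @ phi
assert np.isclose(pi_p, (phi[0] - 1j * phi[1]) / np.sqrt(2))
assert np.isclose(pi_0, phi[2])
```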
Of course we don’t know why nature chose to realize the Pions but not the “Phions”. However, we do know that the reason must be related to the fact that the Cartesian components $\varphi^{a}$ are not eigenstates of the charge operator $Q = e I_{z}$, whereas the Pions are.
Correct!! So you knew that the adjoint representation is a real representation.

11. Apr 21, 2016

### CAF123

Ok, so $$\begin{pmatrix} \phi_1' \\ \phi_2' \\ \phi_3' \end{pmatrix} = \exp(-i \alpha^c T_c) \begin{pmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{pmatrix},$$ so that infinitesimally $$\delta \Phi_a = -i \alpha^c (T_c)_{ab}\Phi_b.$$ Using the transformation provided in your last post, then $$\delta \pi_a = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -i & 0 \\ 0 & 0 & \sqrt{2} \\ 1 & i & 0 \end{pmatrix}_{ab} \delta \Phi_b.$$ I can sub in the expression for $\delta \Phi_b$ to get this 3x3 matrix multiplying a sum of three 3x1 matrices. Using again the definition provided, and inverting it to give $\Phi$ in terms of the pi's, gives the transformation of the triplet of pion fields.
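For example, carrying this chain through for an infinitesimal rotation about the third iso-axis shows that $\pi^\pm$ pick up opposite phases $\mp i\alpha$ while $\pi^0$ is untouched, i.e. the pions are charge eigenstates (a numpy sketch; $\alpha$ and the field values are arbitrary):

```python
import numpy as np

# Cartesian adjoint generators: (T_c)_{ab} = -i eps_{cab}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1
T = -1j * eps

# unitary change of components from (phi1, phi2, phi3) to (pi+, pi0, pi-)
V = np.array([[1, -1j, 0], [0, 0, np.sqrt(2)], [1, 1j, 0]]) / np.sqrt(2)

phi = np.array([0.3, -1.2, 0.5])               # arbitrary Cartesian field values
pi = V @ phi                                   # (pi+, pi0, pi-)

alpha = 0.01                                   # infinitesimal rotation about the 3rd axis
dphi = -1j * alpha * T[2] @ phi                # delta Phi_a = -i alpha (T_3)_{ab} Phi_b
dpi = V @ dphi

# pi± transform with phases ∓ i alpha; pi0 does not transform
assert np.allclose(dpi, [-1j * alpha * pi[0], 0, 1j * alpha * pi[2]])
```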

Is there a relation between the ladder operators $I_{\pm}$ and the $I_{x,y,z}$? The former are 2x2 and the latter 3x3, but I just wondered if they both represent isospin? $I_{z}$ has eigenvalues 1, 0, -1, which correspond to the third components of isospin for an isospin triplet.