Fundamental representation and adjoint representation

  • #1
shinobi20
TL;DR Summary
The fundamental and adjoint are different representations of the algebra, but I would like to see how they're different through an explicit example, say ##SO(4)##. I'm using the group theory book by A. Zee and I'd like to clarify some issues with the concepts through the calculations as presented by the author.
I need some clarifications on the discussion of the adjoint representation in Group Theory by A. Zee, specifically section IV.1 (beware of some minor typos, like negative signs).


An antisymmetric tensor ##T^{ij}## with indices ##i,j = 1, \ldots,N## in the fundamental representation is ##N##-dimensional. On the other hand, we also know that ##T^{ij}## in ##SO(N)## furnishes a ##\frac{1}{2}N(N-1)##-dimensional irreducible representation. In addition, the number of generators in ##SO(N)## is ##\frac{1}{2} N(N-1)## with the generators being ##N##-dimensional in the fundamental representation.

Punchline: We can also regard ##T^{ij}## as an ##N \times N## matrix, which can be written as a linear combination of the generators ##\mathcal{J}_a## with coefficients ##A_a##, where ##a = 1, \ldots, \frac{1}{2}N(N-1)##:

$$\begin{equation}
T^{ij} = \sum_{a=1}^{\frac{1}{2}N(N-1)} A_a \mathcal{J}_a^{ij}\tag{1}
\end{equation}$$

where for the generators ##\mathcal{J}_a^{ij}##, the index ##a## tells us which generator we are talking about and the indices ##(ij)## indicate the matrix element of a given generator.

It's also discussed that the structure constants ##f_{abc}## furnish the adjoint representation, given by

$$\begin{equation}
(J_{a})_{bc} = -i f_{abc}\tag{2}
\end{equation}$$

where ##J = -i \mathcal{J}## in the physicist's notation, i.e. Hermitian generators, so that ##(\mathcal{J}_a)_{bc} = f_{abc}##.

Since ##a,b,c = 1, \ldots, \frac{1}{2}N(N-1)##, the adjoint representation is ##\frac{1}{2}N(N-1)##-dimensional.

Question 1. I want to clarify (due to Zee's wording that "We can also regard ##T^{ij}## as an ##N \times N## matrix ...") whether the point here is to compare and contrast the fundamental representation, which is ##N##-dimensional, VERSUS the adjoint representation, which is ##\frac{1}{2}N(N-1)##-dimensional?

Question 2. An antisymmetric tensor ##T^{ij}## in the fundamental representation is ##N##-dimensional, so it acts on ##N##-dimensional vectors. On the other hand, we can express ##T^{ij}## as a linear combination of ##\mathcal{J}_a##s with ##\frac{1}{2}N(N-1)## coefficients ##A_a## so that we can form an ##\frac{1}{2}N(N-1)##-dimensional vector whose components are the ##A_a##s in the ##\mathcal{J}_a## basis. Is this correct?

Question 3. How do we form this vector with components ##A_a##?

I won't use ##SO(3)## since it can't clearly demonstrate the difference between the fundamental and adjoint: for ##N=3##, ##N = \frac{1}{2}N(N-1)##. So, for ##N=4##,

Fundamental representation
$$T^{ij} = \begin{bmatrix} 0 & T^{12} & T^{13} & T^{14} \\\ x & 0 & T^{23} & T^{24} \\\ x & x & 0 & T^{34} \\\ x & x & x & 0 \end{bmatrix}\tag{3}$$

where I only wrote the ##\frac{1}{2}N(N-1) = 6## independent components. This implies that the vector is ##4##-dimensional in the fundamental representation.

Adjoint representation

$$\begin{equation}
T^{12} = \sum_{a=1}^{6} A_a \mathcal{J}_a^{12} = A_1 \mathcal{J}_1^{12} + A_2 \mathcal{J}_2^{12} + A_3 \mathcal{J}_3^{12} + A_4 \mathcal{J}_4^{12} + A_5 \mathcal{J}_5^{12} + A_6 \mathcal{J}_6^{12}\tag{4}
\end{equation}$$

$$\begin{bmatrix} T^{12} \\\ T^{13} \\\ T^{14} \\\ T^{23} \\\ T^{24} \\\ T^{34} \end{bmatrix} = \begin{bmatrix} \mathcal{J}_1^{12} & \mathcal{J}_2^{12} & \mathcal{J}_3^{12} & \mathcal{J}_4^{12} & \mathcal{J}_5^{12} & \mathcal{J}_6^{12} \\\ \mathcal{J}_1^{13} & \mathcal{J}_2^{13} & \mathcal{J}_3^{13} & \mathcal{J}_4^{13} & \mathcal{J}_5^{13} & \mathcal{J}_6^{13} \\\ \mathcal{J}_1^{14} & \mathcal{J}_2^{14} & \mathcal{J}_3^{14} & \mathcal{J}_4^{14} & \mathcal{J}_5^{14} & \mathcal{J}_6^{14} \\\ \mathcal{J}_1^{23} & \mathcal{J}_2^{23} & \mathcal{J}_3^{23} & \mathcal{J}_4^{23} & \mathcal{J}_5^{23} & \mathcal{J}_6^{23} \\\ \mathcal{J}_1^{24} & \mathcal{J}_2^{24} & \mathcal{J}_3^{24} & \mathcal{J}_4^{24} & \mathcal{J}_5^{24} & \mathcal{J}_6^{24} \\\ \mathcal{J}_1^{34} & \mathcal{J}_2^{34} & \mathcal{J}_3^{34} & \mathcal{J}_4^{34} & \mathcal{J}_5^{34} & \mathcal{J}_6^{34} \end{bmatrix} \begin{bmatrix} A_1 \\\ A_2 \\\ A_3 \\\ A_4 \\\ A_5 \\\ A_6 \end{bmatrix}\tag{5}$$

This implies that the vector is ##6##-dimensional in the adjoint representation. Thus, ##T^{ij}## and ##A_a## are the same thing expressed in different bases.

Am I correct that this is what is meant?

Question 4. Regardless of whether the construction in question 3 is correct or not, there seems to be something wrong with how eq. ##(1)## is done for ##N=4##. In eq. ##(1)##, ##\mathcal{J}_a^{ij}## has an index ##a## which runs up to ##\frac{1}{2}N(N-1)##, while ##(ij)## run up to ##N##. To interpret the ##6##-dimensional matrix in question 3 as ##(\mathcal{J}_a)_{bc} = f_{abc}##, where ##(bc)## are the indices that indicate the matrix components and act like ##(ij)## in eq. ##(1)##, there would be an issue since ##(bc)## should run up to ##\frac{1}{2}N(N-1)##. Of course, for ##N=3## there's no issue since ##N=\frac{1}{2}N(N-1)##, but that's a coincidence. So it seems like the matrix construction in question 3 may be wrong?

Moderator's Note: This has been moved from 'Linear Algebra' to 'Quantum Mechanics' because:
  • it received no answers in 'LA';
  • it is clearly phrased in the language of physics, and wide parts of it don't make mathematical sense. E.g. it is not clear whether we talk about group representations, in which case tensors don't make much sense, or Lie algebra representations, in which case tensors should be specifically defined to avoid confusion. Either form of representation is a homomorphism in the corresponding category, not a tensor;
  • the user provided a link as a possible answer to that specific use of language, but I had to remove it for copyright reasons.
 
Last edited by a moderator:
  • #2
shinobi20 said:
TL;DR Summary: The fundamental and adjoint are different representations of the algebra, but I would like to see how they're different through an explicit example, say ##SO(4)##. …
Can you clarify what Zee means by the fundamental representation? It looks like the defining representation.
 
  • #3
Yes, fundamental = defining rep.
 
  • #4
jbergman said:
Can you clarify what Zee means by the fundamental representation? It looks like the defining representation.
Fundamental = defining.
haushofer said:
Yes, fundamental = defining rep.
Yes, thanks for the response.
 
  • #5
I think mathematicians would call it "natural or just matrix representation". But that wasn't the issue. Is it about
$$
\operatorname{SO}(n) \stackrel{\text{matrix}}{\longrightarrow} \operatorname{GL}(n,\mathbb{R})
$$
versus
$$
\operatorname{SO}(n) \stackrel{\operatorname{Ad}}{\longrightarrow} \operatorname{GL}(\mathfrak{so}(n))
$$
or about
$$
\mathfrak{so}(n) \stackrel{\text{matrix}}{\longrightarrow} \mathfrak{gl}(n,\mathbb{R})
$$
versus
$$
\mathfrak{so}(n) \stackrel{\mathfrak{ad}}{\longrightarrow} \mathfrak{gl}(\mathfrak{so}(n))
$$
and where are the tensors?
 
  • #6
fresh_42 said:
I think mathematicians would call it "natural or just matrix representation". But that wasn't the issue. Is it about ##\operatorname{SO}(n) \stackrel{\text{matrix}}{\longrightarrow} \operatorname{GL}(n,\mathbb{R})## versus ##\operatorname{SO}(n) \stackrel{\operatorname{Ad}}{\longrightarrow} \operatorname{GL}(\mathfrak{so}(n))##, or about ##\mathfrak{so}(n) \stackrel{\text{matrix}}{\longrightarrow} \mathfrak{gl}(n,\mathbb{R})## versus ##\mathfrak{so}(n) \stackrel{\mathfrak{ad}}{\longrightarrow} \mathfrak{gl}(\mathfrak{so}(n))##, and where are the tensors?
I believe it is the latter two based on his comment about antisymmetry and what I've seen in other physics literature.
 
  • #7
jbergman said:
I believe it is the latter two based on his comment about antisymmetry and what I've seen in other physics literature.
I think that, too, but besides the word generator the Lie algebra never occurred, it was always the group.

Anyway, I admit I did not want to close the huge language gap between the mathematical wording and the physical wording. That would have taken dozens of posts with unclear endings, so I decided it would be easier to let physicists answer the post who are familiar with this linguistic chaos or have the book at home. And as soon as we get into the tensor stuff and these fancy diagrams with the many boxes (sorry I have forgotten the name), and someone drops the word irreps things will get even more complicated. Better to ask someone with the book.
 
  • #8
shinobi20 said:
Fundamental representation
$$T^{ij} = \begin{bmatrix} 0 & T^{12} & T^{13} & T^{14} \\\ x & 0 & T^{23} & T^{24} \\\ x & x & 0 & T^{34} \\\ x & x & x & 0 \end{bmatrix}\tag{3}$$

where I only wrote the ##\frac{1}{2}N(N-1) = 6## independent components. This implies that the vector is ##4##-dimensional in the fundamental representation.

Adjoint representation

$$\begin{equation}
T^{12} = \sum_{a=1}^{6} A_a \mathcal{J}_a^{12} = A_1 \mathcal{J}_1^{12} + A_2 \mathcal{J}_2^{12} + A_3 \mathcal{J}_3^{12} + A_4 \mathcal{J}_4^{12} + A_5 \mathcal{J}_5^{12} + A_6 \mathcal{J}_6^{12}\tag{4}
\end{equation}$$

$$\begin{bmatrix} T^{12} \\\ T^{13} \\\ T^{14} \\\ T^{23} \\\ T^{24} \\\ T^{34} \end{bmatrix} = \begin{bmatrix} \mathcal{J}_1^{12} & \mathcal{J}_2^{12} & \mathcal{J}_3^{12} & \mathcal{J}_4^{12} & \mathcal{J}_5^{12} & \mathcal{J}_6^{12} \\\ \mathcal{J}_1^{13} & \mathcal{J}_2^{13} & \mathcal{J}_3^{13} & \mathcal{J}_4^{13} & \mathcal{J}_5^{13} & \mathcal{J}_6^{13} \\\ \mathcal{J}_1^{14} & \mathcal{J}_2^{14} & \mathcal{J}_3^{14} & \mathcal{J}_4^{14} & \mathcal{J}_5^{14} & \mathcal{J}_6^{14} \\\ \mathcal{J}_1^{23} & \mathcal{J}_2^{23} & \mathcal{J}_3^{23} & \mathcal{J}_4^{23} & \mathcal{J}_5^{23} & \mathcal{J}_6^{23} \\\ \mathcal{J}_1^{24} & \mathcal{J}_2^{24} & \mathcal{J}_3^{24} & \mathcal{J}_4^{24} & \mathcal{J}_5^{24} & \mathcal{J}_6^{24} \\\ \mathcal{J}_1^{34} & \mathcal{J}_2^{34} & \mathcal{J}_3^{34} & \mathcal{J}_4^{34} & \mathcal{J}_5^{34} & \mathcal{J}_6^{34} \end{bmatrix} \begin{bmatrix} A_1 \\\ A_2 \\\ A_3 \\\ A_4 \\\ A_5 \\\ A_6 \end{bmatrix}\tag{5}$$

This implies that the vector is ##6##-dimensional in the adjoint representation. Thus, ##T^{ij}## and ##A_a## are the same thing expressed in different bases.

Am I correct that this is what is meant?
No. I believe you are confused here. The defining representation acts on the vector space ##\mathbb{R}^4## via matrix multiplication.
$$ \begin{align*}
v' &= T v\\
T &= \begin{bmatrix} 0 & T^{12} & T^{13} & T^{14} \\\ x & 0 & T^{23} & T^{24} \\\ x & x & 0 & T^{34} \\\ x & x & x & 0 \end{bmatrix}\tag{3}
\end{align*}
$$
Whereas the adjoint representation acts on the vector space ##\mathfrak{so}(n)##, which for ##n=4## is isomorphic to ##\mathbb{R}^6##, like you wrote. In other words, the adjoint representation acts on the Lie algebra itself, whereas the defining rep doesn't. It is correct that the ##T_{ij}## are the same as the ##A_a##, but in the defining rep ##T_{ij}## is the matrix that acts on a vector in ##\mathbb{R}^4##, whereas ##A_a## is the vector that is acted on by the adjoint representation.
shinobi20 said:
Question 4. Regardless of whether the construction in question 3 is correct or not, there seems to be something wrong with how eq. ##(1)## is done for ##N=4##. In eq. ##(1)##, ##\mathcal{J}_a^{ij}## has an index ##a## which runs up to ##\frac{1}{2}N(N-1)##, while ##(ij)## run up to ##N##. To interpret the ##6##-dimensional matrix in question 3 as ##(\mathcal{J}_a)_{bc} = f_{abc}##, where ##(bc)## are the indices that indicate the matrix components and act like ##(ij)## in eq. ##(1)##, there would be an issue since ##(bc)## should run up to ##\frac{1}{2}N(N-1)##. Of course, for ##N=3## there's no issue since ##N=\frac{1}{2}N(N-1)##, but that's a coincidence. So it seems like the matrix construction in question 3 may be wrong?
No, it is correct. Some of the values of your indices give 0 in the ##T_{ij}##. It is probably more natural to write ##T_{ij} = -T_{ji}##, which makes it much clearer that the ##T_{ii}## components are 0 and that you only need 6 basis matrices for ##N = 4##.

Lastly, I might advise looking at a math text like Hall's on representations, because the physics notation for these things is a bit of a mess, making it more complicated than need be, IMO.
 
  • #9
That's funny. I would consider a math textbook on this stuff like Hall instead of Zee (both of which I have used) "making it more complicated than need be" to a high degree. But then again, I think I'm probably too dumb to understand why mathematicians sometimes are so frantic about physicists' language about math. Or maybe I'm just happy.
 
  • #10
haushofer said:
That's funny. I would consider a math textbook on this stuff like Hall instead of Zee (both of which I have used) "making it more complicated than need be" to a high degree. But then again, I think I'm probably too dumb to understand why mathematicians sometimes are so frantic about physicists' language about math. Or maybe I'm just happy.
To each his own, I guess. I find all the indices a bit of unhelpful clutter, but I'm slightly dyslexic, I think. Describing the adjoint representation in terms of the structure constants seems unnecessarily complicated to me when it is much simpler to describe it in terms of the Lie bracket.
 
  • #11
jbergman said:
To each his own, I guess. I find all the indices a bit of unhelpful clutter, but I'm slightly dyslexic, I think. Describing the adjoint representation in terms of the structure constants seems unnecessarily complicated to me when it is much simpler to describe it in terms of the Lie bracket.
I agree that the adjoint action looks more natural. But you also want to calculate stuff explicitly every now and then :P
 
  • #12
haushofer said:
I agree that the adjoint action looks more natural. But you also want to calculate stuff explicitly every now and then :P
But wouldn't that be quite useless in this case? I mean ##\mathfrak{so}(n)\cong \mathfrak{ad}(\mathfrak{so}(n)),## done.
 
  • #13
fresh_42 said:
But wouldn't that be quite useless in this case? I mean ##\mathfrak{so}(n)\cong \mathfrak{ad}(\mathfrak{so}(n)),## done.
Oh, yes, I meant in general.
 
  • #14
haushofer said:
Oh, yes, I meant in general.
I know what you mean. I have calculated dozens of representations on ##\{\alpha\, : \,[\alpha(X),Y]+[X,\alpha(Y)]=0\}## (for non-semisimple ones) just to find patterns. I have found patterns, but they didn't tell me anything. So calculating examples doesn't necessarily enlighten you.
 
  • #15
Actually, looking at your questions I see some problems that I didn't notice at a glance.

First, I think we need to be a bit more precise. A Lie algebra is a vector space along with a Lie bracket. A Lie algebra rep is a linear function from one Lie algebra to another that respects the Lie bracket (a Lie algebra homomorphism).

So, if we start with the Lie algebra ##\mathfrak{so}(4)##, or the set of antisymmetric matrices ##T^{ij} = -T^{ji}##. Side note, but I believe that should be ##T^i_j##. Then the adjoint rep and the fundamental rep are two different functions on these antisymmetric matrices. I will call these ##ad## and ##f## respectively.

Then ##f## is just the identity map.

$$f(T^{ij})=T^{ij}$$ or in terms of your generators,
$$f\left(\sum_{a=1}^{6} A_a \mathcal{J}_a^{ij}\right)=\sum_{a=1}^{6} A_a \mathcal{J}_a^{ij}$$

Now the adjoint is slightly different. If you are familiar with Lie brackets, then ##ad(X)(Y) = [X,Y] = XY - YX##, so ##ad(X) = [X, \_]##. If we have our generators, we find that ##ad(\mathcal{J}_a)_{bc}=(\mathcal{J}_a)_{bc}##. This means that,

$$
ad(T^{ij}) = ad\left(\sum_{a=1}^{6} A_a \mathcal{J}_a^{ij}\right) =
\begin{bmatrix} A_1\mathcal{J}_1^{12} & A_2\mathcal{J}_2^{12} & A_3\mathcal{J}_3^{12} & A_4\mathcal{J}_4^{12} & A_5\mathcal{J}_5^{12} & A_6\mathcal{J}_6^{12} \\\
A_1\mathcal{J}_1^{13} & A_2\mathcal{J}_2^{13} & A_3\mathcal{J}_3^{13} & A_4\mathcal{J}_4^{13} & A_5\mathcal{J}_5^{13} & A_6\mathcal{J}_6^{13} \\\
A_1\mathcal{J}_1^{14} & A_2\mathcal{J}_2^{14} & A_3\mathcal{J}_3^{14} & A_4\mathcal{J}_4^{14} & A_5\mathcal{J}_5^{14} & A_6\mathcal{J}_6^{14} \\\
A_1\mathcal{J}_1^{23} & A_2\mathcal{J}_2^{23} & A_3\mathcal{J}_3^{23} & A_4\mathcal{J}_4^{23} & A_5\mathcal{J}_5^{23} & A_6\mathcal{J}_6^{23} \\\
A_1\mathcal{J}_1^{24} & A_2\mathcal{J}_2^{24} & A_3\mathcal{J}_3^{24} & A_4\mathcal{J}_4^{24} & A_5\mathcal{J}_5^{24} & A_6\mathcal{J}_6^{24} \\\
A_1\mathcal{J}_1^{34} & A_2\mathcal{J}_2^{34} & A_3\mathcal{J}_3^{34} & A_4\mathcal{J}_4^{34} & A_5\mathcal{J}_5^{34} & A_6\mathcal{J}_6^{34} \end{bmatrix}
$$

The point is that the ##A_a## end up as part of the coefficients of the matrix in the adjoint representation.
 
  • #16
fresh_42 said:
I think mathematicians would call it "natural or just matrix representation". But that wasn't the issue. Is it about ##\operatorname{SO}(n) \stackrel{\text{matrix}}{\longrightarrow} \operatorname{GL}(n,\mathbb{R})## versus ##\operatorname{SO}(n) \stackrel{\operatorname{Ad}}{\longrightarrow} \operatorname{GL}(\mathfrak{so}(n))##, or about ##\mathfrak{so}(n) \stackrel{\text{matrix}}{\longrightarrow} \mathfrak{gl}(n,\mathbb{R})## versus ##\mathfrak{so}(n) \stackrel{\mathfrak{ad}}{\longrightarrow} \mathfrak{gl}(\mathfrak{so}(n))##, and where are the tensors?
I believe it must be the latter, but physics books make it look/feel like it's the first one.
fresh_42 said:
But wouldn't that be quite useless in this case? I mean ##\mathfrak{so}(n)\cong \mathfrak{ad}(\mathfrak{so}(n)),## done.
Can you elaborate ##\mathfrak{so}(n)\cong \mathfrak{ad}(\mathfrak{so}(n))##? I thought as in post #8, ##\mathfrak{so}(6)\cong \mathbb{R}^6##, i.e., the adjoint rep acts on the vector space ##\mathfrak{so}(6)## which is isomorphic to the 6-dimensional vector space ##\mathbb{R}^6##.
 
  • #17
jbergman said:
Actually, looking at your questions I see some problems that I didn't notice at a glance. … Then ##f## is just the identity map … The point is that the ##A_a## end up as part of the coefficients of the matrix in the adjoint representation.
The way you wrote the fundamental rep and the adjoint rep, there will be no difference in their dimensions, i.e., both are 6-dimensional, but the fundamental rep should be ##n##-dimensional, i.e., ##n=4##.
 
  • #18
shinobi20 said:
The way you wrote the fundamental rep and the adjoint rep, there will be no difference in their dimensions, i.e., both are 6-dimensional, but the fundamental rep should be ##n##-dimensional, i.e., ##n=4##.
First, there was an error in how I wrote the matrix for ##ad##. I will post a correction later; working with the structure constants as a matrix is a bit of a pain.

Second, to answer your question: no, they aren't the same. ##f(T^{ij})## is a ##4 \times 4## matrix that acts on ##\mathbb{R}^4##. ##ad(T^{ij})## is a ##6 \times 6## matrix that acts on ##\mathbb{R}^6##. Are you clear that ##T^{ij}## is a ##4 \times 4## matrix?
 
  • #19
shinobi20 said:
I believe it must be the latter, but physics books make it look/feel like it's the first one.

Can you elaborate ##\mathfrak{so}(n)\cong \mathfrak{ad}(\mathfrak{so}(n))##? I thought as in post #8, ##\mathfrak{so}(6)\cong \mathbb{R}^6##, i.e., the adjoint rep acts on the vector space ##\mathfrak{so}(6)## which is isomorphic to the 6-dimensional vector space ##\mathbb{R}^6##.
I do not want to get in the way of @jbergman teaching you. ##\operatorname{dim}\mathfrak{so}(6)=\dfrac{6\cdot(6-1)}{2}=15 > 6##, so it cannot be isomorphic to ##\mathbb{R}^6##. ##\mathfrak{so}(4)## is six-dimensional, so it is isomorphic to ##\mathbb{R}^6##, but only as a vector space, not as a Lie algebra. ##\mathbb{R}^6## is usually not considered to be a Lie algebra. The identification with a subalgebra of a general linear Lie algebra ##\mathfrak{gl}(\mathbb{R}^n)## would be the only way to do it anyway, so why complicate things and not work with this subalgebra, e.g. ##\mathfrak{so}(4)##, in the first place? That is to say: forget such isomorphisms! That's why I wrote ##\mathfrak{gl}(\mathfrak{so}(4))## instead of ##\mathfrak{gl}(\mathbb{R}^6)##, which is the same matrix algebra.

The adjoint representation of Lie algebras is in our case a mapping
$$
\begin{matrix}
\mathfrak{ad}\, &: \,&\mathfrak{so}(n) &\longrightarrow &\mathfrak{gl}(\mathfrak{so}(n))\\
&&X&\longmapsto &(Y\longmapsto [X,Y])
\end{matrix}
$$
Its kernel is the center of the Lie algebra. The center is an ideal. ##\mathfrak{so}(n)## is semisimple for ##n \geq 3## (and simple except for ##n=4##, where ##\mathfrak{so}(4)\cong\mathfrak{so}(3)\oplus\mathfrak{so}(3)##), so its center, which is the kernel of ##\mathfrak{ad}##, is zero. Finally, by the isomorphism theorem we get
$$
\mathfrak{so}(n) \cong \mathfrak{so}(n)/\operatorname{ker}(\mathfrak{ad}) = \mathfrak{so}(n)/\{0\} \cong \mathfrak{ad}(\mathfrak{so}(n)).
$$
 
Last edited:
  • #20
Ok, let's more carefully work through this. First we have the following generators/basis matrices ##X_a## for ##\mathfrak{so}(4)##.

$$
\begin{align*}
X_1 &= \begin{bmatrix}
0 & 1 & 0 & 0 \\\
-1 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0
\end{bmatrix} \\
X_2 &= \begin{bmatrix}
0 & 0 & 1 & 0 \\\
0 & 0 & 0 & 0 \\\
-1 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0
\end{bmatrix} \\
X_3 &= \begin{bmatrix}
0 & 0 & 0 & 1 \\\
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0 \\\
-1 & 0 & 0 & 0
\end{bmatrix} \\
X_4 &= \begin{bmatrix}
0 & 0 & 0 & 0 \\\
0 & 0 & 1 & 0 \\\
0 & -1 & 0 & 0 \\\
0 & 0 & 0 & 0
\end{bmatrix} \\
X_5 &= \begin{bmatrix}
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 1 \\\
0 & 0 & 0 & 0 \\\
0 & -1 & 0 & 0
\end{bmatrix} \\
X_6 &= \begin{bmatrix}
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 1 \\\
0 & 0 & -1 & 0
\end{bmatrix}
\end{align*}
$$

Now we need to calculate the structure constants ##[X_a, X_b] = f_{abc}X_c##. Since the bracket is antisymmetric, we know that ##a=b \rightarrow f_{abc} = 0## and that ##f_{abc} = -f_{bac}##. So we only need to look at the ##\binom{6}{2} = 15## pairs of brackets to get all of the structure constants. We find that only the following are non-zero:

$$
\begin{align*}
[X_1, X_2] &= -X_4 \\
[X_1, X_3]&= -X_5\\
[X_1, X_4]&= X_2\\
[X_1, X_5]&= X_3\\
[X_2, X_3]&= -X_6\\
[X_2, X_4]&= -X_1\\
[X_2, X_6]&= X_3\\
[X_3, X_5]&= -X_1\\
[X_3, X_6]&= -X_2\\
[X_4, X_5]&= -X_6\\
[X_4, X_6]&= X_5\\
[X_5, X_6]&= -X_4
\end{align*}
$$

This means we have the following non-zero structure constants (together with those obtained by antisymmetry in the first two indices):
$$
\begin{align*}
&f_{124} = f_{135} = f_{236}= f_{241} = f_{351} = f_{362} = f_{456} = f_{564} = -1 \\
&f_{142} = f_{153} = f_{263} = f_{465} = 1
\end{align*}
$$

Now we can define our representations. As I stated before, the fundamental rep is ##f(X_a) = X_a## (this ##f## is not the same as our structure constants); for example,
$$
f(X_1) = X_1 =
\begin{bmatrix}
0 & 1 & 0 & 0 \\\
-1 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0
\end{bmatrix}
$$

This extends by linearity of ##f## to an arbitrary linear combination of our basis.

For the adjoint representation, we have ##[ad(X_a)]_b^c = f_{abc}##. Again, as an example, let's look at ##ad(X_1)##.

$$
ad(X_1) = \left(f_{1bc}\right) =
\begin{bmatrix}
f_{111} & f_{121} & f_{131} & f_{141} & f_{151} & f_{161} \\\
f_{112} & f_{122} & f_{132} & f_{142} & f_{152} & f_{162} \\\
f_{113} & f_{123} & f_{133} & f_{143} & f_{153} & f_{163} \\\
f_{114} & f_{124} & f_{134} & f_{144} & f_{154} & f_{164} \\\
f_{115} & f_{125} & f_{135} & f_{145} & f_{155} & f_{165} \\\
f_{116} & f_{126} & f_{136} & f_{146} & f_{156} & f_{166}
\end{bmatrix} =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\\
0 & 0 & 0 & 1 & 0 & 0 \\\
0 & 0 & 0 & 0 & 1 & 0 \\\
0 & -1 & 0 & 0 & 0 & 0 \\\
0 & 0 & -1 & 0 & 0 & 0 \\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
$$

One thing to note: ##\mathfrak{so}(4) \cong \mathfrak{so}(3) \oplus \mathfrak{so}(3)##, i.e. it is the direct sum of two copies of ##\mathfrak{so}(3)##, so the block structure of the answer isn't surprising. Hope that helps.
 
  • #21
jbergman said:
Ok, let's more carefully work through this. First we have the following generators/basis matrices ##X_a## for ##\mathfrak{so}(4)## … One thing to note: ##\mathfrak{so}(4) \cong \mathfrak{so}(3) \oplus \mathfrak{so}(3)## … Hope that helps.
I see, great response. So basically, we can construct the fundamental representation of the ##\mathfrak{so}(4)## algebra, which is 4-dimensional; on the other hand, we can also construct an alternate representation, called the adjoint representation, which is 6-dimensional.

However, I still do not see the connection of what Zee wrote,

$$T^{ij} = \sum_{a=1}^{\frac{1}{2}N(N-1)} A_a \mathcal{J}_a^{ij}$$

to the adjoint representation. Specifically, what's his point in writing this down? It looks to me like it's just a change of basis of the tensor components, but I don't see its connection with the adjoint representation.

To construct the adjoint representation, we just need to form the Lie algebra and see what its structure constants are; then, by identifying the generator ##\mathcal{J}_a## with the structure constant ##f_{abc}##, we can build the matrices from there, i.e. ##(\mathcal{J}_a)_{bc} = f_{abc}##.
 
  • #22
shinobi20 said:
I see, great response. … However, I still do not see the connection of what Zee wrote, $$T^{ij} = \sum_{a=1}^{\frac{1}{2}N(N-1)} A_a \mathcal{J}_a^{ij},$$ to the adjoint representation. Specifically, what's his point in writing this down?
I don't have a copy so I can't answer definitively what Zee was referring to. Do you have a screenshot? We do have 6 generators for ##\mathfrak{so}(4)##. So, as was said before, we can think of an arbitrary element of it as a linear combination of these generators, and in turn we can think of the adjoint representation as acting on ##\mathfrak{so}(4)##, since it is isomorphic to ##\mathbb{R}^6##. I am assuming it is related to these facts.
 
  • #23
shinobi20 said:
I still do not see the connection of what Zee wrote,

$$T^{ij} = \sum_{a=1}^{\frac{1}{2}N(N-1)} A_a \mathcal{J}_a^{ij}$$

to the adjoint representation.
Exercise: Consider the transformation of your tensor ##T## under the rotation ##R \in SO(n)##: $$T \to \bar{T} = RTR^{-1}.$$ Now consider the infinitesimal version of the transformation by writing $$R = 1 + \omega \cdot \mathcal{J},$$ where ##\omega \cdot \mathcal{J} = \omega^{a}\mathcal{J}_{a}##, ##a = 1, \ldots, \frac{1}{2}n(n-1)## is the adjoint index, and ##\mathcal{J}_{a}## are the generators (basis) of the vector ( = fundamental = defining) representation.

Show that the infinitesimal change in the tensor ##T## is given by: $$\delta T = [\omega \cdot \mathcal{J} , T].$$ Next, write ##T = A \cdot \mathcal{J} \equiv A^{b}\mathcal{J}_{b}## and show that ##A## transforms as a vector in the adjoint representation: $$\delta A^{b} = \left(\omega \cdot \mathcal{J} \right)^{b}{}_{c} A^{c}.$$ Use ##\left(\mathcal{J}_{a}\right)^{b}{}_{c} = C_{abc}## and ##[\mathcal{J}_{a} , \mathcal{J}_{b}] = - C_{abc} \mathcal{J}_{c}##.
 
  • #24
jbergman said:
I don't have a copy so I can't answer definitively what Zee was referring to. Do you have a screenshot? …
[Attached: four page scans of Zee's Group Theory, section IV.1 (pp. 200–201)]

Some errata:
p. 200: (1) each of which is a ##N \times N## matrix. (2) ##T' \approx (I + \theta_a \mathcal{J}_a) T (I - \theta_a \mathcal{J}_a)## (3) ##\delta T = \theta_a [\mathcal{J}_a, T] = \theta_a [\mathcal{J}_a, A_b \mathcal{J}_b] = \theta_a A_b [\mathcal{J}_a, \mathcal{J}_b]## (4) ##[\mathcal{J}_a, \mathcal{J}_b] = -f_{abc} \mathcal{J}_c##
p. 201: (1) ##\delta T = -\theta_a A_b f_{abc} \mathcal{J}_c## (2) ##\delta A_c = -f_{abc} \theta_a A_b##
 
Last edited:
  • #25
samalkhaiat said:
Exercise: Consider the transformation of your tensor ##T## under the rotation ##R \in SO(n)##: ##T \to \bar{T} = RTR^{-1}## … Use ##(\mathcal{J}_{a})^{b}{}_{c} = C_{abc}## and ##[\mathcal{J}_{a}, \mathcal{J}_{b}] = -C_{abc}\mathcal{J}_{c}##.
I'm sorry, I understand the calculation from the first step to the last step, but what I do not get is the main point of this and the big picture.

Let me summarize what I do understand.
  • Scalar, Vector (Fundamental), and Tensor Representations - given a Lie group ##SO(N)##, the scalar furnishes a ##1##-dimensional representation, the vector furnishes an ##N##-dimensional (fundamental) representation, and the tensor furnishes an ##N^2##-dimensional representation. However, the tensor representation is reducible, decomposing into the trace, antisymmetric, and symmetric-traceless components. For ##SO(3)##, the tensor representation is ##9##-dimensional and decomposes into a ##1##-dimensional trace, a ##3##-dimensional antisymmetric, and a ##5##-dimensional symmetric-traceless part.
  • Adjoint representation - given a Lie algebra ##\mathfrak{so}(N)##, we can take the commutator of its generators ##\mathcal{J}_a## such that ##[\mathcal{J}_a, \mathcal{J}_b] = - f_{abc} \mathcal{J}_c##, where ##f_{abc}## are the structure constants. Since the structure constants appear in the commutator, they can themselves be assembled into matrices, ##(\mathcal{J}_a)_{bc} = f_{abc}##. In this sense, the structure constants furnish a representation of the Lie algebra called the adjoint representation.
This is all good, we talk about representations of the Lie group on one hand, and we talk about representations of Lie algebras on the other hand.

Now, what confuses me is the part where Zee introduces the "punchline", where we can think of the antisymmetric tensors as ##N \times N## matrices and writes ##T^{ij} = \sum_{a=1}^{\frac{1}{2}N(N-1)} A_a \mathcal{J}_a^{ij}##. I understand that we can think of ##T^{ij}## that way and express it in the basis ##\mathcal{J}_a^{ij}##. The question is: why do this? What's the end goal? What's the main point? My guess is that Zee is trying to connect representations of Lie algebras to Lie groups.

I understand your calculations; in the end I should get ##\delta A_c = -f_{abc} \theta_a A_b##, but so what? What's the point? My guess is that we get how ##A_a## transforms, and since ##f_{abc}## is present in that equation, Zee claims that ##A_a## furnishes the adjoint representation. I need further explanation here.
 
  • #26
shinobi20 said:
[the attached page scans and errata from post #24] …
I read through those pages. He is essentially saying what I said previously. You can view elements of the Lie algebra ##\mathfrak{so}(n)## either as antisymmetric tensors ##T^{ij}## (equivalently, skew-symmetric matrices) or as vectors ##A_a##. This is nothing more than the statement that the vector space of antisymmetric rank-2 tensors over an ##N##-dimensional vector space is a ##\frac{1}{2}N(N-1)##-dimensional vector space. So we can think of ##T^{ij}## as a vector of length ##\frac{1}{2}N(N-1)##. The adjoint representation acts on vectors of length ##\frac{1}{2}N(N-1)##, and this is equivalent to just acting via the bracket on a ##T^{ij}##, à la ##[T^{lm}, T^{ij}]##. That's all he is saying.

Maybe this explanation helps because I don't know what else to say, https://physics.stackexchange.com/q...e-adjoint-of-son-section-in-zees-group-theory
 
  • #27
shinobi20 said:
The question is, why do this? What's the end goal? What's the main point?
I don’t know, I don’t use Zee’s books. Maybe he wants to introduce you to the adjoint representation, if he has not done that by this point.
Note that a “similar” thing can be done in the ##SU(n)## group by expanding the traceless ##(1,1)## tensor representation in terms of the Hermitian generators in the defining representation: $$\psi^{i}{}_{j} = 2 \varphi^{a} (F_{a})^{i}{}_{j}, \tag{1}$$ and again you can show that ##\varphi^{a}## is an ##n^{2} - 1## component vector transforming in the adjoint representation. Conversely, any adjoint vector ##\varphi^{a}## can be written in terms of the traceless ##(1,1)## tensor as $$\varphi^{a} = \operatorname{Tr}(\psi F_{a}) = \psi^{i}{}_{j}(F_{a})^{j}{}_{i}, \tag{2}$$ if we normalize the generators according to $$\operatorname{Tr}(F_{a}F_{b}) = \frac{1}{2} \delta_{ab}.$$

The non-trivial task, which I leave as an exercise for you, is to invert Eq. (2) and obtain Eq. (1) from it.
 
  • #28
jbergman said:
I read through those pages. He is essentially saying what I said previously. You can view elements of the Lie algebra ##\mathfrak{so}(n)## either as antisymmetric tensors ##T^{ij}## or as vectors ##A_a##. …
I see, your response clarified some concepts even further. I'll summarize again what I have understood up until now. Please correct me if I'm wrong.

Elements of the Lie group are not the same as elements of the Lie algebra. So, the mechanism of representations for Lie groups and Lie algebras is not necessarily the same, although there are some parallels. Their representations may be related through the matrix exponential.

Lie Groups
The main point of Lie group representations is that we can furnish different dimensions of representation for a given Lie group, i.e. scalar, vector, and tensor.

Given a Lie group ##SO(N)##, the scalar furnishes a ##1##-dimensional representation, the vector ##V^i## furnishes an ##N##-dimensional (fundamental) representation, and the tensor ##T^{ij}## furnishes an ##N^2##-dimensional representation. However, the tensor representation is reducible, decomposing into the ##1##-dimensional trace ##S##, the ##\frac{1}{2}N(N-1)##-dimensional antisymmetric ##A^{ij}##, and the ##\left(\frac{1}{2}N(N+1)-1\right)##-dimensional symmetric-traceless ##S^{ij}## components.

In addition, for the special case of ##SO(3)##, we can construct higher-dimensional representations by studying higher-rank symmetric-traceless tensors; antisymmetric tensors do not give any new higher-dimensional representations, because we can contract them with the Levi-Civita symbol and obtain something equivalent to a lower-dimensional representation already studied, like the vector representation. This is discussed by Zee. This is all we need to know about the basics of Lie group representations.

Lie Algebras
The elements (generators) ##\mathcal{J}_a## of the Lie algebra ##\mathfrak{so}(N)## may be represented in the vector (fundamental) representation, which is ##N##-dimensional; this representation of ##\mathfrak{so}(N)## acts on the ##N##-dimensional vector space ##\mathbb{R}^N##. The key here is that what furnishes this ##N##-dimensional representation is exactly the ##N##-dimensional vector space ##\mathbb{R}^N##.

One representation that always exists for Lie algebras is the adjoint representation. Without going into the details, the point is that given an element ##\mathcal{J}_a## of ##\mathfrak{so}(N)##, it can be represented by the structure constants ##f_{abc}## such that ##(\mathcal{J}_a)_{bc} = f_{abc}##, where ##bc## act as matrix indices for the generator ##\mathcal{J}_a##. Each of the indices ##a,b,c## runs through ##1, \ldots, \frac{1}{2}N(N-1)##, so there are ##\frac{1}{2}N(N-1)## generators whose adjoint representation is ##\frac{1}{2}N(N-1)##-dimensional. Now, what furnishes this adjoint representation? I'll have to be more precise, as follows.

Let ##\mathfrak{so}(N)## be a Lie algebra, and let ##\mathfrak{gl}(\mathfrak{so}(N))## be the Lie algebra of linear endomorphisms of ##\mathfrak{so}(N)##. Specifically, the adjoint representation is the mapping,

$$ad: \mathfrak{so}(N) \rightarrow \mathfrak{gl}(\mathfrak{so}(N))$$
$$\qquad \mathcal{J}_a \mapsto ad_{\mathcal{J}_a} := [\mathcal{J}_a, \cdot]$$

where ##ad_{\mathcal{J}_a} = [\mathcal{J}_a, \cdot]## is the operator on ##\mathfrak{so}(N)## defined by,

$$ad_{\mathcal{J}_a}: \mathfrak{so}(N) \rightarrow \mathfrak{so}(N)$$
$$\qquad \mathcal{J}_b \mapsto [\mathcal{J}_a, \mathcal{J}_b]$$

What this means is that for a fixed ##a##, hence a fixed element (generator) ##\mathcal{J}_a## belonging to ##\mathfrak{so}(N)##, it acts on all the basis generators ##\mathcal{J}_b##, where ##b## runs through ##1, \ldots, \frac{1}{2}N(N-1)##, and gives an element also belonging to ##\mathfrak{so}(N)##. In terms of representations, we interpret the elements of ##\mathfrak{so}(N)## as antisymmetric tensors of dimension ##N##. Now, taking the ##\frac{1}{2}N(N-1)##-dimensional adjoint representation ##(\mathcal{J}_a)_{bc} = f_{abc}##, it acts on the ##\frac{1}{2}N(N-1)## generators, each of which is ##N##-dimensional. Thus, these ##\frac{1}{2}N(N-1)## generators, which are just elements of ##\mathfrak{so}(N)##, are what furnish the adjoint representation.

This can be realized for an arbitrary antisymmetric tensor ##T^{ij} = \sum_{b=1}^{\frac{1}{2}N(N-1)} A_b \mathcal{J}_b^{ij}## where ##i,j = 1, \ldots, N## and ##b = 1, \ldots, \frac{1}{2}N(N-1)##. This is what I've stated above where each generator in ##\mathfrak{so}(N)## is ##N##-dimensional.

I'll capture what Zee is saying (refer to the screenshots in post #24) and your responses by the following. An antisymmetric tensor transforms as ##\delta T = \theta_a [\mathcal{J}_a, T]##, here we can apply what I've stated above, i.e. ##\mathcal{J}_a## in the adjoint representation acts on ##T## by acting on the basis ##\mathcal{J}_b## where ##T^{ij} = \sum_{b=1}^{\frac{1}{2}N(N-1)} A_b \mathcal{J}_b^{ij}##. So, ##\delta T = \theta_a A_b [\mathcal{J}_a, \mathcal{J}_b] = -\theta_a A_b f_{abc} \mathcal{J}_c##. However, we also know that ##\delta T = \delta A_c \mathcal{J}_c##. Thus, ##\delta A_c = - f_{abc} \theta_a A_b##.

Addressing my confusion above: I thought that I should write ##A_b## as a column vector and that the adjoint representation is given by ##\mathcal{J}_b^{ij}##, which results in ##T^{ij}## as shown in my post #1, but that is wrong; ##\mathcal{J}_b^{ij}## should not appear, since it is the basis itself. Focusing on ##\mathfrak{so}(4)##, the column vector has six entries, one for each basis element ##\mathcal{J}_b^{ij}##, and what appears in each entry should be ##A_b## itself.

$$\begin{bmatrix} A_1 \\\ A_2 \\\ A_3 \\\ A_4 \\\ A_5 \\\ A_6 \end{bmatrix}$$

What transforms this column? From ##\delta A_c = - f_{abc} \theta_a A_b##, it should be the structure constant ##f_{abc}##, which is just ##(\mathcal{J}_a)_{bc} = f_{abc}##, but of course there are factors ##-\theta_a## that should be included. This makes sense, since this factor depends on ##a##, the chosen fixed element ##\mathcal{J}_a##! So, yes, it's the structure constant that transforms ##A_b##, with an extra constant included. If we choose ##a=1##, where ##(\mathcal{J}_1)_{bc} = f_{1bc}##, then

$$-
\begin{bmatrix}

f_{111} \theta_1 & f_{121} \theta_1 & f_{131} \theta_1 & f_{141} \theta_1 & f_{151} \theta_1 & f_{161} \theta_1\\\

f_{112} \theta_1 & f_{122} \theta_1 & f_{132} \theta_1 & f_{142} \theta_1 & f_{152} \theta_1 & f_{162} \theta_1\\\

f_{113} \theta_1 & f_{123} \theta_1 & f_{133} \theta_1 & f_{143} \theta_1 & f_{153} \theta_1 & f_{163} \theta_1\\\

f_{114} \theta_1 & f_{124} \theta_1 & f_{134} \theta_1 & f_{144} \theta_1 & f_{154} \theta_1 & f_{164} \theta_1\\\

f_{115} \theta_1 & f_{125} \theta_1 & f_{135} \theta_1 & f_{145} \theta_1 & f_{155} \theta_1 & f_{165} \theta_1\\\

f_{116} \theta_1 & f_{126} \theta_1 & f_{136} \theta_1 & f_{146} \theta_1 & f_{156} \theta_1 & f_{166} \theta_1

\end{bmatrix} \begin{bmatrix} A_1 \\\ A_2 \\\ A_3 \\\ A_4 \\\ A_5 \\\ A_6 \end{bmatrix}
$$

This adjoint representation matrix acting on the vector ##A_b## gives the change ##\delta A_c##, and hence the new components ##A'_c = A_c + \delta A_c##. Notice the minus sign in front and the extra factor ##\theta_1##. So this is what is meant when we say that ##A_b## furnishes the adjoint representation. Of course, it is the vector space ##\mathfrak{so}(4)## that furnishes the adjoint representation, but in terms of the six basis generators of ##\mathfrak{so}(4)##, it is realized by the column vector ##A_b##.

I know this is a bit long, but please comment if there are any mistakes.
 
