SU(3) Cartan Generators in Adjoint Representation

SUMMARY

The discussion focuses on calculating the weights of the adjoint representation of SU(3) using the Cartan generators T1 and T2, derived from the structure constants of the Gell-Mann matrices. The user encounters issues with diagonalizing the matrices separately, leading to mismatched eigenvectors. They reference Zee's lecture for guidance and explore alternative methods for calculating eigenvalues, including using WolframAlpha for diagonalization. The conversation emphasizes the complexity of working with SU(3) compared to simpler Lie algebras like SU(2).

PREREQUISITES
  • Understanding of Lie algebras and their representations
  • Familiarity with SU(3) and Gell-Mann matrices
  • Knowledge of diagonalization techniques for matrices
  • Experience with commutation relations in quantum mechanics
NEXT STEPS
  • Study the diagonalization of matrices using WolframAlpha for complex representations
  • Learn about the structure constants of SU(3) and their implications
  • Explore the Borel subalgebra and its role in representation theory
  • Investigate the relationship between eigenvalues and eigenvectors in Lie algebra representations
USEFUL FOR

The discussion is beneficial for theoretical physicists, mathematicians, and graduate students specializing in quantum mechanics, representation theory, or particle physics, particularly those working with SU(3) and its applications in gauge theories.

nigelscott
I am trying to work out the weights of the adjoint representation of SU(3) by calculating the 2 Cartan
generators as follows:

I obtain the structure constants from ##\lambda_3## and ##\lambda_8## using:

##[\lambda_a, \lambda_b] = 2if_{abc}\lambda_c##

I get:

##f_{312} = 1##
##f_{321} = -1##
##f_{345} = 1/2##
##f_{354} = -1/2##
##f_{367} = -1/2##
##f_{376} = 1/2##

##f_{845} = \sqrt{3}/2##
##f_{854} = -\sqrt{3}/2##
##f_{876} = -\sqrt{3}/2##
##f_{867} = \sqrt{3}/2##


I construct the matrices from:

##(T_1)_{ab} = -if_{3ab}## and ##(T_2)_{ab} = -if_{8ab}##
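For concreteness, a minimal numerical sketch of this construction (my own addition, assuming numpy and the conventions ##[\lambda_a,\lambda_b]=2if_{abc}\lambda_c## and ##\operatorname{Tr}(\lambda_a\lambda_b)=2\delta_{ab}##, so that ##f_{abc}=\frac{1}{4i}\operatorname{Tr}([\lambda_a,\lambda_b]\lambda_c)##):

```python
import numpy as np

# Gell-Mann matrices lambda_1 ... lambda_8 (slot 0 unused so indices match labels)
l = np.zeros((9, 3, 3), dtype=complex)
l[1] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l[2] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l[3] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l[4] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
l[5] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
l[6] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
l[7] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
l[8] = np.diag([1, 1, -2]) / np.sqrt(3)

def f(a, b, c):
    """Structure constant f_abc = Tr([l_a, l_b] l_c) / (4i)."""
    comm = l[a] @ l[b] - l[b] @ l[a]
    return (np.trace(comm @ l[c]) / 4j).real

# Adjoint Cartan generators: (T_1)_ab = -i f_{3ab}, (T_2)_ab = -i f_{8ab}
T1 = np.array([[-1j * f(3, a, b) for b in range(1, 9)] for a in range(1, 9)])
T2 = np.array([[-1j * f(8, a, b) for b in range(1, 9)] for a in range(1, 9)])

print(np.round(np.linalg.eigvalsh(T1), 3))  # expect 0, 0, ±1 and ±1/2 (twice each)
print(np.round(np.linalg.eigvalsh(T2), 3))  # expect 0 (four times) and ±√3/2 (twice each)
```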

I then diagonalize the resulting matrices and get the eigenvalues:

##T_1 = \operatorname{diag}(-1/2, -1/2, 1, 0, 0, -1, 1/2, 1/2)##

##T_2 = \operatorname{diag}(-\sqrt{3}/2, -\sqrt{3}/2, 0, 0, 0, 0, \sqrt{3}/2, \sqrt{3}/2)##


Which is close but not correct. I think the problem may be with diagonalizing the matrices individually rather than simultaneously, as the eigenvectors don't match. Before I begin that journey, however, I wanted to sanity check what I am doing. I am following Zee's lecture, which can be found at:

https://www.youtube.com/watch?v=u-g9hzDByJ8 (around minute 5)

Any help would be appreciated.
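As for the simultaneous-diagonalization worry: since the two matrices commute, one standard workaround (continuing the numpy sketch above; the generic-combination trick is not from the video) is to diagonalize a single generic real combination ##T_1 + cT_2## and then read both weights off each of its eigenvectors:

```python
# T1 and T2 commute, so a generic real combination T1 + c*T2 has simultaneous
# eigenvectors automatically: pick c so that no two distinct weight pairs give
# the same combination (c = sqrt(2) works here).
c = np.sqrt(2)
vals, V = np.linalg.eigh(T1 + c * T2)

weights = []
for k in range(8):
    v = V[:, k]                          # common eigenvector of T1 and T2
    t1 = (v.conj() @ T1 @ v).real        # its T1 eigenvalue
    t2 = (v.conj() @ T2 @ v).real        # its T2 eigenvalue
    weights.append((round(t1, 3), round(t2, 3)))

print(sorted(weights))
# expect (0, 0) twice plus the six roots (±1, 0) and (±1/2, ±√3/2)
```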
 
May I ask why you want to do this? There's a reason why textbooks use ##\mathfrak{sl}(2)##, resp. ##\mathfrak{su}(2)##, as the example. ##\mathfrak{su}(3)## is a case where one should seriously consider writing a computer program instead. I assume the CSA is spanned by the diagonal matrices ##i \lambda_3, i\lambda_8##, and as a first step I would try to find a basis for the Borel subalgebra (maximal solvable subalgebra). Then you at least know the raising and lowering elements. You can find an educated guess here. Working out the entire example with structure constants is quite troublesome, so I would stay as general as possible for as long as possible, i.e. write down the equations using basis vectors rather than structure constants. Besides all this, I guess the representations could be found on the internet.
 
Yes, your point is well taken. I realize that this is a tedious approach and that the eigenvalues can be found more easily using ##[H_i,E_\alpha] = \alpha_i E_\alpha##, where the ##E_\alpha## are the I, U and V spin operators. However, my approach should work, correct? In the defining rep the 3 x 3 Cartan generators share the same eigenbasis, so I would expect that the 8 x 8's would too. However, I am not seeing that. Either my structure constants are wrong or my approach is not valid. I used ##\lambda## not ##i\lambda## in my calculations. Could that be a problem?
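For what it's worth, the ##[H_i,E_\alpha]=\alpha_i E_\alpha## route is quick to check symbolically in the defining representation. A small sympy sketch, where the normalization ##H_1=\lambda_3/2##, ##H_2=\lambda_8/2## and the choice of raising operators are my assumptions:

```python
import sympy as sp

def e(i, j):
    """3x3 matrix unit with a single 1 at position (i, j)."""
    m = sp.zeros(3, 3)
    m[i, j] = 1
    return m

# Cartan generators in the defining rep (normalization lambda/2 assumed)
H1 = sp.Rational(1, 2) * (e(0, 0) - e(1, 1))               # lambda_3 / 2
H2 = (e(0, 0) + e(1, 1) - 2 * e(2, 2)) / (2 * sp.sqrt(3))  # lambda_8 / 2

# I+, U+, V+ raising operators as matrix units e_12, e_23, e_13
raising = {"I+": (0, 1), "U+": (1, 2), "V+": (0, 2)}

for name, (i, j) in raising.items():
    E = e(i, j)
    # [H, e_ij] = (H_ii - H_jj) e_ij for diagonal H, so the root component
    # is just the (i, j) entry of the commutator
    a1 = sp.simplify((H1 * E - E * H1)[i, j])
    a2 = sp.simplify((H2 * E - E * H2)[i, j])
    print(name, (a1, a2))
# expected: I+ -> (1, 0), U+ -> (-1/2, sqrt(3)/2), V+ -> (1/2, sqrt(3)/2)
```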
 
nigelscott said:
I used ##\lambda## not ##i\lambda## in my calculations. Could that be a problem?
It shouldn't be a problem. I just find it confusing to work with Lie algebras, their representations, and the available theorems while not actually having one: the Gell-Mann matrices are simply not skew-Hermitian, which should affect the eigenvalues, so caution is required.

Wikipedia has the structure constants, so you can check them. I would probably just calculate the ##[\lambda_\alpha,\lambda_\beta]## myself. It should be easy, and the imaginary factor can easily be applied as well, so you can have both versions in parallel, or look them up.

I still find it more convenient to work with ##[H^i,E_\alpha].v = \alpha^i\,E_\alpha.v## together with ##H^i.v=\mu_i v## to get to ##H^i(E_\alpha.v)=(\alpha^i+\mu_i)E_\alpha.v##, rather than doing the whole thing in coordinates. I have only renamed the eigenvalues to ##\mu_i## to avoid confusion with the Gell-Mann matrices. And if the representation isn't the natural one, you'll need this more general approach anyway. However, you said you want to use only the adjoint representation. In that case: why don't you just do matrix multiplications? ##\operatorname{ad} i\lambda_\alpha . i\lambda_\beta = - [ \lambda_\alpha , \lambda_\beta ] = \sum f_{\alpha \beta}^k (i\lambda_k)## gives you eight nice ##8 \times 8## matrices. I would only avoid calculating ##\operatorname{Ad}## by exponentiation.

##\operatorname{ad}i \lambda_3## and ##\operatorname{ad} i \lambda_8## are two commuting semisimple linear transformations, so it should be no problem to verify the basis vectors of the eigenspaces.
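To make the "just do matrix multiplications" route concrete, here is a sketch of mine that expands ##[H,\lambda_b]## in the ##\lambda##-basis directly. It reuses numpy, the Gell-Mann array `l` and the matrices `T1`, `T2` from the sketch in the first post, and uses the Hermitian ##H_1=\lambda_3/2##, ##H_2=\lambda_8/2## rather than ##i\lambda##, so the eigenvalues come out real:

```python
def ad_of(H):
    """8x8 matrix of ad(H) in the basis (lambda_1, ..., lambda_8),
    using the orthogonality Tr(lambda_j lambda_k) = 2 delta_jk."""
    M = np.zeros((8, 8), dtype=complex)
    for b in range(1, 9):
        comm = H @ l[b] - l[b] @ H                       # [H, lambda_b]
        for k in range(1, 9):
            M[k - 1, b - 1] = np.trace(comm @ l[k]) / 2  # coefficient of lambda_k
    return M

adH1, adH2 = ad_of(l[3] / 2), ad_of(l[8] / 2)
print(np.allclose(adH1 @ adH2, adH2 @ adH1))         # the two commute
print(np.allclose(adH1, T1), np.allclose(adH2, T2))  # same matrices as the -i*f construction
```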
 
I am still struggling with interpreting the results I get. For ad(H1) and ad(H2) I get:

[Attached image adjoint.jpg: the ad(H1) and ad(H2) matrices]

Diagonalization using WolframAlpha gives:

ad(H1): diag(-1, -1/2, -1/2, 0, 0, 1/2, 1/2, 1)

ad(H2): diag(0, 0, 0, 0, -√3/2, -√3/2, √3/2, √3/2)

1. All the weights are there, but not in the correct order.
2. 6 of the 8 eigenstates of ad(H1) and ad(H2) match; 2 are different.

I still can't figure out what is going on.

WolframAlpha diagonalization links:

ad(H1): http://www.wolframalpha.com/input/?...,0},+{0,0,0,0,0,-i/2,0,0},+{0,0,0,0,0,0,0,0}}

ad(H2): http://www.wolframalpha.com/input/?...0,0,0,0,0,i√3/2,0,0},{0,0,0,0,0,0,0,0}}
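If it helps: the mismatch is what degenerate eigenvalues produce. ad(H1) has the eigenvalues ±1/2 and 0 twice each, and inside a degenerate eigenspace WolframAlpha (or any solver) is free to return any basis, which need not consist of eigenvectors of ad(H2). A common eigenbasis can be built by diagonalizing ad(H2) restricted to each eigenspace of ad(H1); a sketch continuing the numpy code from the first post (`T1`, `T2` as defined there):

```python
vals1, vecs1 = np.linalg.eigh(T1)        # T1 = ad(H1), T2 = ad(H2) as built earlier
weights = []
for v in np.unique(np.round(vals1, 9)):  # distinct eigenvalues of ad(H1)
    cols = np.isclose(vals1, v)
    P = vecs1[:, cols]                   # orthonormal basis of this eigenspace
    sub = np.linalg.eigvalsh(P.conj().T @ T2 @ P)   # ad(H2) restricted to it
    weights += [(round(float(v), 3), round(float(w), 3)) for w in sub]

print(sorted(weights))
# expect (0, 0) twice plus the six roots (±1, 0) and (±1/2, ±√3/2)
```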
 

Here is what I have.

I used the basis conventions at the end of https://www.physicsforums.com/insights/representations-precision-important/ , i.e. the numbering of the ##\lambda##-matrices, as well as
$$\left\{ \,H_1:=T_3=\frac{1}{2}\lambda_3\; , \;H_2:=Y\; , E_{\pm\alpha} :=iT_{\pm}\; , \;E_{\pm\beta} :=iU_{\pm}\; , \; E_{\pm(\alpha +\beta)}:=iV_{\pm}\;\,\right\}$$

Because it's far easier to calculate commutators, I set ##e_{ij}## to be the matrix with a single ##1## at position ##(i,j)## and ##0## elsewhere. This gave me
$$H_1=\frac{1}{2}e_{11}-\frac{1}{2}e_{22}\; , \;H_2=\frac{1}{3}e_{11}+\frac{1}{3}e_{22}-\frac{2}{3}e_{33} $$
$$
E_\alpha =ie_{12}\; , \;E_{-\alpha}=ie_{21}\; , \;E_{\beta}=ie_{23}\; , \;E_{-\beta}=ie_{32}\; , \;E_{\alpha + \beta}=ie_{13}\; , \;E_{-\alpha - \beta}=ie_{31}
$$
in the usual notation of the Cartan subalgebra and the corresponding root vectors.

If I made no sign errors and no typos, I had the following multiplications:
##\operatorname{ad}H_1.(E_{\alpha},E_{-\alpha},E_{\beta},E_{-\beta},E_{\alpha + \beta},E_{-\alpha - \beta}) = (E_{\alpha},-E_{-\alpha},-\frac{1}{2}E_{\beta},\frac{1}{2}E_{-\beta},\frac{1}{2}E_{\alpha + \beta},-\frac{1}{2}E_{-\alpha - \beta})##
##\operatorname{ad}H_2.(E_{\alpha},E_{-\alpha},E_{\beta},E_{-\beta},E_{\alpha + \beta},E_{-\alpha - \beta}) = (0,0,E_{\beta},-E_{-\beta},\sqrt{3}E_{\alpha + \beta},-\sqrt{3}E_{-\alpha - \beta})##
which looks good so far. I also have ##[E_{\pm \alpha}\, , \,E_{\pm \beta}] = \pm i \cdot E_{\pm (\alpha + \beta)}## as a kind of parity check, and I didn't calculate the rest.

If the ##E_\alpha## are to be turned into ##SU(3)## elements, then probably ##E_\alpha \longrightarrow i \cdot \lambda = E_\alpha + E_{-\alpha}\; , \; E_{-\alpha} \longrightarrow E_\alpha - E_{-\alpha}## etc. are the right choices. Otherwise, with the choices I've made, we only get an isomorphic version of it, i.e. still the same Lie algebra, but no longer skew-Hermitian. But as a Lie algebra, my non-Hermitian basis is ultimately easier to handle.
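In case it is useful, a small sympy transcription of this basis (my sketch, mirroring the definitions above) that prints the coefficient of each root vector under ##\operatorname{ad}H_1## and ##\operatorname{ad}H_2##, so the factors in the multiplication table can be double-checked:

```python
import sympy as sp

def e(i, j):
    """3x3 matrix unit (0-based indices, so e(0, 1) is e_12 above)."""
    m = sp.zeros(3, 3)
    m[i, j] = 1
    return m

# H1, H2 and the root vectors exactly as defined above
H1 = sp.Rational(1, 2) * e(0, 0) - sp.Rational(1, 2) * e(1, 1)
H2 = sp.Rational(1, 3) * e(0, 0) + sp.Rational(1, 3) * e(1, 1) - sp.Rational(2, 3) * e(2, 2)
roots = {
    "E_{+a}":   (0, 1), "E_{-a}":   (1, 0),
    "E_{+b}":   (1, 2), "E_{-b}":   (2, 1),
    "E_{+a+b}": (0, 2), "E_{-a-b}": (2, 0),
}

for name, (i, j) in roots.items():
    E = sp.I * e(i, j)
    for hname, H in [("ad H1", H1), ("ad H2", H2)]:
        comm = H * E - E * H              # = (H_ii - H_jj) * E for diagonal H
        print(hname, name, sp.simplify(comm[i, j] / E[i, j]))
```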
 
