Adjoint representation correspondence?

ismaili
Dear All,

I'm reading Georgi's text on Lie algebras, 2nd edition.
In chapter 6 he introduces "Roots and Weights". What I don't understand is the discussion in section 6.2 about the adjoint representation. He says: "The adjoint representation is particularly important. Because the rows and columns of the matrices defined by [T_a]_{bc} = -if_{abc} are labeled by the same index that labels the generators, the states of the adjoint representation correspond to the generators themselves."
The last sentence is the point I don't understand. Why do the states of the adjoint representation correspond to the generators? He then denotes the state corresponding to an arbitrary generator X_a as |X_a\rangle, and moreover
\alpha|X_a\rangle + \beta|X_b\rangle = |\alpha X_a + \beta X_b\rangle
Could anybody show me why any state in the adjoint representation corresponds to a generator? Thanks a lot!
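For concreteness, here is a quick numerical sanity check (not from Georgi's book; it uses su(2), where f_{abc} = \epsilon_{abc}, and all names in it are just illustrative) that the matrices defined by [T_a]_{bc} = -if_{abc} really do close into the same algebra as the generators:

```python
import numpy as np

# su(2) structure constants: f_{abc} = Levi-Civita symbol epsilon_{abc}
# (the choice of su(2) is only for illustration)
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

# Adjoint-representation generators: (T_a)_{bc} = -i f_{abc}
T = -1j * f  # T[a] is the 3x3 matrix for generator a

# Check that they satisfy [T_a, T_b] = i f_{abc} T_c
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * np.einsum('c,cij->ij', f[a, b], T)
        assert np.allclose(comm, rhs)

print("The matrices (T_a)_{bc} = -i f_{abc} close into the su(2) algebra.")
```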
 
ismaili said:
Could anybody show me why any state in the adjoint representation corresponds to a generator?

Consider a column vector v in the adjoint representation. We apply the matrix T_a to v to produce a new vector v'. Writing this out explicitly, we have

v'_b = (T_a)_{bc} v_c = -i f_{abc} v_c

Now consider the following. Instead of collecting the coefficients v_c of the initial vector into a column vector, construct the matrix v_c T_c \equiv \vec{v} \cdot \vec{T}. This matrix now represents the state with components v_c (instead of a column vector).

Next, we want to apply a transformation (generated by T_a) to this state. Instead of simply multiplying our "state" by the matrix T_a, we declare that a transformation is carried out by taking the commutator of the state with the matrix generating the transformation. So the transformed "state" is given by

(warning: I am doing this from memory, so I may get some minus signs wrong)

\vec{v}\,' \cdot \vec{T} = [T_a, v_c T_c] = i v_c f_{acd} T_d

where I have avoided using an index "b" on the right-hand side to keep things clear.

Now, say we want the coefficient v'_b (to compare with the formula obtained above using column vectors). This is the coefficient of the matrix T_b on the left-hand side, so we must set d = b on the right-hand side as well. We get

v'_b = i v_c f_{acb}

Using the antisymmetry of the structure constants, we finally get

v'_b = -i f_{abc} v_c

which is the same result we obtained with states written as column vectors.

So in this representation, we can use the generators themselves to build the states! But again, this also implies that we use commutators to "apply operators to the states".
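Here is a small numerical sketch of the argument above (my own addition, not from the thread; it assumes su(2), where f_{abc} = \epsilon_{abc}, and uses the spin-1/2 matrices T_a = \sigma_a/2 for the "state as a matrix" picture):

```python
import numpy as np

# su(2) structure constants: f_{abc} = epsilon_{abc}
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

# Adjoint matrices acting on column vectors: (T_a)_{bc} = -i f_{abc}
T_adj = -1j * f

# Fundamental (spin-1/2) generators T_a = sigma_a / 2, used to build the
# "state as a matrix" v_c T_c and to take commutators
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
T_fund = sigma / 2

rng = np.random.default_rng(0)
v = rng.normal(size=3)   # components of an adjoint-rep state
a = 1                    # transform with generator T_a (illustrative choice)

# Picture 1: column vector, v'_b = (T_a)_{bc} v_c
v_prime_column = T_adj[a] @ v

# Picture 2: state as matrix v.T, transformed via the commutator
# [T_a, v_c T_c] = i v_c f_{acd} T_d; extract the components with
# v'_d = 2 tr(T_d [T_a, v.T]), using tr(T_c T_d) = delta_{cd}/2
vT = np.einsum('c,cij->ij', v, T_fund)
comm = T_fund[a] @ vT - vT @ T_fund[a]
v_prime_commutator = np.array([2 * np.trace(T_fund[d] @ comm) for d in range(3)])

assert np.allclose(v_prime_column, v_prime_commutator)
print(v_prime_column)
```

Both routes give the same transformed components v'_b, which is exactly the correspondence between column-vector states and generators.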
 
nrqed said:
So in this representation, we can use the generators themselves to build the states! But again, this also implies that we use commutators to "apply operators to the states".
Thank you very much!

So, given a state in some basis of the adjoint representation (so the state is a column vector v_a), we can equivalently realize the adjoint representation by taking linear combinations of generators, v_a T_a, as the states, together with the transformation rule defined by the commutator as you described.
So, in a sense, there is a map from states to generators in the adjoint representation

v_a \rightarrow v_a T_a

And probably this map is one-to-one, so that Georgi uses the generator to label the state, |X_a\rangle, where this state corresponds to the generator X_a.

I think I basically got the idea. (Is this so trivial? Georgi didn't state it at all...)

-------------------------------

However, when I read section 6.1 just now, I found something I don't really understand.
He says: "Cartan generators can be simultaneously diagonalized. After diagonalization of the Cartan generators, the states of the representation D can be written as |\mu,x,D\rangle, where
H_i|\mu,x,D\rangle = \mu_i|\mu,x,D\rangle\quad---(*)
(H_i is a hermitian Cartan generator) and x is any other label that is necessary to specify the state."

My question is: |\mu,x,D\rangle should be the basis for the states of the representation, right? Not all states in the representation can satisfy eq. (*); for example, a linear combination of |\mu,x,D\rangle states with different weights cannot satisfy eq. (*). So I don't understand why every state in the representation can be written as |\mu,x,D\rangle?

Thanks for any illumination!
 
ismaili said:
So, in a sense, there is a map from states to generators in the adjoint representation, v_a \rightarrow v_a T_a, and probably this map is one-to-one, so that Georgi uses the generator to label the state |X_a\rangle. I think I basically got the idea. (Is this so trivial? Georgi didn't state it at all...)

You are welcome. Well, the first time I "got it", I did not find it a trivial concept.:smile:

I have to say that I never really found Georgi's book to be very clear. I tried to learn from him a few times and ended up using other references.

ismaili said:
My question is: |\mu,x,D\rangle should be the basis for the states of the representation, right? So I don't understand why every state in the representation can be written as |\mu,x,D\rangle?

Yes, he means the basis states.
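To make the "basis states" point concrete, here is a small sketch (again my own, not from the book; it assumes su(2), where there is a single Cartan generator that we can take to be H = T_3 in the adjoint representation). Diagonalizing H gives the basis states |\mu\rangle with weights \mu = -1, 0, +1, and a generic linear combination of them is not an eigenstate of H:

```python
import numpy as np

# su(2) structure constants and adjoint generators (T_a)_{bc} = -i f_{abc}
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0
T_adj = -1j * f

# Single Cartan generator H = T_3 (index 2); it is hermitian, so it can be
# diagonalized with an orthonormal set of eigenvectors
H = T_adj[2]
weights, basis = np.linalg.eigh(H)
print(np.round(weights, 6))  # approximately [-1, 0, 1]: the weights mu

# Each column of `basis` is a basis state |mu> satisfying H|mu> = mu|mu>
for k, mu in enumerate(weights):
    assert np.allclose(H @ basis[:, k], mu * basis[:, k])

# A generic linear combination of different |mu> is NOT an eigenstate of H,
# which is why Georgi's |mu, x, D> label the basis states, not arbitrary states
v = basis[:, 0] + basis[:, 2]  # mix the mu = -1 and mu = +1 states
print(np.linalg.matrix_rank(np.column_stack([v, H @ v])))  # 2: H v not parallel to v
```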
 
Hi,
nrqed said:
I tried to learn from him a few times and ended up using other references.
I just got this book (Georgi's)... I don't find it very clear either.
Could you tell me what these "other references" are that you found useful?

I also have "Relativity, Groups, Particles" by Sexl/Urbantke... it looks interesting.

Thanks.
 
Atakor said:
Could you tell me what these "other references" are that you found useful?

As a starting point, I think nothing beats Greiner's book "Symmetries" (I think the full title is "Quantum Mechanics: Symmetries"). There is an appendix that discusses roots, weights, the Cartan classification and so on, and it is very clear. If you find good references, let us know!
 
Other useful references are:

* Cahn's book on semisimple Lie algebras. It's now available free from the author at http://phyweb.lbl.gov/~rncahn/www/liealgebras/book.html , or alternatively via a very inexpensive Dover edition.

* Jan Gutowski's "Part III: Symmetries and Particle Physics" notes, available on his webpage: http://www.mth.kcl.ac.uk/~jbg34/Site/Dr._Jan_Bernard_Gutowski.html
 
Hi,
Thanks for the refs.

I also asked a theoretical physicist who basically "speaks groups"...
According to him and his friends, there is no good self-contained reference on group theory for physicists,
but he thinks Georgi's is the best one, when used together with "Group Theory and Its Application to Physical Problems" by Morton Hamermesh.
I just ordered the latter.
 
