Georgi Lie Algebras In

Gold Member
Georgi "Lie Algebras In..."

It is an old book; I took it out of the library two days ago. And I am ashamed that my instructor did not suggest it during our undergraduate group theory course.

The selection of exercises is very good. And CarlB will enjoy the one at the end of the first chapter: to show that the natural representation in 3 dimensions of the group of permutations of three elements is reducible, and to write down the resulting 1-dimensional and 2-dimensional irreps.
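For anyone who wants to check that exercise numerically, here is a minimal sketch (Python with numpy; the helper names are my own, not Georgi's). The 3-dimensional permutation representation fixes the vector (1,1,1), and the character inner product confirms it splits into exactly two irreducible pieces, of dimensions 1 and 2:

```python
import itertools

import numpy as np

# Permutation matrices for S3: D(p) sends basis vector e_i to e_{p(i)}
def perm_matrix(p):
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

reps = [perm_matrix(p) for p in itertools.permutations(range(3))]

# The all-ones vector spans an invariant 1-dim subspace (trivial irrep),
# so the 3-dim representation is reducible
v = np.ones(3)
assert all(np.allclose(D @ v, v) for D in reps)

# Character inner product <chi, chi> = 2: the rep contains exactly two
# irreducible pieces, which must be the 1-dim and 2-dim irreps
chars = np.array([np.trace(D) for D in reps])
assert np.isclose(chars @ chars / len(reps), 2.0)
```

The complementary invariant subspace, the vectors whose components sum to zero, is what carries the 2-dimensional irrep.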

CarlB
Homework Helper
I've bought and lost 2 copies of this excellent book. The only thing I can warn you about is that it somehow turns up missing just when you decide you need to read it once again. I bought, and I think I'm still in possession of, a 3rd copy. But the last time I looked around for it...

Since the standard model (along with relativity) is built on symmetry, Lie algebras have to be understood at a deep level. Of course I reject these foundations and am busy replacing symmetry with geometry. Geometry is much more restrictive than symmetry. This makes packing the experimental observations into the box of the theory much easier when the theory is based on symmetry.

But it's a lot easier to make calculations in geometry than in symmetry, because in geometry the language is unified. As an example, consider the rotation and boost transformations. A rotation around the z axis by an angle theta is:

$$M \to e^{-\theta\hat{x}\hat{y}/2} M e^{+\theta\hat{x}\hat{y}/2}$$

where $$\hat{x} = \gamma_1, \hat{y} =\gamma_2,$$ etc.

A boost by eta in the z direction is:

$$M \to e^{-\eta\hat{z}\hat{t}/2} M e^{+\eta\hat{z}\hat{t}/2}$$

The above transformations are by "bivectors", that is, they are induced by the product of two vectors, for example $$\hat{x}\hat{y}$$. The useful thing about bivectors is that when you multiply a bivector by a vector, only two things can happen. If the bivector does not contain the vector, they commute and the two exponentials above cancel. If the bivector does contain the vector, they anticommute, so commuting one exponential past the vector flips the sign in its exponent; the two half-angle exponentials then combine into a single full-angle one, which is why there is a factor of 1/2.
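This commute/anticommute rule is easy to check numerically. Here is my own sketch (Python with numpy), using the standard Dirac-representation gamma matrices with signature (+,-,-,-); the overall sense of the resulting rotation depends on one's sign conventions:

```python
import numpy as np

# Dirac-representation gammas: g0 = diag(I,-I), gi = [[0, si], [-si, 0]]
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3))

B = g1 @ g2  # the bivector xy; it squares to -1
assert np.allclose(B @ B, -np.eye(4))

# Vectors not contained in the bivector commute with it...
assert np.allclose(B @ g3, g3 @ B) and np.allclose(B @ g0, g0 @ B)
# ...while vectors contained in it anticommute
assert np.allclose(B @ g1, -g1 @ B)

# Because B^2 = -1, exp(-theta B/2) = cos(theta/2) - sin(theta/2) B,
# and the two half-angle factors combine into one full-angle rotation
th = 0.3
R = np.cos(th / 2) * np.eye(4) - np.sin(th / 2) * B
Rinv = np.cos(th / 2) * np.eye(4) + np.sin(th / 2) * B
assert np.allclose(R @ g1 @ Rinv, np.cos(th) * g1 - np.sin(th) * g2)
```

The last line is exactly the half-angle mechanism described above: the vector in the plane of the bivector picks up the full angle theta.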

In the geometric language, the discrete symmetries, such as parity, can also be put into infinitesimal form. This means that you can cross the boundary between different superselection sectors with geometric transformations. For parity, the generator of the parity transformation is $$\gamma_0 = \hat{t}.$$ In a theory based on operators only, one cannot transform spinors, but must always instead deal with the initial and final states as one item. For the spinor theory, the parity transformation modifies operators by $$\gamma_0 M \gamma_0$$. In the geometric language, this can be put into the same form as the rotations and boosts (and therefore unified with them):

$$M \to e^{-(\pi/2)\hat{t}/2} M e^{+(\pi/2)\hat{t}/2}$$

where it's quite possible I've left off a factor of i, depending on how you prefer your signatures. This makes the geometric language much easier to understand than Georgi's book. Part of the reason I'm typing up my notes is to try to put together a set of notes that lets a graduate student learn the subject with the same ease that Georgi's book has provided for the past 30 years.

The exponentials above are the Lie group elements. The Lie algebra elements are just the bivectors for the rotations and boosts, and $$\gamma_0$$ for the parity transformation. In other words, in addition to unifying the discrete and continuous quantum symmetries, one also puts the algebras and the groups into the same mathematical object. It's far more elegant this way than the usual approach.

Gold Member
CarlB said:
I've bought and lost 2 copies of this excellent book. The only thing I can warn you about is that it somehow turns up missing just when you decide you need to read it once again. I bought, and I think I'm still in possession of, a 3rd copy. But the last time I looked around for it...
:rofl: :rofl: :rofl: :rofl:

In the geometric language, the discrete symmetries, such as parity, can also be put into infinitesimal form.

Hmm, perhaps infinitesimal is not the right word. In fact, it is not the case that you put a symmetry into infinitesimal form; rather, you generate a (one-parameter family of) continuous symmetries by exponentiating the so-called "infinitesimal generator".

There is some poverty in this language of generators, I agree. For instance, Newton's proof of conservation of angular momentum is more general than Noether's.
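The exponentiation described above can be seen in a tiny example (Python with numpy; my own sketch, not from the thread): the antisymmetric generator of so(2) exponentiates to the one-parameter family of plane rotations.

```python
import numpy as np

# Infinitesimal generator of rotations in the plane (a basis of so(2))
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def expm_series(A, terms=30):
    # Matrix exponential by Taylor series; adequate for small matrices
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Exponentiating theta*J generates the finite rotation R(theta)
theta = 0.7
R = expm_series(theta * J)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)
```

Varying theta sweeps out the whole one-parameter subgroup, which is exactly what "generating a continuous symmetry" means here.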

Demystifier
Gold Member
Personally, I do not like this book. As a book with similar content, I like
R. N. Cahn, Semi-Simple Lie Algebras and Their Representations (1984).

Is there an errata list for this book (2nd edition)? I am wondering about some of the things that are stated in just the first few pages. For instance, he says "This is $$Z_{3}$$, the cyclic group of order 3.", then some material defining a representation, and then "the dimension of a representation is the dimension of the space on which it acts - the representation (1.4 [for $$Z_{3}$$]) is 1 dimensional."

Why isn't the representation 3 dimensional? And if it is, where can I find an errata sheet for this book?

CarlB
Homework Helper

Why isn't the representation 3 dimensional?

The representation is over the complex numbers; that's why it's 1-dimensional. A 3-dimensional rep would involve 3x3 matrices. (And hey, don't be discouraged, it's a good book.)
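To make that concrete, here is a sketch (Python; the variable names are mine) of the representation in question: each D(g) is a 1x1 complex matrix, i.e. a single cube root of unity, acting on the one-complex-dimensional space C.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
D = {'e': 1, 'a': w, 'b': w * w}  # 1x1 "matrices": single complex numbers

# Multiplication table of Z3
table = {('e','e'):'e', ('e','a'):'a', ('e','b'):'b',
         ('a','e'):'a', ('a','a'):'b', ('a','b'):'e',
         ('b','e'):'b', ('b','a'):'e', ('b','b'):'a'}

# The representation property D(g1) D(g2) = D(g1 g2) holds, and the
# space acted on is C itself, hence the representation is 1-dimensional
assert all(abs(D[g1] * D[g2] - D[table[g1, g2]]) < 1e-12 for g1, g2 in table)
```

Multiplying a complex number by a complex number lands back in C, so the carrier space really is one-dimensional (over C), even though each D(g) happens to lie on a circle in the two-real-dimensional complex plane.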

George Jones
Staff Emeritus
Gold Member

Z_3 is the cyclic group of order 3. What is the space on which representation (1.4) acts? What is the dimension of this space?

The representation is over the complex numbers; that's why it's 1-dimensional. A 3-dimensional rep would involve 3x3 matrices. (And hey, don't be discouraged, it's a good book.)

Why did he say "the dimension of a representation is the dimension of the space on which it acts"? I would say that it 'exists' in 1 dimension, but what is it acting on?

I suppose, if elements of a representation only act on other elements of the same representation, then D(a)[D(b)] would be thought of as D(a) acting on D(b), and since D(b) is a complex number, 'the representations act on complex numbers' (then look at all permutations of a, b, e), giving a one-dimensional representation. But this doesn't work for the regular representation, as long as matrices aren't 3-dimensional things (they don't look like 3-dimensional things to me, because they have 9 entries; you do have to take into account the fact that they have to be invertible, but how can that reduce the dimension from 9 to 3?).

So now going by Georgi's hint, it seems we must force the things that the regular representation acts on to be 3 dimensional vectors, because, by definition, the representation would be 3 dimensional. But, why not have it act on 3x3 matrices? These can be perfectly good vector/inner product spaces - you can define the trace as the inner product and all that stuff.

Another thing: complex numbers can 'act on' almost any vector space, by scalar multiplication, etc.

Basically, I am not seeing how this stuff is all locked down and obvious that the dimensions are 1 and 3 for the two different representations.



Georgi has not fully defined his space in the example. It is clear that the complex plane has two real dimensions, but also one complex dimension. In the complex plane, the three elements of his set also live on the unit circle which has one real dimension (I suspect that is the space he refers to).

Finally, a few remarks about the dimension of SO(N), the real orthogonal NxN matrices (with unit determinant). It should be obvious that the 2x2 orthogonal matrices form a 1-dimensional space, for instance: they are parameterized by one real angle. To define a rotation of a 3-vector, one needs two angles for the direction of the axis plus one angle for the magnitude of the rotation (so the space is at most 3-dimensional; it may be intuitive that you cannot do with fewer parameters). In general, an NxN orthogonal matrix belongs to the NxN-dimensional space of general matrices and in addition satisfies a set of orthonormality constraints: the columns of the matrix form N orthonormal vectors. There are N conditions for the unit norm of each column and N(N-1)/2 conditions of orthogonality, so the final dimension of the SO(N) space is
NxN - N - N(N-1)/2 = N(N-1) - N(N-1)/2 = N(N-1)/2;
plug in N=3 and you get 3x2/2 = 3 dimensions.
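That counting is easy to check in a few lines (a sketch in Python; `so_dim` is my own name), and it agrees with the number of independent entries of an antisymmetric NxN matrix, i.e. the dimension of the Lie algebra so(N):

```python
# N^2 entries, minus N unit-norm conditions on the columns, minus
# N(N-1)/2 orthogonality conditions between distinct columns
def so_dim(n):
    return n * n - n - n * (n - 1) // 2

assert [so_dim(n) for n in (2, 3, 4)] == [1, 3, 6]
# Same as the independent entries of an antisymmetric matrix, n(n-1)/2
assert all(so_dim(n) == n * (n - 1) // 2 for n in range(2, 10))
```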

CarlB
Homework Helper

Georgi has not fully defined his space in the example. It is clear that the complex plane has two real dimensions, but also one complex dimension. In the complex plane, the three elements of his set also live on the unit circle which has one real dimension (I suspect that is the space he refers to).

Yeah, Georgi is a little sloppy; one has to look to the examples to understand the book.

Meanwhile, I'm planning on finally finishing off my "ABD" next year.

Georgi has not fully defined his space in the example. It is clear that the complex plane has two real dimensions, but also one complex dimension. In the complex plane, the three elements of his set also live on the unit circle which has one real dimension (I suspect that is the space he refers to).

Finally, a few remarks about the dimension of SO(N), the real orthogonal NxN matrices (with unit determinant). It should be obvious that the 2x2 orthogonal matrices form a 1-dimensional space, for instance: they are parameterized by one real angle. To define a rotation of a 3-vector, one needs two angles for the direction of the axis plus one angle for the magnitude of the rotation (so the space is at most 3-dimensional; it may be intuitive that you cannot do with fewer parameters). In general, an NxN orthogonal matrix belongs to the NxN-dimensional space of general matrices and in addition satisfies a set of orthonormality constraints: the columns of the matrix form N orthonormal vectors. There are N conditions for the unit norm of each column and N(N-1)/2 conditions of orthogonality, so the final dimension of the SO(N) space is
NxN - N - N(N-1)/2 = N(N-1) - N(N-1)/2 = N(N-1)/2;
plug in N=3 and you get 3x2/2 = 3 dimensions.

Well, that is different! He never said in the book that we didn't want stretches or reflections. However, he does say 'Take the group elements themselves to be orthonormal basis vectors for a vector space...', but that doesn't really clue me in to what the $$D(g_{1})$$ in $$D(g_{1})|g_{2}\rangle=|g_{1}g_{2}\rangle$$ is; it just tells me what the relationship between all of the $$|g_{2}\rangle$$'s is. Sure, for the specific case given, I could figure out the matrices using aa=b, bb=a, ee=e, ab=e, ba=e, ea=a, ae=a, etc., and then use e=(1,0,0), a=(0,1,0), b=(0,0,1). If I do that, the representation 'works'. I say 'works' because that wasn't the definition of a representation to begin with; he chooses this

$$D(g_{1})|g_{2}\rangle=|g_{1}g_{2}\rangle$$

but the real definition for the representation was something that did this

$$D(g_{1})D(g_{2})=D(g_{1}g_{2})$$

Where is the switch up? What is this 'trick'?

$$D(g_{1})|g_{2}\rangle=|g_{1}g_{2}\rangle$$
This is the definition of the regular representation. He discusses it after the 1-dimensional Z3 representation.
$$D(g_{1})D(g_{2})=D(g_{1}g_{2})$$
That is the general definition for any representation.

He also mentions that the regular representation has dimension equal to the order of the group (that is pretty obvious). That does not look very economical at first sight. If you keep reading, you will see that the regular representation is very useful: to begin with, as you said, because it can be constructed directly from the multiplication table. Then apply the technology (or have a computer do it for you) to analyze the general properties of any representation!
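To illustrate that last point, here is a sketch (Python with numpy; the names are mine) that builds the regular representation of Z3 directly from its multiplication table via $$D(g_{1})|g_{2}\rangle=|g_{1}g_{2}\rangle$$ and then checks the representation property:

```python
import numpy as np

elements = ['e', 'a', 'b']
# Multiplication table of Z3: a*a = b, b*b = a, a*b = b*a = e
table = {('e','e'):'e', ('e','a'):'a', ('e','b'):'b',
         ('a','e'):'a', ('a','a'):'b', ('a','b'):'e',
         ('b','e'):'b', ('b','a'):'e', ('b','b'):'a'}

def D(g1):
    # Column for |g2> has a single 1, in the row of |g1 g2>
    m = np.zeros((3, 3))
    for col, g2 in enumerate(elements):
        m[elements.index(table[g1, g2]), col] = 1.0
    return m

# The vector definition implies the matrix one: D(g1) D(g2) = D(g1 g2)
assert all(np.allclose(D(g1) @ D(g2), D(table[g1, g2]))
           for g1 in elements for g2 in elements)
# These are exactly the cyclic row shifts: D(a)D(a) = D(b), etc.
assert np.allclose(D('a') @ D('a'), D('b'))
```

Nothing beyond the multiplication table goes in, which is what makes the construction so mechanical.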

The thing that is really bothering me is the change in the type of quantity on each side of the equals sign. We start out basically by saying the D's are matrices. Fine, good! But then we use an identical-looking relationship, except that some of the D's are 'turned' into vectors. And when you actually compute D*vector, you end up with a vector. So, on one hand you have (D*v=v) vector = vector, but on the other, (D*D=D) matrix = matrix. Why do we do this? What is the justification? Yes, we can just say 'we define it that way', but why does it help to define it that way?

So, just thinking here: it is plain to see that each of those matrices either shifts all rows up (with the top row wrapping to the bottom), shifts all rows down (with the bottom row wrapping to the top), or leaves them the same. Each row of the representation matrices has only one nonzero entry, so that multiplication never results in 'mixing'; entries only get moved around, not changed. It is trivial that this row shifting will work on any 3x(whatever) matrix, not just a 3x3 or 3x1; shifting rows is shifting rows. Is this the only reason for defining that vector equation of the regular representation, that it shifts rows?

If that is the case, now it is obvious to me why a regular representation using NxN matrices has dimension N: with N rows in the stuff you are acting on, there are N shifts you can do (down one, down two, ..., down N-1, down N [the same as not shifting]). And it is obvious why it works: D(e) doesn't shift, D(a) goes down one, D(b) goes down two (the same as going up one).

So, to check:
D(e)D(e)=D(e): no shift then no shift = yep, no shift.
D(a)D(a) is two down, which brings any column (r,s,t) to (s,t,r), the same as moving up one, so D(a)D(a)=D(b). Works.
Same with the others.

This shows the behavior, but still, why is the regular representation the row shifting representation? Why is that a good thing? I am continuing to read the book, I just don't see the point of the regular representation yet, I guess.


You are on the right track : the regular representation turns a vector equality (apply operator to vector gets another vector) into an operator equality (the product of two operators gives another operator).

Do not expect to get everything at once. Again, the good thing about the regular representation is that it is straightforward to build; later on you will learn how to extract the essential information about any representation from a given specific one (in the chapter on roots and weights). It's a long book and an old story. You may also want to use another book; despite the quality of the one you have at hand, other points of view are always helpful.

CarlB
Homework Helper

I bought this book the summer before I started physics grad school and didn't understand it past the first chapter. That fall I signed up for a 1-credit reading course to read it. With a few questions answered once per week it was very easy to understand. Huge difference from self-study. I think it would be possible to write a book that worked better for self-study.

Yeah, that is basically what I am doing, trying to get a good viewpoint before I start grad work. I'll try to give it a more theoretical read-through and worry about specific details later. Thanks for the help so far. Also, what other books would you recommend?

CarlB
Homework Helper

Also, what other books would you recommend?

The funny thing is that while I more or less majored in symmetry in grad school, since then I've drifted to a different assumption about the foundations of elementary particles: geometry. Symmetry is very general; following Noether, any reasonable differential equation has some symmetry (translational and rotational at least). So physics, like a practical engineer, takes data and then looks to see if a symmetry can be fit to it.

But I eventually realized that there are very simple differential equations which have rather complicated symmetries. In fact this is the usual case. An example is Newton's gravitation: the differential equation is much simpler than the equations for the conserved quantities (e.g. energy and angular momentum).

And differential equations have information not present in their symmetries. For example, symmetry will give you the structure of the excitations of hydrogen by orbital angular momentum, but to actually get the energy levels you need to guess the differential equation, that is, the Coulomb interaction.

If it's the case that the underlying differential equation is simpler than the symmetries (and approximate symmetries) it produces, then it might be possible to guess that differential equation. One would then check that the guess produces the observed symmetries and use the guess to evaluate things that are now considered arbitrary constants determined by experiment.

So I decided that an under-researched (and therefore easily published) idea was to try to describe the elementary particles entirely from geometry and quantum mechanics, with no use of symmetry other than the symmetry of space-time (or maybe just space). For example, see my recently accepted paper proposing a relationship between spin-1/2 and the 3 generations. It is quantum field theory, but on a finite space instead of the usual infinite position/momentum space. I think this makes it an easier introduction to QFT:
http://arxiv.org/abs/1006.3114

I started writing a book giving a geometric foundation to the standard model but I quit working on it when I concluded that no one was going to read it until I published the details in peer reviewed journals. So it's kind of dated and incomplete, but it gives a good explanation of what spinors really are, from a geometric point of view, and it is intended to be understood by beginning grad students:
www.brannenworks.com/dmaa.pdf

I should probably add that I didn't finish my PhD. I passed the examinations (at U. Cal., Irvine) but couldn't decide on a thesis and left to do engineering. I just took the general GREs and intend on starting grad school again in fall 2011.