## [SOLVED] Group Theory For Dummies

SO(3) is defined to be the group of all 3x3 real matrices G such that:

G^t = G^-1
det G = 1

So what about its corresponding Lie algebra so(3)? It is the set of all 3x3 real matrices A such that exp(tA) is in SO(3) for every real number t.

So how do the constraints on SO(3) translate to constraints on so(3)?

The second condition is easy. If A is in so(3), then:

exp(tr A) = det exp(A) = 1

so tr A must be zero. Conversely, for any matrix A with tr A zero, the second condition will be satisfied.
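Just to sanity-check this identity numerically — a quick sketch of my own in Python with NumPy and SciPy's `expm` (my choice of tools, not part of the argument):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Build a random traceless 3x3 real matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A -= (np.trace(A) / 3) * np.eye(3)  # subtract tr(A)/3 from the diagonal

# det(exp(A)) should equal exp(tr(A)) = exp(0) = 1.
print(np.trace(A))             # ~0 up to rounding
print(np.linalg.det(expm(A)))  # ~1
```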

The first one is conceptually just as simple, but technically trickier. Translated into so(3) it requires:

exp(A)^t = exp(A)^-1
exp(A^t) = exp(-A)
*** this step to be explained ***
A^t = -A

Therefore if A is in so(3) then A must be skew symmetric. And conversely, it is easy to go the other way to see that any skew symmetric matrix A satisfies the first condition.

Therefore, so(3) is precisely the set of 3x3 traceless skew symmetric matrices.
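The converse direction is also easy to check numerically: exponentiating a skew symmetric matrix really does land in SO(3). A small sketch (again my own, using NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

# A skew symmetric matrix: A^t = -A.
A = np.array([[ 0.,  2., -1.],
              [-2.,  0.,  3.],
              [ 1., -3.,  0.]])

G = expm(A)
print(np.allclose(G.T, np.linalg.inv(G)))  # G^t = G^-1, so G is orthogonal
print(np.isclose(np.linalg.det(G), 1.0))   # det G = 1, so G is in SO(3)
```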

I skipped over a technical detail in the short proof above. If these were ordinary real-number exponentials, the marked step would be easy to justify by taking the logarithm of both sides... however, logarithms are nowhere near as nice when we're working with matrices!!! I left that step in my reasoning because you need it when working backwards.

The way to prove it going forwards is to consider:

exp(s A^t) = exp(-s A)

If A is in so(3), then this must be true for every s, because so(3) forms a real vector space. Now, we differentiate with respect to s to yield:

(A^t) exp(s A^t) = (-A) exp(-s A)

Which again must be true for all s. Now, plug in s = 0 to yield:

A^t = -A

This trick is a handy replacement for taking logarithms!!!

Anyways, we've proven now that so(3) is precisely all 3x3 real traceless skew symmetric matrices. In fact, we can drop "traceless" because real skew symmetric matrices must be traceless.

For matrix algebras we usually define the Lie bracket to be the commutator:

[A, B] = AB - BA

I will now do something interesting (to me, anyways); I will prove that so(3) is isomorphic (as a Lie algebra) to R^3 where the Lie bracket is the vector cross product!

The first thing to do is find a (vector space) basis for so(3) over R. The most general 3x3 skew symmetric matrix is:

Code:
/  0  a -b \
| -a  0  c |
\  b -c  0 /
Where a, b, and c are any real numbers. This leads to a natural choice of basis:

Code:
    /  0  0  0 \
A = |  0  0 -1 |
    \  0  1  0 /

    /  0  0  1 \
B = |  0  0  0 |
    \ -1  0  0 /

    /  0 -1  0 \
C = |  1  0  0 |
    \  0  0  0 /
As an exercise for the reader, you can compute that:
AB - BA = C
BC - CB = A
CA - AC = B
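If you'd rather let a computer do the exercise, here is a quick check of all three relations (a sketch of mine in NumPy; the `bracket` helper is my own name, not from the thread):

```python
import numpy as np

# The basis of so(3) given above.
A = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
B = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
C = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def bracket(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

print(np.array_equal(bracket(A, B), C))  # True
print(np.array_equal(bracket(B, C), A))  # True
print(np.array_equal(bracket(C, A), B))  # True
```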

So now I propose the following isomorphism φ from so(3) to R^3:

φ(A) = i
φ(B) = j
φ(C) = k

And this, of course, extends by linearity:

φ(aA + bB + cC) = ai + bj + ck

So now let's verify that this is actually an isomorphism:

First, the vector space structure is preserved; φ is a linear map, and it takes a basis of the three-dimensional real vector space so(3) onto a basis of the three-dimensional real vector space R^3, so φ must be a vector space isomorphism.

The only remaining thing to consider is whether φ preserves Lie brackets. We can do so by considering the action on all pairs of basis elements (since the Lie bracket is bilinear):

φ([A, A]) = φ(AA - AA) = φ(0) = 0 = i * i = [i, i] = [φ(A), φ(A)]
(and similarly for [B, B] and [C, C])
φ([A, B]) = φ(AB - BA) = φ(C) = k = i * j = [i, j] = [φ(A), φ(B)]
(and similarly for the other mixed pairs)
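Checking on basis pairs suffices by bilinearity, but it's also reassuring to test bracket-preservation on arbitrary elements against the cross product. A sketch of mine — the helpers `phi` and `phi_inv` are my own names for the isomorphism and its inverse:

```python
import numpy as np

A = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
B = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
C = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def phi_inv(v):
    """Map (a, b, c) in R^3 to aA + bB + cC in so(3)."""
    return v[0] * A + v[1] * B + v[2] * C

def phi(M):
    """Read the coefficients (a, b, c) back off the matrix aA + bB + cC."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

x, y = np.array([1.0, -2.0, 0.5]), np.array([0.3, 4.0, -1.0])
X, Y = phi_inv(x), phi_inv(y)
lhs = phi(X @ Y - Y @ X)      # phi([X, Y])
rhs = np.cross(x, y)          # phi(X) * phi(Y), the cross product
print(np.allclose(lhs, rhs))  # True
```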

So we have verified that so(3) and (R^3, *) are isomorphic as Lie algebras! If we so desired, we could then choose (R^3, *) as the Lie algebra associated with SO(3), and define the exponential map as:

exp(v) = exp(φ^-1(v))

So, for example:

exp(tk) = rotation of t radians in the x-y plane
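This last claim can be verified directly: since φ^-1(k) = C, exponentiating tC should produce a rotation matrix. A quick numeric sketch (NumPy/SciPy again, my choice):

```python
import numpy as np
from scipy.linalg import expm

# C = phi^-1(k), the generator of rotations in the x-y plane.
C = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

t = 0.7
R = expm(t * C)
expected = np.array([[np.cos(t), -np.sin(t), 0.],
                     [np.sin(t),  np.cos(t), 0.],
                     [0.,         0.,        1.]])
print(np.allclose(R, expected))  # True: rotation by t radians in the x-y plane
```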
Hurkyl: Great post! - Warren

 Originally posted by chroot Hurkyl: Great post! - Warren
I agree!
Bah, there are no blushing emoticons!! Thanks guys!

I'm not entirely sure where to go from here, though, since I'm learning it with the rest of you! (So if any of you have things to post, or suggestions on which way we should be studying, feel free to say something! [:)])

But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellian Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.

Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as su(2), sl(n; C), so(3, 1)... so I assign that as a homework problem for you guys to do in this thread! [:)]

 I think we are almost at the point where we can derive Maxwellian Electrodynamics as a U(1) gauge theory
As a what now?
More talking with him indicates he may have been simplifying quite a bit when he brought up Maxwell EM. I'll let someone else explain what "gauge theory" means in general; I'm presuming I'll understand the ramifications after I work through the EM exercise, but I haven't done that yet. [:)]
Just to help motivate the thread, I'll find su(n).

[size=large]Lie algebra of U(n)[/size]

First, as a reminder, we know that U(n) is the unitary group of n x n matrices. You should program the word 'unitary' into your head so it reminds you of these conditions:

1) Multiplication by unitary matrices preserves the complex inner product: <Ax, Ay> = <x, y> = [sum]i xi* yi, where A is any member of U(n), x and y are any complex vectors, and * connotes complex conjugation.
2) A* = A^-1
3) A* A = I
4) |det A| = 1

Now, to find u(n), the Lie algebra of the Lie group U(n), I'm going to follow Brian Hall's work on page 43 of http://arxiv.org/math-ph/0005032

Recall that we can represent any[1] member of a matrix Lie group G by an exponentiation of a member of its Lie algebra g. In other words, for all U in U(n), there is a u in u(n) such that:

exp(tu) = U

where exp is the exponential mapping defined above. Thus exp(tu) is a member of U(n) when u is a member of u(n) and t is any real number.

Now, given that U* = U^-1 for any member of U(n), we can assert that

(exp(tu))* = (exp(tu))^-1

Both sides of this equation can be simplified. The conjugation operator on the left side can be shown to "fall through" the exponential, so the left side is equivalent to exp(tu*). Similarly, the -1 on the right side falls through, and the right side is equivalent to exp(-tu). (Exercise: it's easy and educational to show that the * and -1 work this way.) We thus have a simple relation:

exp(tu*) = exp(-tu)

As Hall says, if you differentiate this expression with respect to t at t = 0, you immediately arrive at the conclusion that

u* = -u

Matrices with this property are called "anti-Hermitian" (the "anti" comes from the minus sign). The set of n x n matrices {u} such that u* = -u is the Lie algebra of U(n). Now how about su(n)?

[size=large]Lie algebra of SU(n)[/size]

SU(n) is a subgroup of U(n) such that all its members have determinant 1.
How does this affect the Lie algebra su(n)? We only need to invoke one fact, which has been proven above. The fact is:

det(exp(X)) = exp(trace(X))

If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member.

In this case, we know that all of the members of SU(n) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!

You can probably see now how su(n) must look. Like u(n), su(n) is the set of n x n anti-Hermitian matrices -- but with one additional stipulation: members of su(n) are also traceless.

[1] You can't represent all group members this way in some groups, as has been pointed out -- but it's true for all the groups studied here.

- Warren

edit: A few very amateurish mistakes. Thanks, lethe, for your help.
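Both conditions are easy to test numerically; here's a sketch of my own for n = 2 (NumPy/SciPy are my assumptions, not chroot's):

```python
import numpy as np
from scipy.linalg import expm

# Build a random 2x2 anti-Hermitian, traceless matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
u = M - M.conj().T                  # anti-Hermitian: u* = -u
u -= (np.trace(u) / 2) * np.eye(2)  # traceless (the trace was purely imaginary)

U = expm(u)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitary
print(np.isclose(np.linalg.det(U), 1.0))       # det 1, so U is in SU(2)
```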

The weather's been pretty hot and chroot's derivation of su(n) is really neat and clear so I'm thinking I will just be shamelessly lazy and quote Warren with modifications to get sl(n, C).

I see that he goes along with Brian Hall and others in using lower case to stand for the Lie Algebra of a group written in upper case. So su(n) is the L.A. that belongs to SU(n).

In accord with that notation, sl(n,C) is the L.A. that goes with the group SL(n,C), which is just the n x n complex matrices with det = 1. Unless I am overlooking something, all I have to do is just a trivial change in what Warren already did:

 Originally posted by chroot, with minor change for SL(n, C)

[size=large]Lie algebra of SL(n, C)[/size]

SL(n, C) is a subgroup of GL(n, C) such that all its members have determinant 1. How does this affect the Lie algebra sl(n, C)? We only need to invoke one fact, which has been proven above. The fact is:

det(exp(X)) = exp(trace(X))

If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member. In this case, we know that all of the members of SL(n, C) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!

...sl(n, C) is the set of n x n complex matrices but with one additional stipulation: members of sl(n, C) are...traceless.
That didn't seem like any work at all. Even in this heat wave.
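And the same kind of numerical spot-check works here too — a sketch of my own (NumPy/SciPy, my choice of tools): a random traceless complex matrix exponentiates to a matrix of determinant 1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X -= (np.trace(X) / 3) * np.eye(3)  # force trace(X) = 0

print(np.isclose(np.trace(X), 0.0))             # True
print(np.isclose(np.linalg.det(expm(X)), 1.0))  # True: det exp(X) = exp(tr X) = 1
```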
Hurkyl said to give the L.A. of SO(3,1), so maybe I should do that so as not to look like a slacker. I really like the clarity of both Hurkyl's and chroot's styles.

I guess Lethe must have raised the "topologically connected" issue. For a rough and ready treatment, I feel like glossing over manifolds and all that, but it is nice to picture how the det = 0 "surface" slices the GL group into two chunks...

Because "det = 0" matrices, being non-invertible, are not in the group!

...so that only those with det > 0 are in the "connected component of the identity". The one-dimensional subgroups generated by elements of the L.A. are like curves radiating from the identity and they cannot leap the "det = 0" chasm and reach the negative determinant chunk.

Now that I think of it, Lethe is here and he might step in and do SO(3,1) before I attend to it!

Hurkyl has a notion of where to go. I want to follow the hints taking shape here:
***********
....But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellian Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.

Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as SU(2), SL(n; C), SO(3, 1)... so I assign that as a homework problem for you guys to do in this thread!
***********
The suggestion is----discuss SO(3,1) and so(3,1). Then back to Hurkyl for an idea about the next step. Let's go with that.

 Originally posted by chroot, changed to be about SO(3,1)

[size=large]Lie algebra of SO(3,1)[/size]

SO(3,1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains the 4x4 real matrices that preserve a special "metric"

dx^2 + dy^2 + dz^2 - dt^2

To keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.

This "metric" is great because light-like vectors have norm zero. So the definition that a matrix in this group takes any vector to one of the same norm means that light-like stays light-like! All observers, even those in relative motion, agree about what is light-like---the world line of something going that speed. (Another way of saying the grandfather axiom of SR that all agree about the speed of light.)

The (3,1) indicates the 3 plus signs followed by the 1 minus sign in the "metric". So we implement the grand old axiom of SR by having this special INNER PRODUCT* on our 4D vectors:

1) Multiplication by SO(3,1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = [sum]*i xi yi, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.

2) This asterisk notation is a bit clumsy, so what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1). g looks like the 4 x 4 identity except for one minus sign. BTW notice that g^-1 = g and also that g^t = g. He expresses condition 1) by saying

A^t g A = g

[[[[to think about..... express <x, y> as x^t g y, and <Ax, Ay> as x^t A^t g A y]]]]

3) Then he manipulates 2) to give

g^-1 A^t g = A^-1

.....multiply both sides of 2) on the left by g^-1 to give g^-1 A^t g A = I, then multiply both sides on the right by A^-1.....

4) then---ahhhh!
the exponential map at last----he writes 3) using a matrix A = exp X, and solves for a condition on X:

g^-1 A^t g = g^-1 exp(X^t) g = exp(g^-1 X^t g) = exp(-X) = A^-1

The only way this will happen is if X satisfies the condition

g^-1 X^t g = -X

It is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. Also the condition is the same as

g X^t g = -X

because g is equal to its inverse.

Better post this and proofread later.
BTW multiplying by g on right and left like that does not change trace, so as an additional check

trace(X) = trace(g X^t g) = trace(-X) = -trace(X)

showing that trace(X) = 0

so now we know what matrices comprise so(3,1)

they are the ones that satisfy

g X^t g = -X
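Here's a small numeric sketch of mine (NumPy/SciPy, my own choice) checking both the algebra condition and that the exponential really lands in the group — using a boost generator, which satisfies the condition without being skew symmetric:

```python
import numpy as np
from scipy.linalg import expm

g = np.diag([1., 1., 1., -1.])

# A boost generator along x: symmetric, yet it satisfies g X^t g = -X.
X = np.zeros((4, 4))
X[0, 3] = X[3, 0] = 1.0

print(np.allclose(g @ X.T @ g, -X))  # the so(3,1) condition

L = expm(0.5 * X)                    # a Lorentz boost with rapidity 0.5
print(np.allclose(L.T @ g @ L, g))   # exp(X) preserves the metric: L^t g L = g
```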
Not sure how relevant this is to where the thread is going, but I didn't want people to think I'd given up on it.

The Heisenberg Group

The set of all upper triangular 3x3 matrices with 1's on the diagonal, together with matrix multiplication, forms a group known as the Heisenberg group, which will be denoted H. The matrices A in H are of the form

Code:
(1 a b)
(0 1 c)
(0 0 1)
where a, b, c are real numbers. If A is in the form above, the inverse of A can be computed directly to be

Code:
(1 -a ac-b)
(0 1 -c  )
(0 0 1   )
H is thus a subgroup of GL(3;R). The limit of any convergent sequence of matrices of the form of A is again of that form, so H is a matrix Lie group. (This bit wasn't as clear to me as the text indicated. Can someone help?)

The Lie Algebra of the Heisenberg Group

Consider a matrix X of the form

Code:
(0 d e)
(0 0 f)
(0 0 0)
then exp(X) is a member of H. If W is any matrix such that exp(tW) is of the form of matrix A, then all of the entries of W = d(exp(tW))/dt at t = 0 which are on or below the diagonal must be 0, so W is of the form X.

Apologies for the possible lack of clarity. I kinda rushed it.
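The closed-form inverse and the exponential claim above can both be spot-checked numerically. A sketch of my own (NumPy/SciPy, my choice — not part of the original post):

```python
import numpy as np
from scipy.linalg import expm

# The closed-form inverse of a Heisenberg matrix.
a, b, c = 2.0, 5.0, 3.0
A = np.array([[1., a, b],
              [0., 1., c],
              [0., 0., 1.]])
A_inv = np.array([[1., -a, a * c - b],
                  [0.,  1., -c],
                  [0.,  0., 1.]])
print(np.allclose(A @ A_inv, np.eye(3)))  # True: the formula checks out

# exp of a strictly upper triangular matrix lands back in H.
X = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])
E = expm(X)
print(np.allclose(np.diag(E), np.ones(3)))  # 1's on the diagonal
print(np.allclose(np.tril(E, -1), 0.0))     # zeros below the diagonal
```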
I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!

 Originally posted by Hurkyl I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!
Hey Warren, any ideas?

Maybe we should hunker down and wait till Hurkyl gets back, because he seemed to give the thread some direction. But on the other hand we don't want to depend on his initiative to the point that it is a burden! What should we do?

I am thinking about the Lorentz group, or that thing SO(3,1) I discussed briefly a few days ago. Lonewolf is our only audience (in part a fiction, but one must imagine some listener or reader). Maybe we should show him explicit forms of matrices implementing the Lorentz and Poincare groups.

It could be messy, but on the other hand these are so basic to special relativity. Do we not owe it to ourselves to investigate them?

Any particular interests or thoughts about what to do?

Recognitions:
Gold Member
If we were Trekkies we might call it "the Spock algebra of the Klingon group", or if we were on a first-name basis with Sophus Lie and Hendrik Lorentz we would be talking about "the Sophus algebra of the Hendrik group". Such solemn name droppers... Can't avoid it.

Anyway, I just did some scribbling and here it is. Pick any 6 numbers a, b, c, d, e, f.
This is a generic matrix in the Lie algebra of SO(3;1):

Code:
 0  a  b  c
-a  0  d  e
-b -d  0  f
 c  e  f  0
What I did was take a line from the preceding post (also copied below):

g^-1 X^t g = -X

remember that g is a special diagonal matrix diag(1,1,1,-1)

and multiply on both sides by g to get

X^t g = -g X

That says that X transpose with its rightmost column negated equals -1 times the original X with its bottom row negated.

This should be really easy to see so I want to make it that way.
Is this enough explanation for our reader? Probably it is.

But if not, let's look at the original X with its bottom row negated:

Code:
 0  a  b  c
-a  0  d  e
-b -d  0  f
-c -e -f  0
And let's look at the transpose with its rightmost column negated:

Code:
0  -a  -b  -c
a   0  -d  -e
b   d   0  -f
c   e   f   0
And just inspect to see if the first is -1 times the second.
It does seem to be the case.
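Rather than inspecting by eye, we can also let NumPy confirm it for any choice of the six numbers (a sketch of mine, not from the thread):

```python
import numpy as np

g = np.diag([1., 1., 1., -1.])

# The generic so(3,1) matrix from above, with arbitrary a..f.
a, b, c, d, e, f = 0.3, -1.2, 0.5, 2.0, -0.7, 1.1
X = np.array([[ 0.,  a,  b,  c],
              [-a,  0.,  d,  e],
              [-b, -d,  0.,  f],
              [ c,  e,  f,  0.]])

# X^t g (rightmost column negated) equals -g X (bottom row negated, times -1).
print(np.allclose(X.T @ g, -g @ X))  # True
```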

Multiplying by g on the left or the right negates either the bottom row or the rightmost column (I should have said that at the beginning) and otherwise doesn't change the matrix.

Aha! I see that what I have just done is a homework problem in Brian Hall's book. It is exercise #7 on page 51: "write out explicitly the general form of a 4x4 real matrix in so(3;1)".

 Originally a chroot post but changed to be about SO(3;1)

[size=large]Lie algebra of SO(3;1)[/size]

SO(3;1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains the 4x4 real matrices that preserve a special "metric"

dx^2 + dy^2 + dz^2 - dt^2

To keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.

1) Multiplication by SO(3;1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = [sum]*i xi yi, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.

2) This asterisk notation is a bit clumsy, so what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1). g looks like the 4 x 4 identity except for one minus sign. BTW notice that g^-1 = g and also that g^t = g. He expresses condition 1) by saying

A^t g A = g

3) Then he manipulates 2) to give

g^-1 A^t g = A^-1

4) then---ahhhh! the exponential map at last----he writes 3) using a matrix A = exp X, and solves for a condition on X:

g^-1 A^t g = g^-1 exp(X^t) g = exp(g^-1 X^t g) = exp(-X) = A^-1

The only way this will happen is if X satisfies the condition

g^-1 X^t g = -X

It is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. Also the condition is the same as

g X^t g = -X
I've been trying to devise a good way to introduce differential manifolds... (by that I mean that I hate the definition to which I was introduced, and I was looking for something that makes more intuitive sense!) I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing. I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".

 Originally posted by Hurkyl I've been trying to devise a good way to introduce differential manifolds... (by that I mean that I hate the definition to which I was introduced and I was looking for something that made more intuitive sense!) I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing, I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".
I like Marsden's chapter 4 very much, "Manifolds, Vector Fields, and Differential Forms", pp 121-145 in his book----25 pages. His chapter 9 covers Lie groups and algebras, not too differently from the Brian Hall text we have been using. So Marsden is describing only the essentials. I will get the link so you can see if you like it.

Lonewolf and I started reading Marsden's chapter 9 before we realized Brian Hall was even better. So at least two of us have some acquaintance with the Marsden book.

We could just ask if anybody has any questions about Marsden chapter 4----those 25 pages----and if not, simply move on.

On the other hand, if you have thought up a better way to present differential geometry and want listeners, go for it! Here is the URL for Marsden:

http://www.cds.caltech.edu/~marsden/bib_src/ms/Book/
H., I had another look at Marsden. His chapter 9 is too hard and the book as a whole is too hard. It is a graduate textbook. But maybe his short chapter 4 on manifolds, vector fields and differential forms is not too hard---a short basic summary. It seems OK to me. If you agree then perhaps this is a solution: we don't have to give the definitions because they are all summarized for us.

We should proceed only where it will give us pleasure, and at our own pace, being under no obligation to anyone. If Lonewolf is still around we can provide whatever explanations he asks for so he can keep up with the party. If we decide it is time to stop, we will stop (substantial ground has already been covered). I shall be happy with whatever you decide.

I am interested to know if there are any matrix group, Lie group, Lie algebra, or representation theory topics that you would like to hit---e.g. sections or chapters of Brian Hall (or propose some other online text). I am currently struggling to understand a little about spin foams but can find no direct connection there to this thread. Baez has an introductory paper gr-qc/9905087
I've been thinking more about my idea of trying to derive Maxwell's equations from the geometry of M4 x U(1) (M4 = Minkowski space)... The way the idea was presented to me, I got the impression it would be an interesting application of Lie groups requiring just a minimal amount of differential geometry... but as I've been mulling over what we'd have to do to get there, I'm thinking it might actually be an interesting application of differential geometry requiring just a minimal amount of Lie groups. [:(]

So basically, I don't know where to go from here! The way I usually like to learn is to delve a little bit into a subject, then figure out a (possibly almost trivial) concrete example of how the subject can be used to describe "real world" things, and then continue studying deeper into the subject. The problem is I just don't know what "real world" thing we can get to early on. I guess the solution is to just delve deeper into the math before looking back at the real world.