Two good things just happened.
Lonewolf, who is new to groups (background = one course in linear algebra), tackled it and proved it down in the mud,
and then Hurkyl proved it elegantly as a special case of a more general fact that would include the complex skew-Hermitian case, where you take the transpose and then the complex conjugate of the matrix entries.
Cannot restrain a broad grin,
because both the dirty-hands approach and the elegant one are indispensable.
Great.
Originally posted by Hurkyl For A and B skew symmetric matrices:
(AB - BA)^t = (AB)^t - (BA)^t
= B^t A^t - A^t B^t
= (-B)(-A) - (-A)(-B)
= BA - AB
= -(AB - BA)
So the commutator of any two skew symmetric matrices is again skew symmetric.
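As a quick sanity check of that identity, here is a small numpy sketch (numpy and the random example are my own choices, not anything from Hall's text) that builds two random skew-symmetric matrices and confirms their commutator is again skew-symmetric:
Code:
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A = M - M.T            # A^t = -A, so A is skew-symmetric
B = N - N.T
comm = A @ B - B @ A   # the commutator [A, B]
print(np.allclose(comm.T, -comm))   # True: [A, B] is again skew-symmetric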
you know, this thread is turning into a pretty nice lie group/lie algebra thread. there is the differential forms thread. now all we need is for someone to start a representation theory thread, and we'll have all the maths we need to do modern particle physics.
I would absolutely love a rep theory thread -- especially if we could include both the down-n-dirty and the high-level approaches. I'm reasonably competent to talk about Lie groups, but I am lost on representations.
- Warren
#56 lethe
Originally posted by chroot I would absolutely love a rep theory thread -- especially if we could include both the down-n-dirty and the high-level approaches. I'm reasonably competent to talk about Lie groups, but I am lost on representations.
- Warren
i m down for the high level part.
#57 Lonewolf
Sure, I'll have a go at representation theory. Even if I don't understand it all, I'm sure I'll get something out of it.
#58 lethe
Originally posted by Lonewolf Sure, I'll have a go at representation theory. Even if I don't understand it all, I'm sure I'll get something out of it.
lonewolf-
how much maths do you know? i don t think representation theory is all that hard. hang in there, i m sure we can get through it.
#59 Lonewolf
I've covered the basics of group theory, and completed a course in linear algebra to be concluded next academic year. I'm pretty comfortable with the prerequisites you listed in the other thread. I'm willing to learn and I've got four months to fill, so I'm prepared to put some time in.
#60 lethe
Originally posted by Lonewolf I've covered the basics of group theory, and completed a course in linear algebra to be concluded next academic year. I'm pretty comfortable with the prerequisites you listed in the other thread. I'm willing to learn and I've got four months to fill, so I'm prepared to put some time in.
I see several people are interested in group representations
and I'm thinking maybe we can just follow our interests.
I don't remember being part of an online study group and
don't have much idea of what works and what doesn't.
I propose chroot to be our nominal emcee or leader if we need one. But I don't care if we have a leader or are complete anarchy. And if somebody else is leader that is fine too.
Lonewolf defines the prerequisites, as I see it----one course in linear algebra and some time and willingness to work.
Why don't we see if we can get to some target, say, the representations of some classic Lie group.
Maybe we will run out of gas halfway, but anyway we will have a destination.
What say this for a target-----classify the irreducible representations of SU(2). Can we get there from scratch?
Start with basic definitions and try to touch all the essential bases on the way?
I mention it because that target is highly visible. Maybe Hurkyl, Chroot, or Lethe can suggest a more practical goal.
Having some goal will determine for us what things we have to cover, so we won't have to decide anything.
It might not matter what order we do things either.
Lethe for example could probably say right now what all the irred. reps of SU(2) are (up to isomorphism)
oops have to go
#62 lethe
Originally posted by marcus
What say this for a target-----classify the irreducible representations of SU(2). Can we get there from scratch?
Start with basic definitions and try to touch all the essential bases on the way?
I mention it because that target is highly visible. Maybe Hurkyl, Chroot, or Lethe can suggest a more practical goal.
Having some goal will determine for us what things we have to cover, so we won't have to decide anything.
It might not matter what order we do things either.
Lethe for example could probably say right now what all the irred. reps of SU(2) are (up to isomorphism)
a slightly more ambitious goal, that i would like to suggest, is the lorentz group SL(2,C)/Z_2. it includes the rotation group as a subgroup (and thus includes all the concepts of SU(2), which would probably be a very good starting place), but it has a less trivial algebra, it is noncompact, so we can address those issues, and not simply connected, so we can also address those issues.
perhaps this is too ambitious. at any rate, SU(2) is a good starting point, and if that ends up being where we finish too, so be it.
I too think going for the representations of SU(2) and SO(3) would be a good first goal, if only because of the importance of those groups in physics. In any case, that's the first goal I had set myself after I read that LQG primer.
Originally posted by Hurkyl I too think going for the representations of SU(2) and SO(3) would be a good first goal, if only because of the importance of those groups in physics. In any case, that's the first goal I had set myself after I read that LQG primer.
Two online books have been mentioned.
Hurkyl I believe you indicated you were using Brian Hall
("An Elementary Introduction to Groups and Reps.")
That is 128 pages and focuses on matrix groups so it works
with a lot of concrete relevant examples. I really like it.
Earlier I was talking about Marsden's Chapter 9, and Lonewolf
extracted some stuff from that source and posted his notes,
I essentially did likewise with another patch of Marsden.
It would be helpful if we all had one online textbook to focus on.
I now think Brian Hall (your preference) is better adapted to people's interests and that maybe I goofed when I suggested Marsden.
I regret possibly causing people to waste time and printer paper printing off that long Chapter 9. I'm personally glad to have it for reference though, not the end of the world. But Brian Hall on balance seems better.
Let's see what theorems he needs to get the representations of SU(2). I mean---work backwards and figure out a route.
Brian Hall's chapter 3, especially pp 27-37, seems to me to be grand central station.
chapter 3 is "Lie algebras and the exponential mapping"
He shows how to find the *logarithm* of a matrix
and he proves the useful formula
det exp(A) = exp( trace(A) )
and he proves the "Lie product formula"
and I can honestly say to Lonewolf that there is nothing scary here----nothing (that I can see with my admittedly foggy vision) that is fundamentally hard
(except at one point he uses the Jordan canonical form of a matrix----the fact that you can put it in a specially nice upper triangular form---which is a bit tedious to prove, so nobody ever does; they just invoke it. just one small snag or catch which we need not belabor)
It seems to me that to get where we want to go the main "base camp" destination is to show Lonewolf (our only novice and thus the most important person in a curious sense) the logarithm map that gets you from the group up into its tangent space (the algebra)
and the exponential map that gets you back down from the tangent space to the group
these are essentially the facts Brian Hall summarizes in the first 10 pages or so of Chapter 3 and then he gives a whole bunch of nice concrete examples illustrating it----pages 37-39.
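To make the exponential/logarithm pairing concrete, here is a minimal scipy sketch (assuming scipy is available; the example matrix is my own, not one of Hall's): it exponentiates a small matrix, takes the matrix logarithm to recover it, and checks det exp(A) = exp(trace(A)) along the way.
Code:
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0.0, 0.3],
              [-0.3, 0.0]])             # a small matrix playing the role of an algebra element
G = expm(A)                             # exponential map: algebra -> group
print(np.allclose(logm(G), A))          # logarithm map: group -> algebra, recovers A near the identity
print(np.isclose(np.linalg.det(G), np.exp(np.trace(A))))   # det exp(A) = exp(trace(A))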
Have to say, if where we want to go is the representations of SU(2) that we can certainly take a peek at the destination
and it is lovely
just was glancing at Brian Hall's page 71
this being about five pages or so into his chapter 5 "Basic Representation Theory"
So simple!
SU(2) is just a nice set of 2 x 2 matrices of complex numbers! We always knew that, but suddenly he does the obvious thing and uses a matrix U, or (just a slight variation on the idea) its inverse U^{-1}, to STIR UP polynomials in two complex variables!
We have to be talking to an imaginary novice to define the level of explanation and for better or worse Lonewolf is standing in for that novice. I think this polynomial idea will make sense to him!
If you have a polynomial in two variables z_1 and z_2,
then you can, before plugging z_1 and z_2 into the polynomial,
operate on them with a 2 x 2 matrix!
This gives a new polynomial in effect. It is a sweet innocent obvious idea. Why not do this and get new polynomials?
And indeed the polynomials of any given combined degree in two variables are a vector space. So there we already have our group busily working upon some vectorspace and stirring the vectors around.
And to make everything as simple as possible we will consider only homogeneous polynomials of degree m
meaning that in each term the power z_1 is raised to and the power z_2 is raised to---those two powers add up to m.
It is a "uniformity" condition on the polynomial, all its terms have the same combined degree.
this must be the world's easiest way to come up with an action of SU(2) on an m+1 dimensional vectorspace. Must go back to the days of Kaiser Wilhelm.
a basis of our vectorspace V_m can consist of m+1 monomials like
(z_1)^2 (z_2)^{m-2}
the coefficients can be complex numbers, it is a vector space over the complex numbers which may be somewhat less familiar than over the reals but still no big deal.
The official (possibly imaginary) novice may be wondering "what does irreducible mean". Indeed i hope Lonewolf is around and wondering this because we really need someone to explain to.
Well there is a group
and a mapping of the group into the linear operators on a vector space (some method for the group to act on vectors, like this scheme of using matrices to stir up polynomials)
that is called a representation (speaking unrigorously)
and it is irreducible if there is no part of the vectorspace left unstirred.
no subspace of V which is left invariant by the group.
no redundant part of V which doesn't get moved somewhere by at least one element of the group.
if there were an invariant subspace you could factor it out and
so-to-speak "reduce" the representation to a lower dimensional one.
so that's what irreducible means
it looks like these polynomials get pretty thoroughly churned around by preprocessing z1 and z2 with a matrix, but to be quite correct we need to check that they really are and that there is no invariant subspace.
******footnote*****
I think I said this before but just to be fully explicit about the action of the group:
If P(z_1, z_2) is the old polynomial, then the matrix U acts on it to produce a new polynomial by taking U^{-1} and acting on (z_1, z_2) to produce a new pair of complex numbers
(w_1, w_2) = U^{-1} (z_1, z_2)
and then evaluating the polynomial with (w_1, w_2):
P(U^{-1} (z_1, z_2))
*****************
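To see the "stirring" explicitly, here is a small sympy sketch (my own toy example; the particular matrix entries are just one convenient element of SU(2), written in the standard parametrization with rows (a, -conj(b)) and (b, conj(a))), applying a matrix to a degree-2 homogeneous polynomial exactly as in the footnote:
Code:
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
P = z1**2 + 3*z1*z2                    # a homogeneous polynomial of degree m = 2

# one element of SU(2): |a|^2 + |b|^2 = 1
a, b = sp.Rational(3, 5), sp.Rational(4, 5) * sp.I
U = sp.Matrix([[a, -sp.conjugate(b)], [b, sp.conjugate(a)]])

w1, w2 = U.inv() * sp.Matrix([z1, z2])                         # preprocess (z1, z2) with U^{-1}
newP = sp.expand(P.subs({z1: w1, z2: w2}, simultaneous=True))  # evaluate P at the new pair
print(newP)    # again a homogeneous polynomial of degree 2 in z1 and z2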
hope it's not unwise to take a peek at the destination
first before trying to see how to get there
especially hope to get comments from Lethe, Chroot, Hurkyl
on how this should go, which theorems to hit, whether to have an orderly or random progression, whether Brian Hall gives a good focus etc.
Originally posted by lethe a slightly more ambitious goal, that i would like to suggest, is the lorentz group SL(2,C)/Z_2. it includes the rotation group as a subgroup (and thus includes all the concepts of SU(2), which would probably be a very good starting place), but it has a less trivial algebra, it is noncompact, so we can address those issues, and not simply connected, so we can also address those issues.
perhaps this is too ambitious. at any rate, SU(2) is a good starting point, and if that ends up being where we finish too, so be it.
first off, I would love it if you would do a whole bunch of explanation and get us started moving.
I tend to talk too much so I have to shut up and wait.
But I don't want this thread to get cold!
second. I totally agree. SU(2) and SO(3) are good initial targets but if it turns out to be fun to get to them then it would be
great to go on past to Poincare
I am counting (hoping) on you (plural) to explain the exponential map that connects the L.algebra to the L.group, because that seems to be crucial to everything including describing the reps
Hey Lonewolf, is there anything you need explained?
I wish Chroot or Lethe, both of whom could take over,
would take over and move this ahead.
I tend to talk too much and would like to be quiet for a while.
It is a good thread. It should do something.
What are you up to mathwise now that it's summer vacation?
#68 Lonewolf
Please don't slow down the threads on my behalf. I'll be around, just nodding and smiling in the background.
#69 Lonewolf
Explaining? Only the exponential map. I can't seem to see how it relates to what it's supposed to...maybe that gets explained further along in the text than I am, or I'm just missing the point.
Originally posted by Lonewolf Please don't slow down the threads on my behalf. I'll be around, just nodding and smiling in the background.
OK I must have said something wrong and derailed the thread.
I have this fundamental fixed opinion that in any explanation the most important person is the novice, and I cannot imagine having an explanation party about groups or lie algebras or anything else without one person who freely confesses to not knowing the subject.
Then you focus with one eye on the target (the theorems you want to get to) and with one eye on the novice
and you try to get the novice to the target destination
and the novice is also partly imaginary----the real one may get bored and go away in the meanwhile.
but anyway that is how I imagine it. I can't picture doing groups with just Lethe and Chroot because they both already KNOW groups. Chroot is a Stanford tech student almost to his degree. Lethe is also clearly very capable and knowledgeable.
Don't sit in the background nodding, for heaven's sake. ASK these people to explain something to you. Well that is how I picture things and that is my advice. But who knows, it may all work out differently.
this is a great fact:
det( exp A) = exp (trace A)
do you know what det is and what trace is and do you know
what the exponential e^x map is? I sort of assume so.
But if not then ask those guys and make them work it will be good for their mathematical souls.
#71 Lonewolf
Could you elaborate on what you mean by
it is irreducible if there is no part of the vectorspace left unstirred.
Originally posted by Lonewolf Explaining? Only the exponential map. I can't seem to see how it relates to what it's supposed to...maybe that gets explained further along in the text than I am, or I'm just missing the point.
You have had a math course where they said
exp(t) = 1 + t + t^2/2! + ... (you can continue this)
If not you will be hurled from a high cliff.
Suppose instead of 1 one puts the n x n identity matrix
and instead of t one puts some n x n matrix A.
At some time in our history someone had this fiendishly clever idea, put a matrix into the series in place of a number. It will converge and give a matrix.
But here is an easy question for YOU Lonewolf.
What if A is a diagonal matrix with, say, 1/2 all the way down the diagonal,
then what is exp(A)?
Don't be reluctant to ask things. Don't wait for it to be "covered later". Any of us may fail to give a coherent answer but ask.
But now I am asking you: can you calculate that n x n, well to be specific call it 3x3, matrix exp(A)? Can you write it down?
What is the trace of A?
What is the determinant of exp A?
If I am poking at you a little it is because I am in the dark about what you know and don't know.
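(For checking answers afterwards, here is a quick scipy sketch, assuming scipy is on hand; it is not part of the exercise.)
Code:
import numpy as np
from scipy.linalg import expm

A = np.diag([0.5, 0.5, 0.5])
print(expm(A))                                        # a diagonal matrix with e^(1/2) down the diagonal
print(np.trace(A))                                    # 3/2
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))    # both equal e^(3/2)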
We're supposed to think of a Lie Group as a group of transformations with various properties. One of the more interesting properties is that we can form "one-parameter families" that have the property that:
T_0 x = x
T_s T_t x = T_{s+t} x
We can think of the parameter as being the "size" of the transformation. An example will probably make this clear.
Consider R^2, and let T_θ be rotations around the origin through an angle of θ. Then, T_0 is the identity transformation, and T_θ T_φ x = T_{θ+φ} x, so rotations form a one-parameter family when parametrized by the angle of rotation.
Since we have this continuous structure, it's natural to extend the ideas of calculus to Lie Groups. So, what if we consider an infinitesimal transformation T_dt in a one-parameter family?
Let's do an example using rotations in R^2. Applying rotation T_θ can be expressed by premultiplying by the matrix:
Code:
/ cos θ -sin θ \
\ sin θ cos θ /
So what if we plug in an infinitesimal parameter? To first order, cos dθ ≈ 1 and sin dθ ≈ dθ, so we get the identity plus dθ times the matrix written a few lines below.
So the infinitesimal rotations are simply infinitesimal translations. This is true in general; we can make locally linear approximations to transformations just like ordinary real functions, such as:
f(x + dx) = f(x) + f'(x) dx
We call the algebra of infinitesimal transformations a Lie Algebra.
The interesting question is how to go the other way. What if we had the matrix
Code:
/ 0 -1 \
\ 1 0 /
and we wanted to go the other way to discover this is the derivative of a family of transformations?
Well, integration won't work, so let's take a different approach; let's repeatedly apply our linear approximation. If X is our element from the lie algebra, then (1 + t X) is approximately the transformation we seek, T_t. We can improve our approximation by applying the approximation twice, but each time half as long:
(1 + (t/2) X)^2
And in general we can break it up into n legs:
(1 + (t/n) X)^n
So then we might suppose that:
T_t = lim_{n→∞} (1 + (tX/n))^n
And just like in the ordinary case, this limit evaluates to:
T_t = e^{tX}
That's where the exponential map comes from!
You can then verify that the derivative of T_t at t = 0 is indeed X.
To summarize, we exponentiate elements of the Lie Algebra (i.e. apply an infinitesimal transformation an infinite number of times) to yield elements of the Lie Group.
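Here is a small numerical illustration of that limit (a numpy/scipy sketch of my own, not from the post above): with X the rotation generator, the n-leg approximation approaches e^{tX} as n grows.
Code:
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0],
              [1.0,  0.0]])     # the generator of rotations from above
t = 1.0
for n in (10, 100, 1000):
    approx = np.linalg.matrix_power(np.eye(2) + (t / n) * X, n)
    print(n, np.max(np.abs(approx - expm(t * X))))    # the error shrinks as n grows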
my browser draws a blank sometimes and shows boxes so I am
experimenting with typography a bit here. Nice post.
I don't seem able to get the theta to show up inside a "code" area. All I get is a box.
Well that is all right. I can read the box as a theta OK
Strange that theta shows up outside "code" area but not
inside
That is a nice from-first-principles way to introduce the
exponential of matrices.
Can you show
det exp(A) = exp (trace A)
in a similarly down-to-earth way?
I see it easily for diagonal matrices but when I thought about it I had to imagine putting the matrix in a triangular form
Lonewolf your job is to react when people explain something in a way you can understand. stamp feet. make hubbub of some kind
You are about to see an example of a Lie algebra.
Hurkyl is about to show you what the L.A. is that belongs to the group of DET = 1 matrices, for example SL(2, C).
The L.A. for SL(2,C) is written with lowercase as sl(2, C)
The L.G. of matrices with det = 1 is made by the exponential map exp(A) from TRACE ZERO matrices A,
because det exp(A) = exp(trace A) = exp(0) = 1.
So if Hurkyl takes one more step he can characterize the L.A.
of the group of det = 1 matrices.
Actually of any size and over the reals as well as the complexes I think. But just to be specific think of 2x2 matrices.
Lonewolf, do you understand this. Do you like it. I think it is terrific, like sailing on a windy day. L.G. and L.A. are really neat.
Well it is probably 4 AM in the UK, so you cannot answer.
#76 Lonewolf
If not you will be hurled from a high cliff.
I guess you don't have to bother coming over here and finding a high cliff then.
Well that is all right. I can read the box as a theta OK
Strange that theta shows up outside "code" area but not
inside
You're having font issues then. Your default font does indeed have the theta symbol, but the font your browser uses for the code blocks does not have a theta symbol (and replaces it with a box).
This is pretty much when the penny dropped.
Eep! I've never heard that phrase before, is that good or bad?
Can you show
det exp(A) = exp (trace A)
in a similarly down-to-earth way?
Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:
det(I + A dt) = 1 + tr(A) dt
which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.
I think there's a really slick "down-to-earth" proof as well. I know the determinant is a measure of how much a transformation scales hypervolumes. (e.g. if the determinant of a 2x2 matrix near a point is 4, then applying the matrix will multiply the areas of figures near that point by 4) I know there's a nice geometrical interpretation of the trace, but I don't remember what it is.
Originally posted by Hurkyl Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:
det(I + A dt) = 1 + tr(A) dt
which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.
All that shows is that the formula holds to good approximation for matrices with elements that are all much less than one.
One correct proof goes as follows:
For any matrix A, there is always a matrix C such that CAC^{-1} is upper triangular, meaning that all elements below the diagonal vanish. The key properties needed for the proof are that the space of upper triangular matrices is closed under matrix multiplication, and their determinants are the product of the elements on their diagonals. The only other thing we use is the invariance of the trace under cyclic permutations of its arguments, so that Tr(CAC^{-1}) = Tr A. The proof follows trivially.
another proof if you know some topology: diagonalizable matrices are dense in GL(n).
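For the numerically inclined, the identity itself is easy to spot-check on a random matrix whose entries are not small at all (a numpy/scipy sketch offered only as a sanity check, not a proof):
Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))                       # entries of order 1, not "much less than one"
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))    # the two numbers agree up to rounding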
#84 Lonewolf
Eep! I've never heard that phrase before, is that good or bad?
It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.
Originally posted by Lonewolf It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.
I always assumed it was like the coin dropping in a payphone.
Maybe going back to old times when cooking gas was metered
out by coin-operated devices---the penny had to drop for something to turn on.
I have lost track of this thread so much has happened.
Just to review something:
A skew-symmetric means A^t = -A
and a skew-symmetric matrix must be zero down the diagonal
so its trace is clearly zero, and another definition:
B orthogonal means B^t = B^{-1}
Can you prove that if
A is a skew symmetric matrix then exp(A) is orthogonal and
has det = 1?
I assume you can. It characterizes the Lie algebra "so(3)" that goes with the group SO(3). You may have noticed that they use lowercase "what(...)" to stand for the Lie algebra that goes with the Lie group "WHAT(...)"
Excuse if this is a repeat of something I or someone else said earlier.
SO(3) is defined to be the space of all 3x3 real matrices G such that:
G^t = G^{-1}
det G = 1
So what about its corresponding Lie Algebra so(3)? It is the set of all 3x3 matrices A such that exp(A) is in SO(3).
So how do the constraints on SO(3) translate to constraints on so(3)?
The second condition is easy. If A is in so(3), then:
exp(tr A) = det exp(A) = 1
so tr A must be zero. Conversely, for any matrix A with tr A zero, the second condition will be satisfied.
The first one is conceptually just as simple, but technically trickier. Translated into so(3) it requires:
exp(A)^t = exp(A)^{-1}
exp(A^t) = exp(-A)
*** this step to be explained ***
A^t = -A
Therefore if A is in so(3) then A must be skew symmetric. And conversely, it is easy to go the other way to see that any skew symmetric matrix A satisfies the first condition.
Therefore, so(3) is precisely the set of 3x3 traceless skew symmetric matrices.
I skipped over a technical detail in the short proof above. If exponents are real numbers then the marked step is easy to justify by taking the logarithm of both sides... however logarithms are only so nice when we're working with real numbers! I left that step in my reasoning because you need it when working backwards.
The way to prove it going forwards is to consider:
exp(s A^t) = exp(-s A)
If A is in so(3), then this must be true for every s, because so(3) forms a real vector space. Now, we differentiate with respect to s to yield:
(A^t) exp(s A^t) = (-A) exp(-s A)
Which again must be true for all s. Now, plug in s = 0 to yield:
A^t = -A
This trick is a handy replacement for taking logarithms!
Anyways, we've proven now that so(3) is precisely all 3x3 real traceless skew symmetric matrices. In fact, we can drop "traceless" because real skew symmetric matrices must be traceless.
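A quick numerical check of this on a random element (a numpy/scipy sketch, not from the derivation above): a skew-symmetric matrix exponentiates to an orthogonal matrix of determinant 1.
Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
A = M - M.T                                   # a random 3x3 skew-symmetric matrix
R = expm(A)
print(np.allclose(R.T @ R, np.eye(3)))        # R^t R = I, so R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))      # det R = 1, so R lies in SO(3)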
For matrix algebras we usually define the lie bracket as being the commutator:
[A, B] = AB - BA
I will now do something interesting (to me, anyways); I will prove that so(3) is isomorphic (as a Lie Algebra) to R^3 where the lie bracket is the vector cross product!
The first thing to do is find a (vector space) basis for so(3) over R. The most general 3x3 skew symmetric matrix is:
Code:
/ 0 a -b \
| -a 0 c |
\ b -c 0 /
Where a, b, and c are any real numbers. This leads to a natural choice of basis:
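One natural choice (my reconstruction, obtained by setting a, b, and c equal to 1 in turn; it is consistent with the bracket relations below) is:
Code:
A = /  0  1  0 \     B = /  0  0 -1 \     C = /  0  0  0 \
    | -1  0  0 |         |  0  0  0 |         |  0  0  1 |
    \  0  0  0 /         \  1  0  0 /         \  0 -1  0 /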
As an exercise for the reader, you can compute that:
AB - BA = C
BC - CB = A
CA - AC = B
So now I propose the following isomorphism φ from so(3) to R^3:
φ(A) = i
φ(B) = j
φ(C) = k
And this, of course, extends by linearity:
φ(aA + bB + cC) = ai + bj + ck
So now let's verify that this is actually an isomorphism:
First, the vector space structure is preserved; φ is a linear map, and it takes a basis of the three dimensional real vector space so(3) onto a basis of the three dimensional real vector space R^3, so φ must be a vector space isomorphism.
The only remaining thing to consider is whether φ preserves lie brackets. We can do so by considering the action on all pairs of basis elements (since the lie bracket is bilinear)
φ([A, A]) = φ(AA - AA) = φ(0) = 0 = i * i = [i, i] = [φ(A), φ(A)]
(and similarly for [B, B] and [C, C])
φ([A, B]) = φ(AB - BA) = φ(C) = k = i * j = [i, j] = [φ(A), φ(B)]
(and similarly for other mixed pairs)
So we have verified that so(3) and (R^3, *) are isomorphic as Lie Algebras! If we so desired, we could then choose (R^3, *) as the Lie Algebra associated with SO(3), and define the exponential map in terms of it.
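Here is a short numpy sketch (my own check, using the basis reconstructed above; the helper names phi and bracket are just for illustration) that verifies the bracket-to-cross-product correspondence on random elements:
Code:
import numpy as np

# the basis chosen above, satisfying AB-BA = C, BC-CB = A, CA-AC = B
A = np.array([[0., 1, 0], [-1, 0, 0], [0, 0, 0]])
B = np.array([[0., 0, -1], [0, 0, 0], [1, 0, 0]])
C = np.array([[0., 0, 0], [0, 0, 1], [0, -1, 0]])

def phi(X):
    # read (a, b, c) off the general skew-symmetric form and send aA + bB + cC to ai + bj + ck
    return np.array([X[0, 1], -X[0, 2], X[1, 2]])

def bracket(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(3)
x, y = rng.standard_normal(3), rng.standard_normal(3)
X = x[0]*A + x[1]*B + x[2]*C
Y = y[0]*A + y[1]*B + y[2]*C
print(np.allclose(phi(bracket(X, Y)), np.cross(phi(X), phi(Y))))   # True: phi([X, Y]) = phi(X) x phi(Y)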
I'm not entirely sure where to go from here, though, since I'm learning it with the rest of you! (so if any of you have things to post, or suggestions on which way we should be studying, feel free to say something! ) But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.
Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as su(2), sl(n; C), so(3, 1)... so I assign that as a homework problem for you guys to do in this thread!
#90 Lonewolf
I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory
More talking with him indicates he may have been simplifying quite a bit when he brought up Maxwell EM. I'll let someone else explain what "gauge theory" means in general; I'm presuming I'll understand the ramifications after I work through the EM exercise, but I haven't done that yet.
Just to help motivate the thread, I'll find su(n).
Lie algebra of U(n)
First, as a reminder, we know that U(n) is the unitary group of n x n matrices. You should program the word 'unitary' into your head so it reminds you of these conditions:
1) Multiplication by unitary matrices preserves the complex inner product: <Ax, Ay> = <x, y> = Σ_i x_i* y_i, where A is any member of U(n), x and y are any complex vectors, and * connotes complex conjugation.
2) A* = A^{-1}
3) A* A = I
4) |det A| = 1
Now, to find u(n), the Lie algebra of the Lie group U(n), I'm going to follow Brian Hall's work on page 43 of http://arxiv.org/math-ph/0005032
Recall that we can represent any[1] member of a matrix Lie group G by an exponentiation of a member of its Lie algebra g. In other words, for all U in U(n), there is a u in u(n) such that:
exp(tu) = U
where exp is the exponential mapping defined above. Thus exp(tu) is a member of U(n) when u is a member of u(n), and t is any real number.
Now, given that U* = U^{-1} for members of U(n), we can assert that
(exp(tu))* = (exp(tu))^{-1}
Both sides of this equation can be simplified. The left side's conjugation operator can be shown to "fall through" the exponential, and the left side is equivalent to exp(tu*). Similarly, the inverse on the right side falls through, and the right side is equivalent to exp(-tu). (Exercise: it's easy and educational to show that the * and ^{-1} work this way.) We thus have a simple relation:
exp(tu*) = exp(-tu)
As Hall says, if you differentiate this expression with respect to t at t=0, you immediately arrive at the conclusion that
u* = -u
Matrices which have this quality are called "anti-Hermitian." (the "anti" comes from the minus sign.) The set of n x n matrices {u} such that u* = -u is the Lie algebra of U(n).
Now how about su(n)?
Lie algebra of SU(n)
SU(n) is a subgroup of U(n) such that all its members have determinant 1. How does this affect the Lie algebra su(n)?
We only need to invoke one fact, which has been proven above. The fact is:
det(exp(X)) = exp(trace(X))
If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member.
In this case, we know that all of the members of SU(n) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!
You can probably see now how su(n) must be. Like u(n), su(n) is the set of n x n anti-Hermitian matrices -- but with one additional stipulation: members of su(n) are also traceless.
[1] You can't represent all group members this way in some groups, as has been pointed out -- but it's true for all the groups studied here.
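As a numerical companion to the derivation above (a numpy/scipy sketch of my own; the random matrix is just an example), one can check that exponentiating a traceless anti-Hermitian matrix lands in SU(n):
Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = M - M.conj().T                         # anti-Hermitian: u* = -u, so u is in u(3)
u = u - (np.trace(u) / 3) * np.eye(3)      # subtract the trace part: now u is in su(3)

U = expm(u)
print(np.allclose(U.conj().T @ U, np.eye(3)))   # U* U = I, so U is unitary
print(np.isclose(np.linalg.det(U), 1.0))        # det U = 1, so U is in SU(3)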
- Warren
edit: A few very amateurish mistakes. Thanks, lethe, for your help.
The weather's been pretty hot and chroot's derivation of su(n) is really neat and clear so I'm thinking I will just be shamelessly lazy and quote Warren with modifications to get sl(n, C).
I see that he goes along with Brian Hall and others in using lower case to stand for the Lie Algebra of a group written in upper case. So su(n) is the L.A. that belongs to SU(n).
In accord with that notation, sl(n,C) is the L.A. that goes with the group SL(n,C), which is just the n x n complex matrices with det = 1. Unless I am overlooking something, all I have to do is just a trivial change in what Warren already did:
Originally posted by chroot, with minor change for SL(n, C)
Lie algebra of SL(n, C)
SL(n, C) is a subgroup of GL(n, C) such that all its members have determinant 1. How does this affect the Lie algebra sl(n, C)?
We only need to invoke one fact, which has been proven above. The fact is:
det(exp(X)) = exp(trace(X))
If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member.
In this case, we know that all of the members of SL(n, C) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!
...sl(n, C) is the set of n x n complex matrices but with one additional stipulation: members of sl(n, C) are...traceless.
That didn't seem like any work at all. Even in this heat wave.
Hurkyl said to give the L.A. of SO(3,1) so maybe I should do that so as not to look like a slacker. Really like the clarity of both Hurkyl's and Chroot's styles.
I guess Lethe must have raised the "topologically connected" issue. For a rough and ready treatment, I feel like glossing over manifolds and that but it is nice to picture how the det = 0 "surface" slices the GL group into two chunks...
Because "det = 0" matrices, being non-invertible, are not in the group!
...so that only those with det > 0 are in the "connected component of the identity". The one-dimensional subgroups generated by elements of the L.A. are like curves radiating from the identity and they cannot leap the "det = 0" chasm and reach the negative determinant chunk.
Now that I think of it, Lethe is here and he might step in and do SO(3,1) before I attend to it!
Hurkyl has a notion of where to go. I want to follow the hints taking shape here:
***********
...But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.
Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as SU(2), SL(n; C), SO(3, 1)... so I assign that as a homework problem for you guys to do in this thread!
***********
the suggestion is----discuss SO(3,1) and so(3,1). Then back to Hurkyl for an idea about the next step. Let's go with that.
Originally posted by chroot, changed to be about SO(3,1)
Lie algebra of SO(3,1)
SO(3,1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains 4x4 real matrices that preserve a special "metric" dx^2 + dy^2 + dz^2 - dt^2
to keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.
This "metric" is great because light-like vectors have norm zero. So the definition that a matrix in this group takes any vector to one of the same norm means that light-like stays light-like!
All observers, even those in relative motion, agree about what is light-like---the world line of something going that speed. (Another way of saying the grandfather axiom of SR that all agree about the speed of light.)
the (3,1) indicates the 3 plus signs followed by the 1 minus sign in the "metric".
So we implement the grand old axiom of SR by having this special INNER PRODUCT* in our 4D vector
1) Multiplication by SO(3,1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = Σ*_i x_i y_i, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.
2) This asterisk notation is a bit clumsy and what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1).
g looks like the 4 x 4 identity except for one minus sign
BTW notice that g^{-1} = g
and also that g^t = g
and he expresses the condition 1) by saying
A^t g A = g
[[[[to think about...express <x,y> as x^t g y
express <Ax, Ay> as x^t A^t g A y]]]]
3) Then he manipulates 2) to give
g^{-1} A^t g = A^{-1}
...multiply both sides of 2) on the left by g^{-1}
to give
g^{-1} A^t g A = I
then multiply both sides on the right by A^{-1}...
4) then---ahhhh! the exponential map at last----he writes 3) using a matrix A = expX, and solves for a condition on X
g^{-1} A^t g = g^{-1} exp(X^t) g = exp(g^{-1} X^t g) = exp(-X) = A^{-1}
the only way this will happen is if X satisfies the condition
g^{-1} X^t g = -X
it is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. also the condition is the same as
g X^t g = -X
because g is equal to its inverse.
Better post this and proofread later.
BTW multiplying by g on right and left like that does not change the trace, so as an additional check: taking the trace of both sides of g^{-1} X^t g = -X gives trace X = -trace X, so trace X = 0, consistent with det exp(X) = 1.
Not sure how relevant this is to where the thread is going, but I didn’t want people to think I’d given up on it.
The Heisenberg Group
The set of all 3x3 upper triangular matrices with ones on the diagonal, together with matrix multiplication, forms a group known as the Heisenberg Group, which will be denoted H. The matrices A in H are of the form
Code:
(1 a b)
(0 1 c)
(0 0 1)
where a,b,c are real numbers.
If A is in the form above, the inverse of A can be computed directly to be
Code:
(1 -a ac-b)
(0 1 -c )
(0 0 1 )
H is thus a subgroup of GL(3;R)
The limit of a convergent sequence of matrices of the form of A is again of the form of A, so H is closed. (This bit wasn't as clear to me as the text indicated. Can someone help?)
The Lie Algebra of the Heisenberg Group
Consider a matrix X such that X is of the form
Code:
(0 d e)
(0 0 f)
(0 0 0)
then exp(X) is a member of H.
If W is any matrix such that exp(tW) is of the form of matrix A for all t, then all of the entries of W = d(exp(tW))/dt at t = 0 which are on or below the diagonal must be 0, so W is of the form X.
Apologies for the possible lack of clarity. I kinda rushed it.
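A small scipy check of the exponential for this group (my own sketch; the particular numbers are arbitrary): a strictly upper triangular X is nilpotent, so the series for exp(X) terminates and visibly lands back in H.
Code:
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 2, 5],
              [0., 0, 3],
              [0., 0, 0]])           # strictly upper triangular: the form of the Lie algebra

E = np.eye(3) + X + (X @ X) / 2      # X^3 = 0, so the exponential series stops here
print(np.allclose(E, expm(X)))       # True
print(E)                             # ones on the diagonal, zeros below: an element of H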
I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!
Originally posted by Hurkyl I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!
Hey Warren, any ideas?
Maybe we should hunker down and wait till
Hurkyl gets back because he seemed to give the
thread some direction. But on the other hand
we don't want to depend on his initiative to the
point that it is a burden! What should we do?
I am thinking about the Lorentz group, or that thing SO(3,1)
I discussed briefly a few days ago.
Lonewolf is our only audience. (in part a fiction, but one must
imagine some listener or reader)
Maybe we should show him explicit forms of matrices implementing the Lorentz
and Poincare groups.
It could be messy but on the other hand these are so
basic to special relativity. Do we not owe it to ourselves
to investigate them?
Any particular interests or thoughts about what to do?
If we were Trekkies we might call it "the Spock algebra of the Klingon group" or if we were on a firstname basis with Sophus Lie and Hendrik Lorentz we would be talking about
"the Sophus algebra of the Hendrik group"
such solemn name droppers... Can't avoid it.
Anyway I just did some scribbling and here it is. Pick any 6 numbers a,b,c,d,e, f
This is a generic matrix in the Lie algebra of SO(3;1):
Code:
0 a b c
-a 0 d e
-b -d 0 f
c e f 0
what I did was take a line from preceding post (also copied below)
g^{-1} X^t g = -X
remember that g is a special diagonal matrix diag(1,1,1,-1)
and multiply on both sides by g to get
X^t g = -g X
that says that X transpose with its rightmost column negged
equals -1 times the original X with its bottom row negged.
This should be really easy to see so I want to make it that way.
Is this enough explanation for our reader? Probably it is.
But if not, let's look at the original X with its bottom row negged
Code:
0 a b c
-a 0 d e
-b -d 0 f
-c -e -f 0
And let's look at the transpose with its rightmost column negged
Code:
0 -a -b -c
a 0 -d -e
b d 0 -f
c e f 0
And just inspect to see if the first is -1 times the second.
It does seem to be the case.
Multiplying by g on left or right does things either to the
bottom row or the rightmost column, I should have said at the beginning---and otherwise doesn't change the matrix.
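Here is a quick numpy/scipy check of all this on a random element (my own sketch, using the generic form above):
Code:
import numpy as np
from scipy.linalg import expm

g = np.diag([1., 1., 1., -1.])
a, b, c, d, e, f = np.random.default_rng(5).standard_normal(6)
X = np.array([[ 0,  a,  b, c],
              [-a,  0,  d, e],
              [-b, -d,  0, f],
              [ c,  e,  f, 0]])

print(np.allclose(X.T @ g, -g @ X))        # the defining condition X^t g = -g X
A = expm(X)
print(np.allclose(A.T @ g @ A, g))         # exp(X) preserves the metric: A^t g A = g
print(np.isclose(np.linalg.det(A), 1.0))   # and has determinant 1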
Ahah! I see that what I have just done is a homework problem in Brian Hall's book. It is exercise #7 on page 51, "write out explicitly the general form of a 4x4 real matrix in so(3;1)".
Originally a chroot post but changed to be about SO(3;1)
Lie algebra of SO(3;1)
SO(3;1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains 4x4 real matrices that preserve a special "metric" dx^2 + dy^2 + dz^2 - dt^2
to keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.
1) Multiplication by SO(3;1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = Σ*_i x_i y_i, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.
2) This asterisk notation is a bit clumsy and what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1).
g looks like the 4 x 4 identity except for one minus sign
BTW notice that g^{-1} = g
and also that g^t = g
and he expresses the condition 1) by saying
A^t g A = g
3) Then he manipulates 2) to give
g^{-1} A^t g = A^{-1}
4) then---ahhhh! the exponential map at last----he writes 3) using a matrix A = expX, and solves for a condition on X
g^{-1} A^t g = g^{-1} exp(X^t) g = exp(g^{-1} X^t g) = exp(-X) = A^{-1}
the only way this will happen is if X satisfies the condition
g^{-1} X^t g = -X
it is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. also the condition is the same as g X^t g = -X, because g is equal to its inverse.
I've been trying to devise a good way to introduce differential manifolds...
(by that I mean that I hate the definition to which I was introduced and I was looking for something that made more intuitive sense!)
I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing, so I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".
Originally posted by Hurkyl I've been trying to devise a good way to introduce differential manifolds...
(by that I mean that I hate the definition to which I was introduced and I was looking for something that made more intuitive sense!)
I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing, so I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".
I like Marsden's chapter 4 very much
"Manifolds, Vector Fields, and Differential Forms"
pp 121-145 in his book----25 pages
His chapter 9 covers Lie groups and algebras, not too
differently from the Brian Hall text we have been using.
So Marsden is describing only the essentials.
I will get the link so you can see if you like it.
Lonewolf and I started reading Marsden's chapter 9 before
we realized Brian Hall was even better. So at least two of us
have some acquaintance with the Marsden book.
We could just ask if anybody had any questions about
Marsden chapter 4----those 25 pages----and if not simply
move on.
On the other hand if you have thought up a better way
to present differential geometry and want listeners, go for it!
Here is the url for Marsden.