Group Theory Basics: Where Can I Learn More?

In summary, Group Theory is a branch of mathematics that studies the properties of groups and their operations. It has applications in many fields, such as physics, chemistry, and computer science. To learn more about Group Theory, one can refer to textbooks, online courses, and research papers. Additionally, universities or institutes may offer specialized courses on Group Theory. It is also beneficial to attend seminars, conferences, or workshops to gain a deeper understanding of this subject. Ultimately, practice and problem-solving are crucial to mastering Group Theory.
  • #141
Sorry about the lack of input, I'm busier than I expected I would be at work. I'm switching to a part time position in two weeks, so I'll be able to input more then. I need to work on this more than I have time for at the moment as a lot of it is completely new to me. I'll work on (2) of theorem 3.18 tonight, and I'll post if I get anywhere.
 
  • #142
Φ^([X,Y]) = [Φ^(X),Φ^(Y)]

Using the fact that

[X,Y]=d(exp(tX)*Y*exp(-tX))/dt at t=0

we can define

Φ^([X,Y])=Φ^(d(exp(tX)*Y*exp(-tX))/dt) at t=0 = d(Φ^(exp(tX)*Y*exp(-tX)))/dt at t=0

since a derivative commutes with a linear transformation.

From (1) of theorem 3.18 we thus obtain

Φ^([X,Y]) = d(Φ(exp(tX))*Φ^(Y)*Φ(exp(-tX)))/dt at t=0 = d(exp(tΦ^(X))*Φ^(Y)*exp(-tΦ^(X)))/dt at t=0 = [Φ^(X),Φ^(Y)]

by our definition of [X,Y]

Hurkyl was right, this was a messy one. Sorry!
 
  • #143
Hello LW, no apologies! great you are still on hand despite summer job etc. I believe if OK with you I will edit your post to remove asterisks used as multiplication signs. We are over-using the asterisk round about now---Hurkyl and I have been using it to denote a map lifted up from the manifold or basic group level to the tangent space level. And you are using caret ^ for that! So although caret is a good thing to use, asterisk is not a good thing to use for multiplication. Tempest in a teapot and really no confusion, but I will edit your post to conform and hope you are not vexed by my taking the liberty:

Originally posted by Lonewolf
Φ^([X,Y]) = [Φ^(X),Φ^(Y)]

Using the fact that

[X,Y]=d(exp(tX)Yexp(-tX))/dt at t=0

<<<<fact easy to prove by product rule of differential calculus, I will prove it soon unless someone else does>>>>

we can define

Φ^([X,Y])=Φ^(d(exp(tX)Yexp(-tX))/dt) at t=0 = d(Φ^(exp(tX)Yexp(-tX)))/dt at t=0

since a derivative commutes with a linear transformation.

From (1) of theorem 3.18 we thus obtain

Φ^([X,Y]) = d(Φ(exp(tX))Φ^(Y)Φ(exp(-tX)))/dt at t=0 = d(exp(tΦ^(X))Φ^(Y)exp(-tΦ^(X)))/dt at t=0 = [Φ^(X), Φ^(Y)]

by our definition of [X,Y]
...
 
  • #144
By slogging thru Hall's notation and variants of that we may
hope to eventually see how to deal elegantly with this whole business, anyway here suddenly the ordinary freshman calculus product rule appears like a lighthouse in a fog

Lonewolf says:<<Using the fact that

[X,Y]=d(exp(tX)Yexp(-tX))/dt at t=0 ...>>>

So group (exp(tX)Y) as your function f
and make exp(-tX) your function g
and calculate d/dt of fg
(fg)' = f'g + f g'


And f', evaluated at t = 0, turns out to be XY (because multiplying by Y on the right is just a linear thing that doesn't disturb differentiation)

And g, evaluated at t = 0 is just 1!

So f'g is just equal to XY


and how about f g' ?

Well f evaluated at t = 0 is just Y (because exp(0 X) is the identity.)

And g' equals (- X)

So f g' equals - YX

so (fg)' does in fact turn out to be XY - YX = [X, Y]
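This product-rule argument is easy to check numerically. Here is a quick sketch (my own, assuming NumPy; `expm` is a plain Taylor-series matrix exponential rather than a library routine) comparing a central difference of exp(tX) Y exp(-tX) at t = 0 against XY - YX:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via a plain Taylor series (fine for small matrices)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

# central difference of t -> exp(tX) Y exp(-tX) at t = 0
h = 1e-5
deriv = (expm(h * X) @ Y @ expm(-h * X)
         - expm(-h * X) @ Y @ expm(h * X)) / (2 * h)

bracket = X @ Y - Y @ X          # [X, Y] = XY - YX
print(np.max(np.abs(deriv - bracket)))   # tiny: the two agree
```

The agreement to roughly h^2 precision is exactly the f'g + fg' computation above, done in floating point.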

Its good time and weather for a barbecue right now so I am
taking off for a friend's house, be back later.
 
  • #145
I will edit your post to conform and hope you are not vexed by my taking the liberty:

Not at all. I just finished typing the post when I realized I'd unwittingly used an asterisk for both multiplication and the map, so ^ seemed the best candidate for a replacement.
 
  • #146
Originally posted by Lonewolf
Not at all. I just finished typing the post when I realized I'd unwittingly used an asterisk for both multiplication and the map, so ^ seemed the best candidate for a replacement.

Do you see anything that needs clearing up (with the theorem we just discussed or related matters) or shall we see where we can go from here?

there should be stuff we can derive from this "theorem 3.18" of Hall

it has a nice wide-gauge feel to it------saying that group morphisms lift up to tangentspace level and turn into algebra morphisms (linear maps that preserve bracket)

maybe I am not saying this altogether clearly but it seems as if we ought to be able to get some mileage out of the work we have done so far

but also, since we have jumped into this in a very rough and somewhat improvisational way we may have left gaps where you would like more discussion. if so please say
 
  • #147
What books are you guys using?
 
  • #148
"An Elementary Introduction to Groups and Representations" by Brian C. Hall, @ arXiv:math-ph/0005032 v1


A coworker recommended I check out "A Comprehensive Introduction to Differential Geometry" volumes I and II by Mike Spivak for the geometric side of the topic (which I have done).
 
  • #149
two questions

i hope no one has asked them:
1. what is quantum group theory and what does it deal with?
2. what are the differences between simple group theory and quantum group theory?
 
  • #150
Take a look at

http://www.maths.qmw.ac.uk/~majid/bkintro.html
 
  • #151
Originally posted by Lonewolf
Take a look at

http://www.maths.qmw.ac.uk/~majid/bkintro.html

Hello Lonewolf and LoopQG,
I remember looking at some of majid's pages and getting the
impression that he was pushing his book, understandably, and not revealing very much of the subject matter. I may have missed something but I came away dissatisfied.

There is an australian account
http://www-texdev.mpce.mq.edu.au/Quantum/Quantum/Quantum.html
I cannot recommend it, except to say that it tries to be a regular online book about quantum groups. It is not selling anything, but is giving it away free.

I am not recommending that anyone try to learn quantum groups either--since it seems arcane: up in the realms of Category Theory and Hopf Algebras.

But there is a nagging fascination about the subject. There is this parameter "q"; when it is very close to zero, the quantum group is almost indistinguishable from a group. And one hears things, like:

In cosmology there is an extremely small number, 1.3 x 10^-123, which is the cosmological constant
(a gossamer-fine energy density throughout all space) expressed in natural units.
In one of his papers John Baez suggested that if you take q = the cosmological constant and use a quantum group tweaked by q instead of a usual group then something works that wouldn't if you used the usual group.
Tantalizing idea, that something in nature might deviate from being a straightforward symmetry group by only one part in 10^123.

I hate to be a name-dropper but quantum groups come up in the context of Chern-Simons quantum field theory. Just another straw in the wind.


On another topic altogether, sometimes people say "quantum group theory" to mean simply ordinary Lie Groups etc. applied to quantum physics! That is "quantum group theory" is just the group theory that one employs in quantum mechanics and the like. These then are true groups---good solid law-abiding citizens of group-dom, just doing their job and helping physics out.

But what the folk in High Abstract Algebra call a "quantum group"
is a different kettle of fish. Those babies don't even have a group inverse---instead they have something that is almost but not quite an inverse called an "antipode". Make sure you still have your wristwatch after you shake hands with one of them.
 
  • #152
Make sure you still have your wristwatch after you shake hands with one of them.

Not to mention your internal organs...

Where do you recommend the thread should go from here? Are we at a stage where we can start applying some of what we've covered, or not?
 
  • #153
Originally posted by Lonewolf
Not to mention your internal organs...

that had me laughing out loud, unfortunately it does seem
to have an element of truth---"quantum groups" proper does
seem a mathematically quite advanced subject.

Originally posted by Lonewolf
Where do you recommend the thread should go from here? Are we at a stage where we can start applying some of what we've covered, or not?

I defer to Hurkyl. If his job allows him time to think of a possible sortie we could make from here, and he wants to initiate it, then it will happen.

Or, as you have done in the past, you could try asking a specific question...
 
  • #154
That's part of my worry too, this is a point in a subject where I like to start applying ideas to some simple problems, but I don't know what to do!

I think we can explain what a spinor is, though, at this point, and thus better understand the idea of spin. (I don't know if y'all know this backwards and forwards yet, but I've not seen it rigorously presented) I need a break from the differential geometry aspect anyways, so I'll figure this out. :smile:

Edit: we might need representation theory first for spinors too. :frown:
 
  • #155
Originally posted by Hurkyl
...and thus better understand the idea of spin. (I don't know if y'all know this backwards and forwards yet, but I've not seen it rigorously presented)...:smile:

great
take it for granted we don't (know it b.&f. yet) and that we want to

go for it!

page 71 of Hall says why there is an irrep of SU(2)
for every nonnegative integer m
(on a space of dimension m+1)

only a minor amount of sweat and we have how idea of
spin comes out of irred. reps.

(physicists always divide that number m by 2 and
catalog the irreps in half-integers but allee-samee integers)

sounds good to me
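Hall's statement that each m gives an irrep on an (m+1)-dimensional space can be made concrete with the standard ladder-operator construction from physics (my own sketch, assuming NumPy; not Hall's notation): for spin j = m/2, build (m+1)×(m+1) matrices Jz, J+, J- and check the su(2) commutation relations.

```python
import numpy as np

def spin_matrices(m):
    """The (m+1)-dimensional spin-j = m/2 matrices of su(2), via ladder operators."""
    j = m / 2.0
    dim = m + 1
    mz = j - np.arange(dim)                  # Jz eigenvalues: j, j-1, ..., -j
    Jz = np.diag(mz)
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):
        mm = mz[k]                           # J+ raises |j, mm> to |j, mm+1>
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - mm * (mm + 1))
    Jm = Jp.T
    return Jz, Jp, Jm

# check the su(2) relations [Jz, J+-] = +-J+-, [J+, J-] = 2 Jz for a few m
for m in (1, 2, 3):
    Jz, Jp, Jm = spin_matrices(m)
    assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)
    assert np.allclose(Jz @ Jm - Jm @ Jz, -Jm)
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz)
```

The half-integer catalog the physicists use is visible in `mz`: for odd m the Jz eigenvalues are half-integers.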
 
  • #156
Ok, brief interlude back to differential geometry!

Recall that we were interested in proving that [g, h] = (ad g)(h) satisfied the definition of a Lie bracket.

It finally struck me that I wasn't giving enough emphasis to the group structure of a lie group, and I was trying to be too abstract in the geometrical aspect and wasn't using the calculus we all know and love on R^n!


So let's see how we synthesize these two familiar concepts!


The defining characteristic of a differential manifold is that it is locally diffeomorphic to R^n. Let us select for our Lie Group G a neighborhood U of the identity element E and a diffeomorphism φ mapping U to R^n. Since the group operations are continuous, if we focus our attention on points near the identity, we can keep all of our manipulations within U, and thus in the domain of φ. Also, I will insist that φ(E) = 0.

Now, how do we export the group structure from G to R^n? By taking the axioms of a group and exporting them via φ! In particular, I will define the two operations:

f(x, y) = φ(φ^-1(x) * φ^-1(y))
g(x) = φ(φ^-1(x)^-1)

f(x, y) is the R^n interpretation of multiplication, and g(x) is the R^n interpretation of inversion.

We can import the group structure by encoding associativity and identity

f(x, f(y, z)) = f(f(x, y), z)
f(x, gx) = f(gx, x) = 0
f(x, 0) = f(0, x) = x

And now we have moved everything into R^n and can proceed with what we learned from our (advanced) calculus texts!
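This export construction can be sanity-checked on a toy example of my own devising: take G = the positive reals under multiplication, U = all of G, and φ = log (so φ(E) = φ(1) = 0). Then f and g come out explicitly as f(x, y) = x + y and g(x) = -x, and the imported axioms hold:

```python
import math

phi = math.log
phi_inv = math.exp

def f(x, y):                   # exported multiplication: log(e^x * e^y) = x + y
    return phi(phi_inv(x) * phi_inv(y))

def g(x):                      # exported inversion: log(1 / e^x) = -x
    return phi(1.0 / phi_inv(x))

# the imported group axioms, checked at sample points
for x in (-1.0, 0.3, 2.0):
    for y in (-0.5, 1.7):
        assert math.isclose(f(x, f(y, 0.9)), f(f(x, y), 0.9))   # associativity
    assert math.isclose(f(x, g(x)), 0.0, abs_tol=1e-12)         # inverse
    assert math.isclose(f(g(x), x), 0.0, abs_tol=1e-12)
    assert math.isclose(f(x, 0.0), x)                           # identity
    assert math.isclose(f(0.0, x), x)
```

Of course this group is abelian, so its bracket will vanish; it only illustrates the chart bookkeeping, not the interesting part of the proof below.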


Before I proceed, I will have to introduce the notation I will use; I've found in the past that generalizing this notation for scalars for use with vectors has been extraordinarily useful.

f1(a, b) is the differential of f(x, y) at (a, b) where we are holding y constant.


Before I proceed with the proof, first some preliminary results:

f(x, 0) = x : differentiate WRT x
f1(x, 0) dx = dx
f1(x, 0) = I (I for matrix identity)

similarly, f2(0, x) = I

If we differentiate WRT x again, we get:

f11(x, 0) dx = 0 = f22(0, x) dx
f11(x, 0) = 0 = f22(0, x)

(note: I use rank 3 tensors in this proof, such as these second partials, and they worry me because, while I think I know how they behave, I've never used them in this type of proof! Really all I need is that they are 3-dimensional arrays of numbers)

Also, we need to know dg(0):

f(x, gx) = 0 : differentiate WRT x
f1(x, gx) dx + f2(x, gx) dg(x) dx = 0
f1(0, 0) + f2(0, 0) dg(0) = 0
I + I dg(0) = 0
dg(0) = -I
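The conclusion dg(0) = -I can be seen concretely in the matrix case, where inversion near the identity is literally x -> x^-1 (a numerical sketch of my own, assuming NumPy, and glossing over the chart): the directional derivative of matrix inversion at I in a direction A comes out to -A.

```python
import numpy as np

inv = np.linalg.inv
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
I = np.eye(3)

# directional derivative of the inversion map at the identity, direction A
h = 1e-6
dg_at_I = (inv(I + h * A) - inv(I - h * A)) / (2 * h)

print(np.max(np.abs(dg_at_I + A)))   # tiny: the derivative acts as -I
```

This matches the series (I + hA)^-1 = I - hA + O(h^2).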


Now, recall how (ad x)(y) was defined: We started with (Ad G)(H) = GHG^-1, then we "differentiated" WRT H to get a function (Ad G)(h) acting on the tangent space, and then we differentiate again WRT G to get the function (ad g)(h). Now that we live in R^n, we can actually carry out this operation!

start with

f(f(x, y), gx)

holding x constant, we take the differential @ y = 0, yielding

f1(f(x, 0), gx) f2(x, 0) dy
= f1(x, gx) f2(x, 0) dy

now take the differential @ x = 0, yielding

(f11(0, g0) dx + f12(0, g0) dg(0) dx) f2(0, 0) dy + f1(0, g0) (f21(0, 0) dx) dy

Using the formulae we derived above and the associativity of tensor product:

-f12(0, 0) dx dy + f21(0, 0) dx dy

And using the anticommutativity of differential forms:

f21(0, 0) [dx, dy]

So we see that in R^n-land, (ad g)(h) is simply f21(0, 0) times the commutator of the corresponding differential forms, so it is clear (ad g)(h) satisfies the axioms of a lie bracket!


(I feel like I've skipped over a few too many details, and probably a few opportunities for profound observations, but I just can't see where...)
 
  • #157
Grr, I found one of my mistakes! Differential forms are cotangent vectors, not tangent vectors. :frown:

This disturbs me; I've always pictured differential forms as infinitesimal displacements, and that's what tangent vectors are supposed to represent...

Anyways, I'm eager to get onto representations, have you been preparing to post something Marcus, or should I work on that?
 
  • #158
Originally posted by Hurkyl


Anyways, I'm eager to get onto representations, have you been preparing to post something Marcus, or should I work on that?

I haven't been. And I have been hoping you would start the ball rolling. I am ever ready to try my hand at whatever lemmas checks and homeworks you propose. This has been rather fun so far, so I hope you continue.

(however always remember we are free to drop it anytime for any reason---it's not as if we signed a contract! :wink: )
 
  • #159
I feel comfortable with the notion of tangent vectors, but I haven't got any references that I can find for cotangent vectors. Anyone care to explain, please?
 
  • #160
Originally posted by Lonewolf
I feel comfortable with the notion of tangent vectors, but I haven't got any references that I can find for cotangent vectors. Any one care to explain, please?

this is so important that we can do with several explanations, so I will offer one. But I hope to hear Hurkyls account of the same business.

there is a terribly fundamental and easy thing in math called the dual
of any vectorspace
WHENEVER you have any kind of vectorspace at all
(any set that satisfies those few obvious criteria that vectorspaces have to meet, mainly that there is a sensible way to add two of them and one or two other things like that)
whenever you have ANY vectorspace

then you can define another vectorspace called its dual which is just the linear functions defined on the first one
or, as one says with a certain panache, the linear "functionALs".
If it is a real vector space then a linear functional is just any real-valued function defined on the mother that happens to
be linear

f(x + y) = f(x) + f(y), and all that

I have to go, a friend just telephoned. But anyway I think the
"cotangent" space is just some specialized jargon for the dual of the tangent space------and the jargon is going to snowball: in a couple of seconds we are going to call it the space of "1-forms" too. It is actually exciting because of a bunch of geometrical meanings that emerge and what mathematicians do when they get excited is make up more and more names to call things. You can hear a rising hum of jargon and you know it is going to be a good party.
 
  • #161
The simplest example of dual vectors comes from our old friend R^n

Ordinary vectors are considered to be column vectors.
Dual vectors are row vectors.

For example, if I take the gradient of the scalar function f(x), I get a row vector. If I then postmultiply the gradient by an ordinary vector (with ordinary matrix arithmetic), the result is a real number (the directional derivative).


As marcus said, dual vectors are all real-valued linear functions on vectors. Similarly, mod the appropriate isomorphism, vectors are real-valued linear functions on dual vectors.


However, because R^n has the Euclidean metric, we can convert freely between vectors and dual vectors, so the difference between the two is often underemphasized or even ignored because we have a nice isomorphism between the two (the transpose map). We even have the audacity to use the transpose map to allow us to write bilinear functions as matrices!


Differential one-forms are dual vectors to tangent vectors (thus we call them cotangent vectors); to put it simply, they tell you how to convert the tangent vector to a curve into a number... for instance, in the (x, y) plane, dx means take the x-coordinate of the tangent vector.
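To make the dx example concrete in coordinates: in the (x, y) plane, dx is the row vector [1, 0], the differential of a scalar function is the row vector of its partials, and row-times-column gives the directional derivative. A small sketch (the function, point, and tangent vector are my own made-up example; assuming NumPy):

```python
import numpy as np

def fn(p):                       # an example scalar function on the plane
    x, y = p
    return x**2 + 3.0 * y

def d_fn(p):                     # its differential: a ROW vector of partials
    x, y = p
    return np.array([[2.0 * x, 3.0]])

p = np.array([1.0, 2.0])
v = np.array([[0.5], [-1.0]])    # a tangent (column) vector at p

# the one-form dx just reads off the x-component of a tangent vector
dx = np.array([[1.0, 0.0]])
assert (dx @ v).item() == v[0].item()

# row vector times column vector = directional derivative of fn along v
pairing = (d_fn(p) @ v).item()   # 2*1*0.5 + 3*(-1) = -2.0
h = 1e-6
numeric = (fn(p + h * v.ravel()) - fn(p - h * v.ravel())) / (2 * h)
assert abs(pairing - numeric) < 1e-6
```

The pairing is coordinate bookkeeping only; no metric was used, which is the point of keeping rows and columns distinct.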
 
  • #162
Originally posted by Hurkyl

Differential one-forms are dual vectors to tangent vectors (thus we call them cotangent vectors); to put it simply, they tell you how to convert the tangent vector to a curve into a number... for instance, in the (x, y) plane, dx means take the x-coordinate of the tangent vector.

At this point we could move this part of the conversation over to the "differential forms" thread that Lethe initiated if we wanted, and have two conversations:

one about differential geometry (at basic intro level suitable to our novice)
and one about matrix groups and reps and the like.
we are blessed, after all, with two sticky threads


the one thing I have a problem with in the diff form thread is that Lethe uses codes for symbols which my browser sees as boxes.
When I read a page by Lethe I see a lot of boxes.

I don't want to update my browser because of being a technophobe stick-in-the-mud. I only change habits and software gradually and I am not ready to make a big (for me) change just for one person's posts.

So I would suggest using the symbols that Greg posted and just using capital Lambda for wedge like & Lambda ; makes Λ
and sigma wedge tau is written σΛτ

but if you dont, and I see boxes, then I will just cope with it somehow---no big deal.
 
  • #163
You can have two browsers installed on your computer, you know.

Though I think you just need to update your fonts.
 
  • #164
why so much fuss about the dual

I put myself in Lonewolf's shoes and I think
well the idea of dual of some vectorspace-----the space of linear functionals on the space----is extremely simple almost idiotically
simple

the puzzling thing is why make such a fuss

novices sometimes have this problem----they understand the idea but are baffled why mathematicians get so excited about it


there actually are reasons


and it is the same in the special case of the tangent space and ITS dual (the so-called cotangent space). Like, why even bother?

but there really are reasons, not only is there nice mathematics that grows out of these things but more urgently a whole bunch of physical concepts ARE linear functionals (and jacked up cousins called multilinear functionals) on the tangent space

Pretty much any physical quantity that has a "per" in its name.

flux is flow per patch of area (which two tangent vectors describe)

charge density is charge per element of volume (which three tangent vectors describe)

wavenumber?---change of phase or number of cycles associated with an infinitesimal move in some direction (which a tangent vector describes)

maybe the magnetic field? it is a linear response to a moving charge---perhaps all these examples are wrong but I believe that correct physical examples would be easy to get.

So suppose you want to be free to take physical ideas over onto manifolds---to go places that don't already have an established Euclidean metric like R^3. Then you don't always have the easy equivalence between row vectors and column vectors. You have to keep track of what is a tangent vector and what is a function OF tangent vectors.

Lethe may have already given motivation for differential forms in the other thread, I haven't read it all and don't remember. But anyway linear functions of various kinds built on the tangent space are good for physics and handy to have around.

Subscripts and superscripts make some people break out in hives, but Lethe I seem to recall, was trying to use hypoallergenic notation that avoided the worst infestations of these notational crablice.
 
  • #165
I have been trying to avoid use of the idea of coordinate charts in this thread for the very same reason. :smile:


Anyways, on to what a representation is!



To make a representation of a group G is to find some vector space V and define a group action for G on V such that the action is an (invertible) linear transformation. (As Hall puts it, it's a homomorphism from G into GL(V))

The fact we are working with matrix lie groups somewhat obscures the profoundness of this idea; after all a matrix lie group is, by definition, a group acting linearly on a vector space!


IMHO it pays now to think about lie groups in general for a moment. How can we get a general lie group to act linearly on a vector space?

Well, we've already found a way for a lie group to act on its lie algebra (which is a vector space); the adjoint mapping (Ad G)... however this is far too unambitious!

What about the tangent vector fields over a lie group? We know how to act on those by left multiplication! Specifically, if v(x) is a tangent vector field, then:

(Gv)(x) = v(Gx)

this is clearly a linear action, so the lie group action on its tangent vector fields is a representation, and this one is pretty interesting (by interesting I mean that it is more complex than the obvious case)! (Is it clear that the dimension of the space of all tangent vector fields is infinite?)


But this vector space is a little "too big"; this representation is reducible, meaning that there is a nontrivial subspace of the vector space that is mapped to itself by every element of G; in particular, the left invariant vector fields we constructed earlier. However, that's no matter; that's a finite dimensional subspace and if we mod it out, what's left is still interesting!


So we see that all lie groups have interesting representations, but do they have any useful ones?


Well, allow me to construct one! We know that Maxwell's equations are all spherically symmetric, correct? Any rotation of a solution is another solution. So we know that elements of SO(3) act on the solutions to Maxwell's equations. However, rotations are linear; so we have found that the solutions to Maxwell's equations form a representation of SO(3)!

IIRC this last idea is the original reason Lie Groups were invented! If we can find the symmetry group of a set of differential equations, we know that the solutions to those equations must form a representation of the symmetry group! (is it obvious the symmetries of a DiffEq act linearly? or am I missing something?)
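On the linearity worry: a symmetry acts on fields by (R·f)(p) = f(R^-1 p), and that recipe is linear in f no matter what R is, which is what makes solution spaces into representations. A toy check of my own, with scalar fields on R^3 and a rotation about the z-axis (assuming NumPy):

```python
import numpy as np

theta = 0.7                                  # a rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def act(R, f):
    """Group action on scalar fields: (R.f)(p) = f(R^-1 p)."""
    Rinv = R.T                               # for rotations, inverse = transpose
    return lambda p: f(Rinv @ p)

f = lambda p: p[0] ** 2 + p[1]               # two arbitrary fields
g = lambda p: np.sin(p[2]) + p[0] * p[1]
a, b = 2.0, -3.0

# linearity in the field: R.(a f + b g) = a (R.f) + b (R.g), checked pointwise
afg = act(R, lambda p: a * f(p) + b * g(p))
rhs = lambda p: a * act(R, f)(p) + b * act(R, g)(p)
for p in (np.array([1.0, 0.0, 2.0]), np.array([-0.5, 1.5, 0.3])):
    assert np.isclose(afg(p), rhs(p))
```

The linearity comes for free because the group moves the *argument* of the field, never the field values themselves.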


Some rote definitions:

An invariant subspace of a representation of a group G acting on V is a subspace S of V such that GS = S for all G in G.

An invariant subspace S of a vector space V is called trivial if it is all of V or if it is simply the zero vector. It is non-trivial otherwise.

A representation is real if the underlying vector space is a real vector space. Similarly for complex.

A representation is faithful if Gx = Hx for all x implies G = H.

If G acts on two vector spaces V and W, then a morphism between representations is a morphism (linear transformation) from V to W that commutes with group action. That is, φ(Gx) = Gφ(x)

A morphism is an isomorphism if it's invertible.


A unitary representation of a group G is one where the vector space is a Hilbert space, the group actions are unitary actions, and strong continuity holds; if the sequence A_n converges to A when viewed as elements of a lie group, then A_n converges to A when viewed as unitary operators.

(According to Hall, examples of something that violates strong continuity are difficult to come by)


All of the above holds for representations of a lie algebra as well (except for the unitary representation), except that the lie algebra action doesn't have to be invertible. (it's mapped into gl(V))


Phew, that's a lot to digest, any questions?
 
  • #166
Originally posted by Hurkyl


Phew, that's a lot to digest, any questions?

It all seems fine, no questions. Looking forward to whatever comes next.

Originally posted by Hurkyl

All of the above holds for representations of a lie algebra as well (except for the unitary representation), except that the lie algebra action doesn't have to be invertible. (it's mapped into gl(V))

That is, I think I understand what was said about groups. I will think about how this all carries over to Lie algebras... right now I don't see any questions about that part either.

Good point about the solutions to a set of equations being a representation of their symmetry. So the crafty physicist tries to understand what all the possible representations of a symmetry group can be as he fervently hopes to avoid ever having to solve systems of partial differential equations.

One time I did a google search with "group representation" and found a John Baez piece (apparently co-written with a character named Oz) which motivated the subject somewhat along these same lines---I don't remember the details but it was entertaining.

Anyway I'm eager to see what comes next.
 
  • #167
Baez is a character. :smile: Have you read his GR tutorial?
 
  • #168
Originally posted by Hurkyl
Baez is a character. :smile: Have you read his GR tutorial?
He has a tutorial on GR which I have read, called "The Meaning of Einstein's Equation", or something like that. he rewrites the equation in an intuitive form involving the volume of a blob of test particles in free fall. I liked the tutorial and have recommended it. But you may be referring to something else which I haven't seen. If so let me know about it---always happy to check out a Baez page.
 
  • #170
Originally posted by Hurkyl
http://math.ucr.edu/home/baez/gr/gr.html

"Oz and the Wizard"

thanks, I will take a look
cant say much off topic here because of not wanting
the thread to wander but will start a new thread soon
probably, to let you know what I'm reading---
has to do with representations of *-algebras
for example:
http://arxiv.org/gr-qc/0302059

another paper has a theorem to the effect that
"the Ashtekar-Isham-Lewandowski representation of
the [a certain LQG analog of the Weyl algebra] is irreducible"
http://arxiv.org/gr-qc/0303074
 
  • #171
So we've seen some examples of the representations of a group, let's look at some examples of the representations of the lie algebra.

A lie algebra representation is a morphism from a lie algebra g into the algebra of linear transformations on a vector space V. Being a morphism means that it must preserve lie bracket. Specifically,

φ([A, B]) = φ(A)φ(B) - φ(B)φ(A)

The product on the algebra of linear transformations is, of course, composition.
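A standard example to test this definition against: take φ = ad, i.e. φ(A) is the linear map C -> [A, C]. That ad preserves bracket in exactly the above sense is the Jacobi identity in disguise, and it is easy to spot-check numerically (my own sketch, assuming NumPy, with random matrices standing in for a matrix lie algebra):

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

def ad(A):
    """ad(A) as a linear transformation on matrices: C -> [A, C]."""
    return lambda C: bracket(A, C)

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# phi([A, B]) = phi(A)phi(B) - phi(B)phi(A), applied to a test matrix C
lhs = ad(bracket(A, B))(C)
rhs = ad(A)(ad(B)(C)) - ad(B)(ad(A)(C))
assert np.allclose(lhs, rhs)        # the Jacobi identity, rearranged
```

Here the "vector space V" is the space of 3×3 matrices and the product of the two linear maps is composition, as in the definition.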


Just like matrix lie groups, both the trivial space {0} and the matrix lie algebra itself are representations of a matrix lie algebra.

What about generic lie algebras? Well, just like we did with the lie group, we can make the lie algebra act on the tangent vector bundle of the lie group! Because lie groups are parallelizable, we can write the tangent bundle as G×g, and then for any lie algebra element g and tangent vector field v we have

(gv)(x) = ([nab]gv)(x)

Note this corresponds to the idea of a lie algebra element as an infinitesimal translation; suppose g is the tangent vector to the curve G(s) in G. Then:

gv(x) = ([nab]gv)(x)
= lim h→0 (v(x + hg) - v(x))/h
= lim h→0 (v(x + hg + O(h^2)) - v(x))/h
= lim h→0 (v(G(h)x) - v(x))/h
= lim h→0 ((G(h)v)(x) - v(x))/h

so gv is related to Gv in the way we expect; gv is the derivative of Gv wrt G!


In general, for any representation of G acting on a vector space, we can induce a representation of g in the same way:

If g is the tangent vector to G(h):
gv := (d/dh) (G(h)v) @ h = 0


From this, we can actually write g as a directional derivative field! We have:

gv(x) = ((d/dh) (G(h)v) @ h = 0)(x)
= (d/dh) v(G(h)(x)) @ h = 0 = dv (d/dh)(G(h)(x)) @ h = 0

So at each point x, gv(x) is simply the derivative of v in the direction tangent to G(h)(x) at h = 0.


P.S. how do I make [nab] as a character instead of a smiley?
 
  • #172
I understand the notation and can follow this pretty well. It is amazing how much can be done with the available symbols. I understand your use of the @ sign. Am glad to see the nabla (did not know we had [nab]).

I have changed a v to g in a couple of the following equations. Marked them with (*). Is this right?

Originally posted by Hurkyl
...then for any lie algebra element g and tangent vector field v we have

(gv)(x) = ([nab]gv)(x)

Note this corresponds to the idea of a lie algebra element as an infinitessimal translation; suppose g is the tangent vector to the curve G(s) in G. Then:

gv(x) = ([nab]gv)(x)
= lim h→0 (v(x + hg) - v(x))/h  (*)
= lim h→0 (v(x + hg + O(h^2)) - v(x))/h  (*)
= lim h→0 (v(G(h)x) - v(x))/h
= lim h→0 ((G(h)v)(x) - v(x))/h

so gv is related to Gv in the way we expect; gv is the derivative of Gv wrt G!
 
  • #173
Lol, yes. Can you tell that I had originally used 'v' for the tangent vector in my scratchwork? :smile:
 
  • #174
Originally posted by Hurkyl
Lol, yes. Can you tell that I had originally used 'v' for the tangent vector in my scratchwork? :smile:

In fact i thought it was something like that. BTW should say
that your running a basic group rep sticky in the background
has been personally beneficial for me in several ways, should say thanks sometimes

the main way that comes to mind is that it raises my consciousness of the essence of any quantum theory.
In any quantum theory, it seems to me, the ALGEBRA of
observables, and their more general operator friends, acts
on the HILBERTSPACE of quantum states.

nature seems to smile and beckon when people set things up this way. people get lucky and discover things and publish lots of papers when they set things up this way.

it is down at the level of superstition that we believe this is the right way to do something which maybe we still do not completely understand but nevertheless think we ought to do

so quantum theory of any sort is a theory of operators acting on a vectorspace, usually a C* algebra of operators acting on a hilbert space---that is, a representation theory
 
  • #175
Bah, this is going to be a little terser than I wanted to make it; I need to stop debating in that Zeno's paradox thread, I spend too much time on it.


While representations of groups have to be on vector spaces, groups can act on all sorts of things. For example, SO(3) acts faithfully on any origin-centered sphere in R^3. More generally, we can take any representation of G and consider each orbit of the vector space as a set upon which G acts. (if an orbit spans a nontrivial subspace, then the action of G on the whole vector space is reducible)

We can build representations out of these sets by considering fields on them. So, for example, we get a representation of SO(3) on the space of scalar fields on the unit sphere.



We can make representations out of representations. The direct product of two lie groups is a lie group. The group operation is

(a, b) (c, d) = (ac, bd)

and the lie algebra of the product is the product of the lie algebras. If we have a representation of G and a representation of H, then the tensor product of the representations is a representation of the direct product of the groups.


Alternatively, we can take two representations of the same group G, and then the tensor product of the representations is another representation of G. The action on pure tensors is given by

G(a ⊗ b) = (Ga ⊗ Gb)

An interesting example of this is vector fields on R^3, with group SO(3). We can pretend (I think) that vector fields are surfaces in the tensor product of R^3 with its tangent space (which is just itself), and then elements of SO(3) act by simultaneously rotating the space and rotating the vectors.
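In coordinates the simultaneous action on pure tensors is just the Kronecker product of matrices, so the tensor-product representation can be checked directly (a small sketch of my own, assuming NumPy, with a rotation in SO(2) for concreteness):

```python
import numpy as np

theta = 0.4                                  # a rotation in SO(2)
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 2.0])
v = np.array([-0.5, 3.0])

# the action of G on the tensor product space is the Kronecker product kron(G, G),
# and on a pure tensor u (x) v it acts componentwise: G u (x) G v
lhs = np.kron(G, G) @ np.kron(u, v)
rhs = np.kron(G @ u, G @ v)
assert np.allclose(lhs, rhs)
```

This is the mixed-product property of Kronecker products, which is exactly the statement that G acting on both factors at once is a linear action on the tensor product.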
 
