Explore Geometry of Symmetric Spaces & Lie Groups on PF

  • #51
garrett said:
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements \vec{\partial_i} and \underrightarrow{dx^i}, is completely independent of the algebra of Clifford elements, spanned by \gamma_\alpha -- or, if you like, independent of all Lie algebra elements. By the algebras being independent, I mean that all elements commute.

Forget the Lie algebra for the moment. I'm talking about the basis elements \partial_i \equiv \frac{d}{dx^i} and their dual one-forms. In your notation you put an arrow over the top indicating that we are dealing with a complete vector, i.e. e_i \equiv \vec{\partial_i}. You then said that they obey an anti-commutation rule: e_i e_j = -e_j e_i.

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: e_i e_j = e_i \cdot e_j + e_i \wedge e_j, and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.
 
  • #52
garrett said:
The expression you calculated,
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it by its Hermitian conjugate and you'll get precisely 1.

Sure, I get that, but the series expansions we use are only valid for small x; for instance, substitute 4\pi into the series expansion and it doesn't work anymore...

Mehdi_ said:
It is related to the condition, {({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1

Whilst we're here, where does the condition come from? I thought that g g^- might impose some condition on the x's, but it doesn't. Where does it come from? :)
 
  • #53
garrett said:
Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. (By the way, because of the way I defined <> with a half in it, it's < T_i T_j T_k > = \epsilon_{ijk}.) So, the correct expression for the inverse Killing vector field should be
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}

What happened to the x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) term?

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.
 
  • #54
garrett said:
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha

What kind of object is e_\alpha, and what kind of object is \gamma_\alpha ?
Are you using upper and lower arrows purely to signify differential geometry objects? Why not arrows on the gamma too; I take it that this is a vector (as opposed to a dual vector)?
 
  • #55
The e_\alpha are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold. I think the \gamma_\alpha are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).
 
  • #56
Taoy said:
I'm talking about the basis elements \partial_i \equiv \frac{d}{dx^i} and their dual one-forms. In your notation you put an arrow over the top indicating that we are dealing with a complete vector, i.e. e_i \equiv \vec{\partial_i}. You then said that they obey an anti-commutation rule: e_i e_j = -e_j e_i.

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: e_i e_j = e_i \cdot e_j + e_i \wedge e_j, and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.

My justification for creating this algebra in which tangent vectors anti-commute is this: when you contract two tangent vectors with a 2-form, the sign changes depending on the order you do the contraction:
\vec{a} \vec{b} \left( \underrightarrow{c} \underrightarrow{f} \right) = - \vec{b} \vec{a} \left( \underrightarrow{c} \underrightarrow{f} \right)
This fact is standard differential geometry for the inner product of two tangent vectors with a 2-form. I merely elevate this fact to create an algebra out of it, and it motivates my notation. Since the two vectors are contracting with a 2-form, which is anti-symmetric, this "product" of two vectors is also necessarily anti-symmetric. If you like, you need not even consider it a product -- just two tangent vectors being fed to a 2-form in succession. :) That is the conventional interpretation.
 
  • #57
Taoy said:
Sure I get that, but the series expansions we use are only valid for small x, for instance substitute 4\pi into the series expansion and it doesn't work anymore...

I think the series expansion for the exponential is an exact equality as long as we keep all terms in the infinite series, which we do. I think it's correct for 4pi, though these x's should be periodic variables, inside the range 0 to 2pi.
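A quick numerical spot-check of this, as a sketch: take the generators as T_i = i \sigma_i (i times the Pauli matrices, the representation mentioned elsewhere in this thread), and compare the closed form against the full matrix exponential at a point with |x| well beyond 2\pi.

```python
import numpy as np
from scipy.linalg import expm

# Assumed realization of the generators: T_i = i * sigma_i, so that
# (x^i T_i)^2 = -r^2 and hence g = cos(r) + x^i T_i sin(r)/r exactly.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * s for s in sigma]

x = np.array([5.0, -8.0, 11.0])            # |x| is much larger than 2*pi
r = np.linalg.norm(x)
xT = sum(xi * Ti for xi, Ti in zip(x, T))

g_series = expm(xT)                         # the full (infinite) series
g_closed = np.cos(r) * np.eye(2) + xT * np.sin(r) / r

print(np.allclose(g_series, g_closed))                       # True
print(np.allclose(g_closed @ g_closed.conj().T, np.eye(2)))  # g g^+ = 1
```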

Whilst we're here, where does the condition come from? I thought that g g^- might impose some condition on the x's, but it doesn't. Where does it come from? :)

There is no restriction like that on the x coordinates -- best to forget he said that. (I believe he was making an analogy at the time.)
 
  • #58
Taoy said:
What kind of object is e_\alpha, and what kind of object is \gamma_\alpha ?
Are you using upper and lower arrows purely to signify differential geometry objects? Why not arrows on the gamma too; I take it that this is a vector (as opposed to a dual vector)?

\underrightarrow{e^\alpha} = \underrightarrow{dx^i} \left( e_i \right)^\alpha
is one of the orthonormal 1-form basis elements (indexed by \alpha), dual to the corresponding member of the basis of orthonormal tangent vectors.
\left( e_i \right)^\alpha
are the frame coefficients (aka vielbein coefficients).

\gamma_\alpha
is one of the Clifford algebra basis vectors.

Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as \gamma_\alpha . The number of arrows in an expression is "conserved" -- with upper arrows cancelling lower arrows, via vector-form contraction. If some object has a coordinate basis 1-form as part of it, which has an under arrow, then that object also gets an under arrow.
 
  • #59
Hi SA!

Have you looked around the new wiki yet? It was somewhat inspired by some comments we exchanged in another forum. :)

selfAdjoint said:
The e_\alpha are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold.

Yes, but I'm careful to distinguish the vierbein and inverse vierbein, using arrow decorations. The orthonormal basis vectors are
\vec{e_\alpha} = \left(e^-_\alpha\right)^i \vec{\partial_i}
while the frame, or vierbein, 1-forms are
\underrightarrow{e^\alpha} = \left(e_i\right)^\alpha \underrightarrow{dx^i}
They satisfy
\vec{e_\alpha} \underrightarrow{e^\beta} = \left(e^-_\alpha\right)^i \vec{\partial_i} \left(e_j\right)^\beta \underrightarrow{dx^j} = \left(e^-_\alpha\right)^i \left(e_j\right)^\beta \delta_i^j = \left(e^-_\alpha \right)^i \left(e_i\right)^\beta = \delta_\alpha^\beta
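A minimal numerical illustration of that duality, with made-up (hypothetical) frame coefficients -- the inverse vielbein is literally the matrix inverse of the vielbein:

```python
import numpy as np

e = np.random.rand(4, 4) + 2 * np.eye(4)   # hypothetical (e_i)^alpha, invertible
e_inv = np.linalg.inv(e)                   # (e^-_alpha)^i

# (e^-_alpha)^i (e_i)^beta = delta_alpha^beta:
print(np.allclose(e_inv @ e, np.eye(4)))   # True
```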

I think the \gamma_\alpha are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).

It is a great choice of notation because they're Clifford vectors, which ARE represented by Dirac matrices. :) The same way SU(2) generators are represented by i times the Pauli matrices. You will do perfectly well thinking of \gamma_\alpha as Dirac matrices if you like. (But one doesn't need to -- the same way one can talk about the su(2) Lie algebra without explicitly using Pauli matrices.)

Good to see you over here.
 
  • #60
Taoy said:
What happened to the x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) term?

It's zero.
x^j x^k \epsilon_{jkB} = 0

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.

I'll go check.
 
  • #61
Taoy said:
p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.

Close, but that's not what I just got. Check all your signs.
( Or I'll have to wake up tomorrow morning and find out I need to check mine. ;)
 
  • #62
Originally Posted by Taoy
Whilst we're here, where does the condition {({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1 come from?



In this forum we have successfully shown the double covering of SO(3) by SU(2).
SO(3) is the group of rotations in 3 dimensions.
But a rotation can be represented either as an orthogonal matrix with determinant 1, by an axis and rotation angle,
via the unit quaternions and the map from the 3-sphere to SO(3), or by Euler angles.

Let's choose quaternions...

Every quaternion z = a + bi + cj + dk can be viewed as a sum a + u of a real number a
(called the “real part” of the quaternion) and a vector u = (b, c, d) = bi + cj + dk in R^{3} (called the “imaginary part”).

Consider now the quaternions z with modulus 1. They form a multiplicative group, acting on R^{3}.

Such a quaternion can be written z = \cos(\frac{1}{2} \alpha)+\sin(\frac{1}{2} \alpha)\xi
which looks like Joe's equation U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)

with \xi being a normalized vector... Are Lie group generators normalized?!

Like any linear transformation, a rotation can always be represented by a matrix. Let R be a given rotation.
Since the group SO(3) is a subgroup of O(3), it is also orthogonal.
This orthogonality condition can be expressed in the form

R^T R = I

where R^T denotes the transpose of R.


The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group SO(3).
Because for an orthogonal matrix R: det(R^T)=det R which implies (det R)^2=1 so that det R = +1 or -1.


But the group SU(2) is isomorphic to the group of quaternions of absolute value 1, and is thus diffeomorphic to the 3-sphere.
We have here a map from SU(2) onto the 3-sphere (whose coordinates we can then parametrize by means of the angles
\theta and \phi) (spherical coordinates).

Actually the unit quaternions and the unit 3-sphere S^3 describe almost the same thing (isomorphism).


Because the set of unit quaternions is closed under multiplication, S^3 takes on the structure of a group.
Moreover, since quaternionic multiplication is smooth, S^3 can be regarded as a real Lie group.
It is a nonabelian, compact Lie group of dimension 3.

A pair of unit quaternions z_l and z_r can represent any rotation in 4D space.
Given a four-dimensional vector v, and pretending that it is a quaternion, we can rotate the vector v like this: z_l v z_r

By using a matrix representation of the quaternions, H, one obtains a matrix representation of S^3.
One convenient choice is:


x^1 + x^2i + x^3j + x^4k = \left[\begin{array}{cc}x^1+i x^2 & x^3 + i x^4\\-x^3 + i x^4 & x^1-i x^2\end{array}\right]

which can be related, in some way, to Garrett's matrix...

T = x^i T_i = \left[\begin{array}{cc}i x^3 & i x^1 + x^2\\i x^1 - x^2 & -i x^3\end{array}\right]
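A small numerical sketch of that relation, assuming T_i = i \sigma_i (which reproduces the matrix above): the 2x2 matrix of a quaternion has determinant equal to its squared norm, so the unit quaternions land exactly in SU(2), and x^i T_i is the image of a pure-imaginary quaternion with the units relabeled.

```python
import numpy as np

def quat_to_matrix(a, b, c, d):
    """The 2x2 complex matrix above for the quaternion a + b i + c j + d k."""
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

q = np.array([0.3, -0.5, 0.4, 0.7])
M = quat_to_matrix(*q)
print(np.isclose(np.linalg.det(M).real, np.sum(q**2)))    # det = |z|^2

u = q / np.linalg.norm(q)                                 # a unit quaternion
U = quat_to_matrix(*u)
print(np.allclose(U.conj().T @ U, np.eye(2)),             # unitary
      np.isclose(np.linalg.det(U).real, 1.0))             # det 1, so U is in SU(2)

# x^i T_i (with T_i = i sigma_i) is the matrix of the pure quaternion
# x^3 i + x^2 j + x^1 k -- the same algebra with the imaginary units relabeled:
x1, x2, x3 = 1.0, 2.0, 3.0
T = np.array([[1j * x3, 1j * x1 + x2],
              [1j * x1 - x2, -1j * x3]])
print(np.allclose(T, quat_to_matrix(0.0, x3, x2, x1)))    # True
```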



Garrett, I have 2 questions for you:
What is the website of your latest publications (quaternions and others)?
and
Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign),
we have a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is { + I, − I}.
What does "whose kernel is { + I, − I}" mean?
 
  • #63
selfAdjoint said:
The e_\alpha are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold. I think the \gamma_\alpha are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).

No, the \gamma_i's are actually Clifford vectors. Interestingly, in spaces with signature (3,1) we'll see that these Clifford gamma elements have an algebra identical to that of the Dirac matrices under the geometric product, which is probably why Garrett calls them gammas in the first place. (Hestenes uses this notation too.)
 
  • #64
garrett said:
It's zero.
x^j x^k \epsilon_{jkB} = 0

I really must stop doing this late at night! (: Of course it's symmetric in the x's and antisymmetric in the \epsilon! Doh!
 
  • #65
garrett said:
\gamma_\alpha
is one of the Clifford algebra basis vectors.

Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as \gamma_\alpha .

I thought that you wanted to keep elements of the vector space and of the dual space separate and distinct? The Clifford algebra elements can be geometrically interpreted as a vector basis, and an arbitrary vector expanded in them,

v = v^i \gamma_i = v_i \gamma^i
where
\gamma^i . \gamma_j = \delta^{i}_{j}

Are you less worried about preserving the distinction between \vec \gamma_i and \underrightarrow{\gamma^i} because of the presence of an implied metric?
 
  • #66
Taoy said:
I thought that you wanted to keep elements of the vector space and of the dual space separate and distinct? The Clifford algebra elements can be geometrically interpreted as a vector basis, and an arbitrary vector expanded in them,

v = v^i \gamma_i = v_i \gamma^i
where
\gamma^i . \gamma_j = \delta^{i}_{j}

Are you less worried about preserving the distinction between \vec \gamma_i and \underrightarrow{\gamma^i} because of the presence of an implied metric?

Yes, that's it exactly.

For any smooth manifold, you always have a tangent vector space at each point spanned by a set of coordinate basis vectors, \vec{\partial_i}. It's also always natural to build the dual space to this one at each point, spanned by the coordinate basis 1-forms, \underrightarrow{dx^i}. By definition, these satisfy
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
which is an inner product between the two spaces. But there's no metric necessarily around. Mathematicians are smarter and lazier than I am, so they don't bother to write these little arrows like I do -- which I mostly write to remind me what the vector or form grade of a tangent space or cotangent space object is. They always just keep track of this in their heads.

OK, that's it for the two natural spaces (tangent vectors and forms) over any manifold. Now we introduce a third space -- a Clifford algebra. By definition, our Clifford algebra has a nice diagonal metric:
\gamma_\alpha \cdot \gamma_\beta = \eta_{\alpha \beta}
This is the Minkowski metric when we work with spacetime. It doesn't really work to put any grade indicator over Clifford elements since it is often natural to add objects of different grade. Also, even though it sort of looks like there are two sets of Clifford basis vectors, \gamma_\alpha and \gamma^\alpha, there is really only one set since
\gamma_\alpha = \pm \gamma^\alpha

I use Latin indices (i,j,k,...) for coordinates and the tangent and form basis, and Greek indices (\alpha,\beta,...) for Clifford algebra indices, to further emphasize the distinction between the two spaces. This is identical to how we have separate coordinate indices and Lie algebra indices (A,B,...) floating around when working with groups.

Clifford algebra, you see, is the Lie algebra of physical space. :)
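To see this concretely, here is a sketch in the Dirac representation (one possible choice; any faithful representation works), checking that the symmetric product \gamma_\alpha \cdot \gamma_\beta, read here as half the anticommutator, reproduces \eta_{\alpha \beta}:

```python
import numpy as np

I2, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = offdiag(sigma_i, -sigma_i)
gamma = [np.block([[I2, Z], [Z, -I2]])]
gamma += [np.block([[Z, s], [-s, Z]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for a in range(4):
    for b in range(4):
        dot = 0.5 * (gamma[a] @ gamma[b] + gamma[b] @ gamma[a])  # symmetric product
        assert np.allclose(dot, eta[a, b] * np.eye(4))
print("gamma_a . gamma_b = eta_ab, verified")
```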
 
  • #67
I should also explicitly say that many geometric objects, like
\underrightarrow{e} = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
a Clifford valued 1-form, are valued in both the cotangent vector space AND the Clifford algebra space at a manifold point. In this way, the frame, \underrightarrow{e}, can provide a map from tangent vectors to Clifford algebra vectors.

Algebra valued forms, such as this one, were a favorite device of Cartan. And, as we've seen, they're useful in group theory as well as in GR.
 
  • #68
Originally Posted by Mehdi
Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign),
we have a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is { + I, − I}.
What does "whose kernel is { + I, − I}" mean

In this case, the kernel being { + I, − I} means that we have a double cover.
The group SO(3) has a double cover SU(2).

Could we then have a kind of quotient group like this?

\frac{ SU(2) }{ \{ I,-I \} } \simeq SO(3)

does the kernel { + I, − I} then belong to SU(2)?
 
  • #69
The kernel of this map from SU(2) to SO(3) is equal to the set of elements of SU(2) that are mapped into the identity element of SO(3). So, yes, these are the elements 1 and -1 of SU(2).
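Here's a numerical sketch of that kernel, assuming the standard form of the covering map, R_{ij} = \frac{1}{2} tr(\sigma_i U \sigma_j U^\dagger) (a formula not written in this thread): U and -U cover the same rotation, so exactly the two elements \pm 1 of SU(2) map to the identity of SO(3).

```python
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def cover(U):
    """The covering map SU(2) -> SO(3): R_ij = (1/2) tr(sigma_i U sigma_j U+)."""
    return np.real(np.array([[0.5 * np.trace(si @ U @ sj @ U.conj().T)
                              for sj in sigma] for si in sigma]))

U = expm(1j * (0.3 * sigma[0] - 0.7 * sigma[2]))   # a generic SU(2) element
R = cover(U)
print(np.allclose(R @ R.T, np.eye(3)),             # orthogonal
      np.isclose(np.linalg.det(R), 1.0))           # det 1, so R is in SO(3)
print(np.allclose(cover(-U), R))                   # -U covers the same rotation
```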

Hey Mehdi, want to take a shot at calculating the Killing vector fields corresponding to the right action of the su(2) Lie generators? Joe almost got them right, but we haven't heard from him in a while...
 
  • #70
OK for the Killing vector field ... I can try...

What about:
\frac{ SU(2) }{ \{ I,-I \} } \simeq SO(3)
is it true?
 
  • #71
What is the website of your quaternion publications?
 
  • #73
The Killing vector fields corresponding to the right action of the su(2) Lie generators will take me a while... I have no idea how to do it... but you say that Joe almost got them?
Post Joe's answer here... I will try to study it, and after doing some research on the internet maybe I will be able to understand how Killing vector fields could be defined from a group (SU(2))... maybe we have to define the algebra... and the adjoint representation... Lie bracket... all my posts now will be related to this question... I hope that it will not take me too much time...;)
 
  • #74
... ok... what follows could be a good start...

the matrix expression for an SU(2) element as a function of the SU(2) manifold coordinates is
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
g^- = \cos(r) - x^i T_i \frac{\sin(r)}{r}


\xi_A{}^i \partial_i g = g T_A
g(T_A T_B - T_B T_A) = g ( \xi_A{}^i \partial_i \xi_B{}^j \partial_j - \xi_B{}^i \partial_i \xi_A{}^j \partial_j )
\xi_A{}^i ( \partial_i g^- ) g = T_A
( \partial_i g^- ) g = \xi^-_i{}^A T_A

to be continued tomorrow ... :)
 
  • #75
Garrett calculated the left invariant inverse Killing vector field:

garrett said:
So, the correct expression for the inverse Killing vector field should be
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}

And the right invariant inverse Killing field is:

\xi'^-_i{}^B = - < \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) T_B >
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) - \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}

which we invert to find the right invariant Killing vector field,

\xi_B{}^i = \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i ( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} ) - \epsilon_{Bik} x^k
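These closed forms can be spot-checked numerically, as a sketch: take T_i = i \sigma_i, read Garrett's bracket \langle M \rangle as \frac{1}{2} tr(M) (an assumed normalization, chosen so that \langle T_i T_j T_k \rangle = \epsilon_{ijk}), differentiate g = e^{x^i T_i} by finite differences, and compare - \langle (\partial_i g) g^- T_B \rangle with the first closed form above.

```python
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * s for s in sigma]

def g(x):
    return expm(sum(xi * Ti for xi, Ti in zip(x, T)))

def xi_inv_numeric(x, h=1e-6):
    """-< (d_i g) g^- T_B > with <M> = (1/2) tr M, by central differences."""
    out = np.zeros((3, 3))
    gi = np.linalg.inv(g(x))
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        dg = (g(x + dx) - g(x - dx)) / (2 * h)
        for B in range(3):
            out[i, B] = -0.5 * np.real(np.trace(dg @ gi @ T[B]))
    return out

def xi_inv_closed(x):
    """The closed form: delta sc/r + x x (1/r^2 - sc/r^3) + eps x s^2/r^2."""
    r = np.linalg.norm(x); s, c = np.sin(r), np.cos(r)
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0
    out = np.zeros((3, 3))
    for i in range(3):
        for B in range(3):
            out[i, B] = ((i == B) * s * c / r
                         + x[i] * x[B] * (1 / r**2 - s * c / r**3)
                         + eps[i, :, B] @ x * s**2 / r**2)
    return out

x0 = np.array([0.3, -0.5, 0.4])
print(np.allclose(xi_inv_numeric(x0), xi_inv_closed(x0), atol=1e-6))  # True
```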
 
  • #76
Mehdi_ said:
The Killing vector fields corresponding to the right action of the su(2) Lie generators will take me a while... I have no idea how to do it... but you say that Joe almost got them?
Post Joe's answer here... I will try to study it, and after doing some research on the internet maybe I will be able to understand how Killing vector fields could be defined from a group (SU(2))... maybe we have to define the algebra... and the adjoint representation... Lie bracket... all my posts now will be related to this question... I hope that it will not take me too much time...;)

OK, this is great Mehdi. I've laid out everything you need to do starting with the beginning of this thread. With links to the wiki to help out. It's going to take a while to understand everything that's happened, and be able to work it out yourself -- but I think that's the best way to learn stuff. That's what this thread has been for. :)

Also, as you try to solve this, please do not post about it here on this thread, as I think everything is laid out already. Feel free to email me privately if you get stuck. And, of course, you can make a celebratory post when you get everything. ;) Once you can do this, which may take a while, you should be able to understand this next stuff we'll do too.

To check your answer for this vector field, you can compare it with the one Joe just posted -- which is the correct one. :)
 
  • #77
Alright, so let's put things together so far. We have two sets of three vector fields over our SU(2) group manifold. The first set corresponds to the Lie algebra generators acting on group elements from the left,
\xi_A{}^i \partial_i g = T_A g
and, just to confuse things, that's called a "right invariant" vector field since the g acts on T_A from the right. The second set corresponds to the generators acting from the right (and is called a "left invariant" vector field),
\xi'_A{}^i \partial_i g = g T_A
Using our explicit expression for g in terms of our group manifold coordinates, we were able to explicitly calculate expressions for these two sets of vector fields:
\xi_B{}^i = \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i ( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} ) + \epsilon_{Bik} x^k
\xi'_B{}^i = \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i ( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} ) - \epsilon_{Bik} x^k
Each of these six vector fields represents a continuous symmetry of the group manifold -- a way to flow the points of the manifold such that the shape stays the same. (But we don't really know the shape yet, since we haven't said what the metric is. We'll do this next.) We also know a neat trick for calculating the Lie derivative of one vector field with respect to another (the "Lie bracket"):
( L_{\vec{\xi_A}} \vec{\xi_B} ) \underrightarrow{d} g = ( \xi_A{}^i \partial_i \xi_B{}^j \partial_j - \xi_B{}^i \partial_i \xi_A{}^j \partial_j ) g
= (T_A T_B - T_B T_A) g = C_{ABC} T_C g = C_{ABC} \vec{\xi_C} \underrightarrow{d} g
that implies the Lie bracket of two "right invariant" vector fields gives exactly the Lie algebra structure constants for our group:
L_{\vec{\xi_A}} \vec{\xi_B} = C_{ABC} \vec{\xi_C} = -2 \epsilon_{ABC} \vec{\xi_C}
This is exactly as it should be. The composition of two flows induced by two symmetries gives us a flow equal to another symmetry, related by the structure constants between the symmetry generators. We could have calculated the Lie brackets between the vector fields explicitly and gotten the same answer, but it would have been a lot more work. We've basically exploited group theory to save us a lot of calculational work -- something theorists do a lot, to great satisfaction. OK, so what about the Lie brackets between the "left invariant" vector fields? The same trick gives
( L_{\vec{\xi'_A}} \vec{\xi'_B} ) \underrightarrow{d} g = ( \xi'_A{}^i \partial_i \xi'_B{}^j \partial_j - \xi'_B{}^i \partial_i \xi'_A{}^j \partial_j ) g
= g (T_B T_A - T_A T_B) = - C_{ABC} g T_C = - C_{ABC} \vec{\xi'_C} \underrightarrow{d} g
that implies the Lie bracket of two "left invariant" vector fields gives MINUS the Lie algebra structure constants for our group:
L_{\vec{\xi'_A}} \vec{\xi'_B} = -C_{ABC} \vec{\xi'_C} = 2 \epsilon_{ABC} \vec{\xi'_C}
So, these "left invariant" vector fields don't have the same structure as our Lie algebra, but the structure related by this minus sign.

I'll stop here and give the remaining symmetry relationship as a quick "homework" problem:
What's the Lie derivative of one of the "left invariant" vector fields with respect to one of the "right invariant" vector fields?
L_{\vec{\xi_A}} \vec{\xi'_B} = ?
 
  • #78
Garrett, my guess is:

L_{\vec{\xi_A}} \vec{\xi'_B} = C_{ABC} \vec{\xi'_C} = -2 \epsilon_{ABC} \vec{\xi'_C}

and

L_{\vec{\xi'_A}} \vec{\xi_B} = -C_{ABC} \vec{\xi_C} = 2 \epsilon_{ABC} \vec{\xi_C}

Now let's take even more risks (of making false statements) by postulating that...

L_{\vec{\xi_A}} \vec{\xi'_B} = [\vec{\xi_A}, \vec{\xi'_B}]

Could we say then that the adjoint representation and the Lie bracket are actually the same thing (homomorphic)?!

L_{\vec{\xi_A}} \vec{\xi'_B} = [\vec{\xi_A}, \vec{\xi'_B}] = ad(\vec{\xi_A})(\vec{\xi'_B}) = ad_{\vec{\xi_A}} \vec{\xi'_B}

ad_{\vec{\xi_A}} could be interpreted as a linear transformation of the vector field \vec{\xi_A} that preserves a Lie bracket, [\vec{\xi_A}, \vec{\xi'_B}] in this case.

Question 1: Is it true that the adjoint representation of su(2) is so(3)... and that the adjoint representation of su(2) gives the structure constants, which are also the matrix elements of so(3)? How so?!

Question 2: What is the significance of 2 and -2 in -2 \epsilon_{ABC} \vec{\xi'_C} and 2 \epsilon_{ABC} \vec{\xi_C} ?
They are probably structure constant coefficients, but are they matrix elements... of which matrix?
 
  • #79
Hey Garrett, I need to clarify your notation a little more :),

garrett said:
\vec{\xi_A} \underrightarrow{d} g = \xi_A{}^i \partial_i g

What does \underrightarrow{d} mean? There's an ambiguity here; this is not the one-form d_i \underrightarrow{dx^i} with components d_i, it's the exterior derivative operator, \underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}. Ideally one would use a bold d to distinguish between the two.
 
  • #80
Yes, it's the exterior derivative operator,
\underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}
I wrote it simply as d in order to be familiar, but you're right that it's a little confusing that way. In my last paper and in the wiki I write it instead as
\underrightarrow{\partial} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i} = \underrightarrow{dx^i} \partial_i
which is clearer, but non-standard. But, let's go ahead and write it that way from now on. :)

Hey, would you like to offer up the correct answer to the last question about the commutation relations between the two sets of Killing vector fields?
 
  • #81
garrett said:
Hey, would you like to offer up the correct answer to the last question about the commutation relations between the two sets of Killing vector fields?

Sure :). Working on it now.

In the meantime, another notation question that comes up when one expands the Lie bracket of two vector fields:

({\cal L}_{\vec{X}} \vec{Y}) f = \vec{X} \vec{Y} f - \vec{Y} \vec{X} f

Taking the first term,
\vec{X} \vec{Y} f = \vec{X} (Y^i \vec{\partial_i} f)

What's \vec{\partial_i} f and how does it relate to \partial_i f, that is, what is the result of a vector acting on a scalar?

(In the Lie bracket expansion in our exercise we act with the Lie derivative on \underrightarrow{\partial}g, but this same problem comes up when we act with the second vector field.)
 
  • #82
Ah, yes, mathematicians often write a vector operating on a function as \vec{v}f = v^i \partial_i f. I do not write it that way. Instead, I would write the same thing as
\vec{v} \underrightarrow{\partial} f = v^i \partial_i f
I like to have conservation of arrows in my notation. :)
 
  • #83
garrett said:
Ah, yes, mathematicians often write a vector operating on a function as \vec{v}f = v^i \partial_i f. I do not write it that way. Instead, I would write the same thing as
\vec{v} \underrightarrow{\partial} f = v^i \partial_i f
I like to have conservation of arrows in my notation. :)

Oooh, so vectors act on scalars the same as vectors act on one-forms?

So how would you conserve the arrows in:

({\cal L}_{\vec{X}} \vec{Y}) f = \vec{X} \vec{Y} f - \vec{Y} \vec{X} f
 
  • #84
Taoy said:
Oooh, so vectors act on scalars the same as vectors act on one-forms?

Yes, after all, a scalar is just a 0-form.

So how would you conserve the arrows in:
({\cal L}_{\vec{X}} \vec{Y}) f = \vec{X} \vec{Y} f - \vec{Y} \vec{X} f

That would be
({\cal L}_{\vec{X}} \vec{Y}) \underrightarrow{\partial} f = (( \vec{X} \underrightarrow{\partial} )( \vec{Y} \underrightarrow{\partial} ) - ( \vec{Y} \underrightarrow{\partial} )( \vec{X} \underrightarrow{\partial} )) f
And, you know, since the Lie derivative of one vector field with respect to another is just another vector field, this just comes from
{\cal L}_{\vec{X}} \vec{Y} = ( \vec{X} \underrightarrow{\partial} ) \vec{Y} - ( \vec{Y} \underrightarrow{\partial} ) \vec{X}
 
  • #85
garrett said:
Taoy said:
Oooh, so vectors act on scalars the same as vectors act on one-forms?
Yes, after all, a scalar is just a 0-form.

I know that :wink:. I meant to say that vectors act on 1-forms and produce a 0-form (by contraction), and vice versa. However, here we have a vector acting on a 0-form also producing a 0-form; that seems strange to me; after all, a 0-form acting on (multiplying) a vector doesn't produce a 0-form. What am I missing? (I imagine the answer is to do with the abstract nature of tangent vectors, which act on a function - how fundamental is this? i.e. this doesn't happen in, say, geometric algebra.)
 
  • #86
garrett said:
I'll stop here and give the remaining symmetry relationship as a quick "homework" problem:
What's the Lie derivative of one of the "left invariant" vector fields with respect to one of the "right invariant" vector fields?
L_{\vec{\xi_A}} \vec{\xi'_B} = ?

Ok, I'm sure I've not got the arrows in the right places :), but it looks like they commute, and the Lie derivative of one set of invariant fields with respect to the other is zero.

Here's my reasoning:

\begin{align*}
(L_{\vec{\xi_A}} \vec{\xi'_B}) \; \underrightarrow{\partial} g
& = \vec{\xi_A} (\vec{\xi'_B} \; \underrightarrow{\partial} g) - \vec{\xi'_B} (\vec{\xi_B} \; \underrightarrow{\partial} g) \\
& = \vec{\xi_A} \; \underrightarrow{\partial} (g T_B) - \vec{\xi'_B} \; \underrightarrow{\partial} (T_A g) \\
& = T_A (g T_B) - (T_A g) T_B \\
& = 0
\end{align*}

with the last step following by associativity of the matrix product.

BTW, I'm still mega-worried about this over/under arrow convention, and the way that vectors act on 1-forms and 0-forms; ok, mainly the latter, not so much the former. Also, the argument so far seems to depend upon g being a 0-form; however, if instead of using a matrix representation of the T_A's we use a bivector representation, then g becomes a multi-grade (scalar + bivector) object, and then these vector fields don't act on them in the same way. I guess in that case we need to go back to the beginning and redefine the Killing vector fields in a different way.
 
  • #87
Don't freak out Joe, everything works just fine. These vectors do not act on scalars or Clifford elements like in some math notation you've seen -- they commute with them. The vectors only act on forms, and the exterior derivative is a special form that acts on other forms as a derivative, including 0-form scalars and Clifford coefficients. So
\vec{v} \underrightarrow{\partial} g = v^i \vec{\partial_i} \underrightarrow{dx^j} \partial_j g = v^i \partial_i g
is the equivalent way to get the derivative of a function or Clifford field along a vector.

Just keep in mind that vector and form basis elements ALWAYS commute with scalars and Clifford basis elements.

Zero is the right answer, for the reason you said. But can you go clean up the arrows above and put in partial derivatives where needed now? You only need to insert two \underrightarrow{\partial}'s, then all your equations are perfect.
 
  • #88
garrett said:
Don't freak out Joe, everything works just fine...

Awww, I like to get a good freak on from time to time o:) Thanks for the clarification.

Zero is the right answer, for the reason you said. But can you go clean up the arrows above and put in partial derivatives where needed now? You only need to insert two \underrightarrow{\partial}'s, then all your equations are perfect.

Ok, so it must be this (with a previous typo in this post fixed),

\begin{align*}
(L_{\vec{\xi_A}} \vec{\xi'_B}) \; \underrightarrow{\partial} g
& = \vec{\xi_A} \underrightarrow{\partial} (\vec{\xi'_B} \; \underrightarrow{\partial} g) - \vec{\xi'_B} \underrightarrow{\partial} (\vec{\xi_A} \; \underrightarrow{\partial} g) \\
& = \vec{\xi_A} \; \underrightarrow{\partial} (g T_B) - \vec{\xi'_B} \; \underrightarrow{\partial} (T_A g) \\
& = T_A (g T_B) - (T_A g) T_B \\
& = 0
\end{align*}
 
  • #89
Yep, that's it -- except for the extra B buzzing around, A?

I'm going to go have dinner, then come back and talk about what this zero answer means.
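Before moving on, all of these commutation claims can be spot-checked numerically from the explicit components in post #77 (a sketch using finite differences at a made-up sample point; which family carries which overall sign depends on operator-ordering conventions, so the code just prints the closure coefficient):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def xi(B, x, sign):
    """Components xi_B^i at x; sign = +1 for one family, -1 for the other."""
    r = np.linalg.norm(x)
    return (np.eye(3)[B] * r * np.cos(r) / np.sin(r)
            + x[B] * x * (1 / r**2 - np.cos(r) / (r * np.sin(r)))
            + sign * eps[B] @ x)

def bracket(A, B, x, sA, sB, h=1e-6):
    """[xi_A, xi_B]^j by central finite differences."""
    out = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        dB = (xi(B, x + dx, sB) - xi(B, x - dx, sB)) / (2 * h)
        dA = (xi(A, x + dx, sA) - xi(A, x - dx, sA)) / (2 * h)
        out += xi(A, x, sA)[i] * dB - xi(B, x, sB)[i] * dA
    return out

x0 = np.array([0.3, -0.5, 0.4])
# same-family brackets close on the third field with coefficient of magnitude 2:
print(bracket(0, 1, x0, +1, +1) / xi(2, x0, +1))   # three (nearly) equal entries
print(bracket(0, 1, x0, -1, -1) / xi(2, x0, -1))   # opposite sign for the other set
# the mixed bracket of the two families vanishes:
print(np.allclose(bracket(0, 1, x0, +1, -1), np.zeros(3), atol=1e-4))   # True
```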
 
  • #90
OK, so at this point we've figured out quite a bit about our group manifold. We started with the three Lie algebra generators, T_A, and even cleverly identified them as Clifford algebra bivectors. Exponentiating these, multiplied by coordinates, we got all the group elements, parameterized by coordinates. This is our manifold, with each manifold point associated with coordinates. We then were able to calculate two sets of three vector fields over the manifold, one set corresponding to the left action of the generators on the group elements, and the other set corresponding to the right action of the generators. The Lie derivative (commutation) relations between the first set of vector fields gave exactly the structure constants for the Lie algebra, while the Lie derivative relations between the second set gave negative the structure constants. Also, the Lie derivative of an element of the second set with respect to an element from the first set is zero. These two independent sets of vector fields are associated with the group's symmetries -- and we've been calling them Killing vector fields, even though we haven't said what the metric is. Time to change that!

Recall that a vector field is Killing (and a "symmetry") if and only if it is associated with a flow of the manifold points that leaves the geometry unchanged. By geometry here is meant the shape of the manifold, and this is traditionally encoded by a metric. Mathematically, a vector field is Killing iff the Lie derivative of the metric with respect to the vector field is zero. But recall a few posts back in this thread that I would like to use the frame, or set of orthonormal basis vectors, instead of the metric. The relation between frame and metric, in components, was derived as
g_{ij} = (e_i)^A (e_j)^B \delta_{AB}
and between orthonormal basis vectors and the inverse metric
g^{ij} = (e^-_A)^i (e^-_B)^j \delta^{AB}
An important thing to notice about this relationship is that a set of orthonormal basis vectors gives the same metric as a set that is arbitrarily rotated. So, in terms of the frame, a vector field is Killing if and only if the Lie derivative of the orthonormal basis vectors is given by a rotation:
{{\cal L}_\vec{\xi}} \, \vec{e_C} = B_C{}^D \vec{e_D}
with
B_C{}^D = - B^D{}_C
antisymmetric in its indices iff \vec{\xi} is Killing. This is even neater using Clifford algebra. Writing the set of three orthonormal basis vectors as a Clifford vector valued vector field,
\vec{e} = \vec{e_A} \sigma^A = \sigma^A (e^-_A)^i \vec{\partial_i}
a vector field is Killing iff
{{\cal L}_\vec{\xi}} \, \vec{e} = B \times \vec{e}
for some Clifford bivector field, B.

Now, since we have two sets of three vector fields which we've said are Killing, we need to pick a frame (and hence a metric) such that we weren't lying about that! A clear winner leaps out at us. Since, as you found,
{{\cal L}_\vec{\xi_A}} \, \vec{\xi'_B} = 0
the best choice for a set of orthonormal basis vectors is simply the set of symmetry vectors that had the wrong sign for the commutation relations between them:
\vec{e_B} = \vec{\xi'_B} = \vec{\partial_i} \left( \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i ( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} ) - \epsilon_{Bik} x^k \right)
And, as a bonus, we already know the frame 1-forms corresponding to these vectors, since we calculated them first.

OK, once you believe all this, which may take a while, I'll have three "homework" questions for you:
1) Are the set of three \vec{\xi'_B} also Killing, even though we've chosen them as our orthonormal basis vector fields? (Why?)
2) What is the metric, g_{ij}, corresponding to this choice of orthonormal basis vectors?
3) Would the metric have been different if we had chosen to use \vec{\xi_B} as the orthonormal basis vectors?
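As a numerical sanity check that we weren't being lied to (a sketch: the frame coefficients are taken from the inverse Killing field of post #75, and all derivatives are finite differences), the Lie derivative of the induced metric along all six vector fields does come out zero:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def frame(x):
    """Frame coefficients (e_i)^A, taken from the inverse field of post #75."""
    r = np.linalg.norm(x); s, c = np.sin(r), np.cos(r)
    e = np.zeros((3, 3))
    for i in range(3):
        for A in range(3):
            e[i, A] = ((i == A) * s * c / r
                       + x[i] * x[A] * (1 / r**2 - s * c / r**3)
                       - eps[i, :, A] @ x * s**2 / r**2)
    return e

def metric(x):
    e = frame(x)
    return e @ e.T                     # g_ij = (e_i)^A (e_j)^B delta_AB

def xi(B, x, sign):
    """The two families of symmetry vector fields (sign = +1 or -1)."""
    r = np.linalg.norm(x)
    return (np.eye(3)[B] * r * np.cos(r) / np.sin(r)
            + x[B] * x * (1 / r**2 - np.cos(r) / (r * np.sin(r)))
            + sign * eps[B] @ x)

def lie_g(B, x, sign, h=1e-6):
    """(L_xi g)_ij = xi^k d_k g_ij + g_kj d_i xi^k + g_ik d_j xi^k."""
    g, v = metric(x), xi(B, x, sign)
    Dg = np.zeros((3, 3, 3))           # Dg[k] = d_k g_ij
    Dv = np.zeros((3, 3))              # Dv[i, k] = d_i xi^k
    for k in range(3):
        dx = np.zeros(3); dx[k] = h
        Dg[k] = (metric(x + dx) - metric(x - dx)) / (2 * h)
        Dv[k] = (xi(B, x + dx, sign) - xi(B, x - dx, sign)) / (2 * h)
    return np.einsum('k,kij->ij', v, Dg) + Dv @ g + g @ Dv.T

x0 = np.array([0.3, -0.5, 0.4])
print(all(np.allclose(lie_g(B, x0, sign), 0, atol=1e-4)
          for sign in (+1, -1) for B in range(3)))   # True: all six are Killing
```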
 
  • #91
garrett said:
Ah, yes, mathematicians often write a vector operating on a function as \vec{v}f = v^i \partial_i f. I do not write it that way. Instead, I would write the same thing as
\vec{v} \underrightarrow{\partial} f = v^i \partial_i f
I like to have conservation of arrows in my notation. :)
I am a bit confused. I understand the desire to conserve arrows and the fact that a function is a 0-form, but I would have expected you to write this as
\vec{v} \underrightarrow{f} = v^i \partial_i f
no?!:confused:

In the way you wrote it, what do you mean by {\vec v} ? Normally one would write {\vec v} = v_i \partial_i but your partial derivatives are part of the 0-form?!?

Thanks
 
  • #92
nrqed said:
I am a bit confused. I understand the desire to conserve arrows and the fact that a function is a 0-form but I would have expected you to write this as
\vec{v} \underrightarrow{f} = v^i \partial_i f
no?!:confused:

In the way you wrote it, what do you mean by {\vec v} ? Normally one would write {\vec v} = v_i \partial_i but your partial derivatives are part of the 0-form?!?

Let me give you some of the cast of characters:
coordinate basis vectors:
\vec{\partial_i}
coordinate basis 1-forms:
\underrightarrow{dx^i}
partial derivative operator with respect to a coordinate:
\partial_i

OK, with those guys, we can build vectors:
\vec{v} = v^i \vec{\partial_i}
forms:
\underrightarrow{f} = f_i \underrightarrow{dx^i}
and the exterior derivative operator:
\underrightarrow{\partial} = \underrightarrow{dx^i} \partial_i

There is a contraction rule between basis vectors and basis forms:
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j

That's it!

Now, for some examples. A vector contracted with a 1-form:
\vec{v} \underrightarrow{f} = v^i f_i
The exterior derivative of a 1-form:
\underrightarrow{\partial} \underrightarrow{f} = \underrightarrow{dx^i} \underrightarrow{dx^j} \partial_i f_j = \underrightarrow{\underrightarrow{F}}
And the derivative of a 1-form along a vector, obtained by first contracting the vector with the exterior derivative:
( \vec{v} \underrightarrow{\partial} ) \underrightarrow{f} = ( v^i \partial_i f_j ) \underrightarrow{dx^j}

Happy?
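A tiny sympy sketch of those last two examples, with made-up components f_i: the coefficients that the antisymmetric basis \underrightarrow{dx^i} \underrightarrow{dx^j} picks out are F_{ij} = \partial_i f_j - \partial_j f_i, and applying \underrightarrow{\partial} twice to a 0-form gives zero.

```python
import sympy as sp

x = sp.symbols('x1:4')
f = [x[1] * x[2], x[0]**2, sp.sin(x[0] * x[1])]        # made-up 1-form components

# F_ij = d_i f_j - d_j f_i, the components the antisymmetric basis picks out:
F = sp.Matrix(3, 3, lambda i, j: sp.diff(f[j], x[i]) - sp.diff(f[i], x[j]))
print(sp.simplify(F + F.T) == sp.zeros(3, 3))          # antisymmetric: True

# for f = (exterior derivative of a 0-form phi), F vanishes identically:
phi = x[0] * sp.exp(x[1]) + x[2]**3
df = [sp.diff(phi, xi) for xi in x]
F2 = sp.Matrix(3, 3, lambda i, j: sp.diff(df[j], x[i]) - sp.diff(df[i], x[j]))
print(sp.simplify(F2) == sp.zeros(3, 3))               # d d phi = 0: True
```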
 
  • #93
garrett said:
OK, so at this point we've figured out quite a bit about our group manifold.

... even though we haven't said what the metric is. Time to change that!

Can you explain how to talk about the metric in your notation?

The metric tends to be formulated as g = g_{ij} dx^i \otimes dx^j; however, your \underrightarrow{dx^i} basis elements are antisymmetric, not symmetric. How do you define this?

Mathematically, a vector field is Killing iff the Lie derivative of the metric with respect to the vector field is zero.

Of course, operating on a scalar, the Lie derivative is equivalent to the covariant derivative, reducing to the expression I mentioned early on, \xi_{i;j} + \xi_{j;i} = 0. Can you give me some hints as to how to derive the expression for the Lie derivative acting on the vielbein? (We've not mentioned covariant derivatives yet!)
 
  • #94
Taoy said:
Can you explain how to talk about the metric in your notation? The metric tends to be formulated as g = g_{ij} dx^i \otimes dx^j; however, your \underrightarrow{dx^i} basis elements are antisymmetric, not symmetric. How do you define this?

Sure. From my point of view, the vielbein is more fundamental than the metric. The vielbein -- the set of orthonormal vectors defined at each point -- includes information about its orientation and about its scale. The metric is just a convenient way of describing just the scale information. It doesn't, by itself, exist as a well defined geometric object in my way of thinking, but just as a useful bit of scale information related to the vielbein. Here's how it pops up:

In order to compare two vectors at a point, one can map them into a local inertial frame using the frame (inverse vielbein) and then take their dot product. This is where defining the frame as a Clifford vector valued 1-form comes in, as we can write:
(\vec{u}\underrightarrow{e}) \cdot (\vec{v}\underrightarrow{e}) = u^i (e_i)^\alpha v^j (e_j)^\beta \gamma_\alpha \cdot \gamma_\beta = u^i v^j \left( (e_i)^\alpha (e_j)^\beta \eta_{\alpha \beta} \right) = u^i v^j g_{ij}
which shows exactly how the metric pops up.
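As a quick check of that index bookkeeping (a sketch with hypothetical frame coefficients and Minkowski \eta):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
e = np.random.rand(4, 4) + 2 * np.eye(4)       # made-up (e_i)^alpha
g = np.einsum('ia,jb,ab->ij', e, e, eta)       # g_ij = (e_i)^a (e_j)^b eta_ab

u, v = np.random.rand(4), np.random.rand(4)
lhs = np.einsum('i,ia,j,jb,ab->', u, e, v, e, eta)   # (u e) . (v e)
print(np.isclose(lhs, u @ g @ v))                    # equals u^i v^j g_ij: True
```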

Of course, operating on a scalar, the Lie derivative is equivalent to the covariant derivative, reducing to the expression I mentioned early on, \xi_{i;j} + \xi_{j;i} = 0. Can you give me some hints as to how to derive the expression for the Lie derivative acting on the vielbein? (We've not mentioned covariant derivatives yet!)

Well, the definition of the Lie derivative of a vielbein vector is
{{\cal L}_\vec{\xi}} \, \vec{e_\alpha} = (\vec{\xi} \underrightarrow{\partial}) \vec{e_\alpha} - ( \vec{e_\alpha} \underrightarrow{\partial}) \vec{\xi} = (\xi^i \partial_i (e_\alpha)^j - (e_\alpha)^i \partial_i \xi^j) \vec{\partial_j}
And the Lie derivative of Clifford basis elements is zero. That lets you calculate the Lie derivative of \vec{e}. If you'd like, I could derive why the Lie derivative is what it is, in terms of vector induced flows, but it's kind of involved. I'll work on putting that derivation up on the wiki though.
 
  • #95
garrett said:
Let me give you some of the cast of characters:
coordinate basis vectors:
\vec{\partial_i}
coordinate basis 1-forms:
\underrightarrow{dx^i}
partial derivative operator with respect to a coordinate:
\partial_i

OK, with those guys, we can build vectors:
\vec{v} = v^i \vec{\partial_i}
forms:
\underrightarrow{f} = f_i \underrightarrow{dx^i}
and the exterior derivative operator:
\underrightarrow{\partial} = \underrightarrow{dx^i} \partial_i

There is a contraction rule between basis vectors and basis forms:
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j

That's it!

Now, for some examples. A vector contracted with a 1-form:
\vec{v} \underrightarrow{f} = v^i f_i
The exterior derivative of a 1-form:
\underrightarrow{\partial} \underrightarrow{f} = \underrightarrow{dx^i} \underrightarrow{dx^j} \partial_i f_j = \underrightarrow{\underrightarrow{F}}
And the derivative of a 1-form along a vector, obtained by first contracting the vector with the exterior derivative:
( \vec{v} \underrightarrow{\partial} ) \underrightarrow{f} = ( v^i \partial_i f_j ) \underrightarrow{dx^j}

Happy?

Yes, it makes perfect sense. Sorry, I was not trying to be difficult. It is clear now and it's a nice notation (I had assumed that the single under arrow was for any n-form, but now I see that there are n arrows for an n-form). It's the first time that I've seen the exterior derivative explicitly assigned a specific symbol like your \underrightarrow{\partial}; in the conventional (but not as clear) notation, the "d" is always applied to something and never presented on its own. But I do like your notation much more.

Thank you for the clarification. It's appreciated.

Patrick
 
  • #96
Garrett, can you explain please what is the difference between a Killing vector field and a Killing form?
I mean, how to use a Killing form to find a Killing vector field?
 
  • #97
Mehdi_ said:
Garrett, can you explain please what is the difference between a Killing vector field and a Killing form?
I mean, how to use a Killing form to find a Killing vector field?

They aren't directly related -- just named after the same guy.
A Killing vector field gives a flow on a manifold that preserves its geometric shape.
A Killing form comes from the classifications of Lie groups and their structure constants.
They're sort of related, but not directly, and it would take a lot of explaining and abstraction to get to Killing forms -- so I advise you not to worry about them yet.
 
  • #98
Glad you like it, Patrick. :)
 
  • #99
Hey Joe, you figure out the metric yet?
If not, I'll post it in the morning.
 
  • #100
garrett said:
Hey Joe, you figure out the metric yet?
If not, I'll post it in the morning.


Hey Garrett, no not yet; I'm off to Berlin today for a week (to the Marcel Grossman conference), so I won't get to post anything for a week or so. BUT I'm expecting to have it all worked out by next weekend :). Feel free to post the next bit though if you want; I'll catch up.

BTW, my last question was ill formed. What I meant to ask was: if a vector field is Killing iff the Lie derivative of the metric with respect to the vector field is zero, how does that lead to the condition on the Lie derivative of the vielbein that you quoted? I've scratched my head, but not derived it yet. However, that doesn't mean that I can't derive it! :) (... but if it's tricky then a hint would be helpful :).

Joe
 