Geometry of Symmetric Spaces & Lie Groups

  • #31
Hey Mehdi, nice question. Using this angular parameterization, with a constant r, we have a map from S2 into SU(2). When I show the map from SU(2) to SO(3), (rotations) we'll see that this S2 corresponds to the orientation of the plane of the rotation, and the r value corresponds to the rotation amplitude, or angle.
 
  • #32
rotations

Alright, we have finally come around to rotations. Let's make a rotation using Clifford algebra. First, what do you get when you cross a vector with a bivector? Starting with an arbitrary vector,
v = v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3
and, for example, a "small" bivector in the xy plane,
<br /> B = \epsilon \sigma_{12}<br />
their cross product gives
v \times B = \epsilon ( v^1 \sigma_1 \times \sigma_{12} + v^2 \sigma_2 \times \sigma_{12} + v^3 \sigma_3 \times \sigma_{12})
= \epsilon ( v^1 \sigma_2 - v^2 \sigma_1)
This new vector, v \times B, is perpendicular to v, and in the plane of B. This "small" vector is the one that needs to be added to v in order to rotate it a small amount counter-clockwise in the plane of B:
v' \simeq v + v \times B \simeq (1 + \frac{1}{2} \epsilon \sigma_{12}) v (1 - \frac{1}{2} \epsilon \sigma_{12})
where the "\simeq" holds to first order in \epsilon. Infinitesimal rotations like these can be combined to give a finite rotation,
v' = \lim_{N \rightarrow \infty} (1+ \frac{1}{N} \frac{1}{2} \theta \sigma_{12}) v (1- \frac{1}{N} \frac{1}{2} \theta \sigma_{12})
= e^{\frac{1}{2} \theta \sigma_{12}} v e^{-\frac{1}{2} \theta \sigma_{12}} = U v U^-
using the "limit" definition for the exponential. This is an exact expression for the rotation of a vector by a bivector. In three dimensions an arbitrary bivector, B, can be written as
B = \theta b
an amplitude, \theta, multiplying a unit bivector encoding the orientation, bb=-1. The exponential can then be written using Joe's expression for exponentiating a bivector:
U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)
And an arbitrary rotation in any plane can be expressed efficiently as v' = UvU^-. For example, for a rotation of an arbitrary vector by B=\theta \sigma_{12}, the result (using some trig identities) is:
v' = e^{\frac{1}{2} B} v e^{-\frac{1}{2} B}
= (\cos(\frac{1}{2} \theta) + \sigma_{12} \sin(\frac{1}{2} \theta)) (v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3) (\cos(\frac{1}{2} \theta) - \sigma_{12} \sin(\frac{1}{2} \theta))
= (v^1 \cos(\theta) + v^2 \sin(\theta) ) \sigma_1 + (v^2 \cos(\theta) - v^1 \sin(\theta)) \sigma_2 + v^3 \sigma_3
This is widely considered to be pretty neat, and useful as a general method of expressing and calculating rotations.
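If you want to check this numerically, here is a minimal sketch (not part of the original argument) that takes the Pauli matrices as one concrete representation of the \sigma_i and verifies the rotation formula above, including the fact that -U gives the same rotation as U:

```python
import numpy as np

# Pauli matrices as a concrete matrix representation of the Clifford vectors sigma_1,2,3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

sigma12 = s1 @ s2                 # unit bivector of the xy plane, squares to -1
theta = 0.7                       # arbitrary rotation angle
v = np.array([0.3, -1.2, 2.5])    # arbitrary components v^1, v^2, v^3

# U = exp(theta sigma12 / 2) = cos(theta/2) + sigma12 sin(theta/2)
U = np.cos(theta / 2) * I2 + np.sin(theta / 2) * sigma12

v_cl = v[0] * s1 + v[1] * s2 + v[2] * s3     # v as a Clifford (matrix) element
vp_cl = U @ v_cl @ np.linalg.inv(U)          # v' = U v U^-

# read off the components v'^i = (1/2) tr(v' sigma_i)
vp = np.array([np.trace(vp_cl @ s).real / 2 for s in (s1, s2, s3)])

# compare with the explicit trig result quoted above
expected = np.array([v[0] * np.cos(theta) + v[1] * np.sin(theta),
                     v[1] * np.cos(theta) - v[0] * np.sin(theta),
                     v[2]])
print(np.allclose(vp, expected))             # True

# -U gives exactly the same rotation (a preview of the double cover below)
vp_cl_minus = (-U) @ v_cl @ np.linalg.inv(-U)
print(np.allclose(vp_cl, vp_cl_minus))       # True
```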

Now, we already established that elements of the group SU(2) may be represented as exponentials of bivectors, so these U are SU(2) elements! The "double cover" relationship between SU(2) and rotations (the group SO(3)) is contained in the expression
v' = UvU^-
Namely, two different SU(2) elements, U and -U, give the same rotation. That's all there is to it.

To be painfully explicit, it is possible to relate all this to rotation matrices. A rotation matrix is a 3x3 special orthogonal matrix that transforms one set of basis vectors into another. This equates to the Clifford way of doing a rotation as:
\sigma'_i = L_i{}^j \sigma_j = U \sigma_i U^-
For any rotation encoded by U (which, as the exponential of a bivector, also represents an arbitrary SU(2) element), the corresponding rotation matrix elements may be explicitly calculated using the trace as
L_i{}^j = \left\langle U \sigma_i U^- \sigma_j \right\rangle
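Continuing the numerical sketch above, and assuming the bracket \langle \rangle is implemented as the half-trace in the 2x2 Pauli representation (consistent with how it is used later in the thread), the trace formula does produce a special orthogonal matrix:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s1, s2, s3]
I2 = np.eye(2, dtype=complex)

theta = 0.7
U = np.cos(theta / 2) * I2 + np.sin(theta / 2) * (s1 @ s2)
Uinv = np.linalg.inv(U)

# L_i^j = < U sigma_i U^- sigma_j >, taking <M> = (1/2) tr(M) in this representation
L = np.array([[np.trace(U @ sig[i] @ Uinv @ sig[j]).real / 2
               for j in range(3)] for i in range(3)])

print(np.round(L, 3))                       # a rotation in the xy plane
print(np.allclose(L @ L.T, np.eye(3)))      # orthogonal: L L^T = 1 ...
print(np.isclose(np.linalg.det(L), 1.0))    # ... with det L = 1
```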

Using Clifford algebra, you think of a rotation as being in a plane (or planes), described by a bivector. This generalizes very nicely to dimensions higher than three, such as for Lorentz transformations and for rotations in Kaluza-Klein theory.

It's a little odd if you haven't seen it before -- any questions?
 
  • #33
Garrett, it is beautiful... I have no questions... it is well explained and therefore easy to understand.

You have successfully established a relation between SU(2), SO(3), rotation matrices, and the Clifford algebra Cl_{0,2}(R)... (the Spin(3) group is the universal covering group of SO(3)?! and there are accidental isomorphisms with SU(2) and Sp(1)?!).

Maybe one day you could do the same with another group, let's say the symplectic group and its relation to Clifford algebras (using Lagrangians or Hamiltonians to make the examples more explicit)... Garrett, it's only a wish... ;)
 
  • #34
Originally Posted by garrett:
The exponential can then be written using Joe's expression for exponentiating a bivector:
U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)
And an arbitrary rotation in any plane can be expressed efficiently as v' = U v U^-.
Can we then say that U is a rotor?
If U is a rotor, we can then say that this rotor is an element of the SU(2) group.
 
  • #35
Yes, you can call it a rotor, but that's kind of an old term. The more modern description is that it's an element of the Spin group, and in this 3D case, Spin(3)=SU(2).

Here's a wikipedia reference (good reading!):
http://en.wikipedia.org/wiki/Spin_group
 
  • #36
Hi Garrett

Can we then say that a quaternion of norm 1 belongs to the SU(2) group?
 
  • #37
I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
 
  • #38
I messed up a couple of expressions in the last math post.

First, all instances of "v \times B" should be "B \times v", with a corresponding change of sign where relevant.

Second, the expression for the limit of many infinitesimal rotations should be
\lim_{N \rightarrow \infty} \left( 1+ \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N v \left( 1- \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N

Apologies.
 
  • #39
Mehdi_ said:
Can we then say that a quaternion of norm 1 belongs to the SU(2) group?
Yes.

The three basis quaternions are the same as the SU(2) generators, which are the same as the Cl_3 bivectors. The quaternion and/or SU(2) group element, U, is represented by coefficients multiplying these, plus a scalar. And U satisfies UU^\dagger = 1.
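A quick numerical illustration of this (my own sketch; the sign convention q_a = -i\sigma_a for the basis quaternions is just one common choice of matrix representation):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# one common matrix representation of the basis quaternions (the sign is a convention)
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3
I2 = np.eye(2, dtype=complex)

# quaternion multiplication table: i^2 = j^2 = k^2 = -1, i j = k
print(np.allclose(qi @ qi, -I2), np.allclose(qi @ qj, qk))

# a quaternion of norm 1 ...
a, b, c, d = 0.5, 0.5, -0.5, 0.5           # a^2 + b^2 + c^2 + d^2 = 1
U = a * I2 + b * qi + c * qj + d * qk

# ... is an SU(2) element: unitary with unit determinant
print(np.allclose(U @ U.conj().T, I2))     # U U^dagger = 1
print(np.isclose(np.linalg.det(U), 1.0))   # det U = 1
```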

Mehdi_ said:
I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
Heh. Read my last paper. :)
But a discussion of spinors shouldn't go in this thread (yet). Maybe start another one?
 
  • #40
garrett said:
Great!

The related wiki page is here:

http://deferentialgeometry.org/#[[vector-form algebra]]

Explicitly, every tangent vector gets an arrow over it,
\vec{v}=v^i \vec{\partial_i}
and every 1-form gets an arrow under it,
\underrightarrow{f} = f_i \underrightarrow{dx^i}
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
so
\vec{v} \underrightarrow{f} = v^i f_i

Sorry for taking so much time to absorb all of this, but although I have heard all the terms mentioned in this thread, I am still learning this stuff.

A quick question: what do you mean by "the vectors and forms all anticommute with one another"?
I thought that one could think of "feeding" a vector to a one-form or vice versa and that the result was the same in both cases. I guess I don't see where anticommutation might arise in that situation. Could you explain this to me?

Thanks again for a great thread!

Patrick
 
  • #41
"These vectors and forms all anti-commute with one another" should mean:
\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i
\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i

That means that order is important... it is a non-commutative algebra.
 
  • #42
\gamma_1 and \gamma_2 are perpendicular vectors.

We start with a vector v equal to \gamma_1 and form another, v', by adding a tiny displacement vector in a perpendicular direction:

v=\gamma_1 and v'=\gamma_1+\epsilon\gamma_2

Similarly, we start now with a vector v equal to \gamma_2 and form another, v', by adding a tiny displacement vector in a perpendicular direction:

v=\gamma_2 and v'=\gamma_2-\epsilon\gamma_1

The minus sign occurs because the bivectors \gamma_1\gamma_2 and \gamma_2\gamma_1 induce rotations in opposite directions.

Let's construct a rotor r as follows:
r=vv'=\gamma_1(\gamma_1+\epsilon\gamma_2)=(1+\epsilon\gamma_1\gamma_2)

Let's see what happens when we use this rotor to rotate something with N copies of an infinitesimal rotation:

v' = {(1+\epsilon\gamma_2\gamma_1)}^N v {(1+\epsilon\gamma_1\gamma_2)}^N

But in the limit:

{(1+\epsilon\gamma_1\gamma_2)}^N = \exp(N\epsilon\gamma_1\gamma_2)
= \exp(\theta\gamma_1\gamma_2) = 1+\theta\gamma_1\gamma_2-\frac{1}{2}{\theta}^2-...

and we find that:

r(\theta)=\cos(\theta)+\gamma_1\gamma_2\sin(\theta)

which is similar to Joe's expression for exponentiating a bivector:

U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)

Even if in Joe's expression we have (\frac{1}{2}\theta), the two equations are similar because the rotor angle is always half the rotation angle...
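The limit is easy to check numerically. Here is a small sketch (mine, not from the post above), representing the unit bivector \gamma_1\gamma_2 by any real 2x2 matrix that squares to -1, and holding \theta = N\epsilon fixed as N grows:

```python
import numpy as np

# represent the unit bivector gamma_1 gamma_2 by any matrix that squares to -1
B = np.array([[0.0, -1.0], [1.0, 0.0]])    # B @ B = -identity
I2 = np.eye(2)

theta = 1.3
exact = np.cos(theta) * I2 + np.sin(theta) * B    # exp(theta gamma_1 gamma_2)

for N in (10, 100, 10000):
    eps = theta / N                                # theta = N * eps held fixed
    approx = np.linalg.matrix_power(I2 + eps * B, N)
    print(N, np.max(np.abs(approx - exact)))       # error shrinks as N grows
```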
 
  • #43
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that, and the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
 
  • #44
Mehdi_ said:
"These vectors and forms all anti-commute with one another" should mean:
\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i
\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i

That means that order is important... it is a non-commutative algebra.

Nope, the v^i and f_i are scalar coefficients -- they always commute with everything. (Err, unless they're Grassmann numbers, but we won't talk about that...)

Mehdi's other post was fine.
 
  • #45
Garrett... oops... that's true...
 
  • #46
garrett said:
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that, and the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
:eek: I had never realized that!

Thank you for explaining this!

For the product of 1-forms, that's not surprising to me since I would assume a wedge product there.

But is a product of vector fields always understood in differential geometry, or is it an added structure? It seems to me that one could also introduce a symmetric product... What is the consistency condition that leads to this?

Also, I really did not know that "contracting" a one-form and a vector field depended on the order! I have always seen talk about "feeding a vector to a one-form" and getting a Kronecker delta, but I always assumed that one could equally well "feed" the one-form to the vector and get the *same* result. I had not realized that there is an extra sign. What is the consistency condition that leads to this?

Sorry for all the questions, but one thing that confuses me when learning stuff like this is differentiating what is imposed as a definition and what follows from consistency. I always wonder if a result follows from the need for consistency with preceding results or if it's a new definition imposed by hand. But I don't necessarily need to see the complete derivation; if I can just be told "this follows from this and that previous result", then I can work it out myself.

Thank you!
 
  • #47
Certainly. I need to stress this is my own notation, so it is perfectly reasonable to ask me to justify it. Also, it's entirely up to you whether you want to use it -- everything can be done equally well in conventional notation, after translation. (I have just come to prefer mine.)

The conventional notation for the inner product (a vector, \vec{v}, and a form, f, contracted to give a scalar) in Frankel and Nakahara etc. is
i_{\vec{v}} f = f(\vec{v})
which I would write as
\vec{v} \underrightarrow{f}
I will write the rest of this post using my notation, but you can always write the same thing with "i"'s all over the place and no arrows under forms.

Now, conventionally, there is a rule for the inner product of a vector with a 2-form. For two 1-forms, the distributive rule is
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = \left( \vec{a} \underrightarrow{b} \right) \underrightarrow{c} - \underrightarrow{b} \left( \vec{a} \underrightarrow{c} \right)
Using this rule, one gets, after multiplying it out:
\vec{e} \vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = - \vec{a} \vec{e} \left( \underrightarrow{b} \underrightarrow{c} \right)
which is the basis for my assertion that
\vec{e} \vec{a} = - \vec{a} \vec{e}
This sort of "tangent two vector" I like to think of as a loop, but that's just me being a physicist. ;)

So, now for the vector-form anti-commutation. Once again, keep in mind that you can do everything without ever contracting a vector from the right to a form -- this is just something I can do for fun. But, if you're going to do it, this expression should hold regardless of commutation or anti-commutation:
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = \left( \underrightarrow{b} \underrightarrow{c} \right) \vec{a}
and, analogously with the original distribution rule, that should equal:
= \underrightarrow{b} \left( \underrightarrow{c} \vec{a} \right) - \left( \underrightarrow{b} \vec{a} \right) \underrightarrow{c}
Comparing that with the result of the original distribution rule shows that we must have
\underrightarrow{b} \vec{a} = - \vec{a} \underrightarrow{b}
for all the equalities to hold true, since a vector contracted with a 1-form is a scalar and commutes with the remaining 1-form.

It won't hurt me if you don't like this notation. But do tell me if you actually see something wrong with it!
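For anyone who wants to see the consistency at the component level, here is a small sketch (mine, written in ordinary index notation rather than the arrow notation): build the 2-form from two 1-forms, apply the distributive rule, and check that feeding two vectors in opposite orders flips the sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a, e = rng.normal(size=n), rng.normal(size=n)    # two tangent vectors (components a^i, e^i)
b, c = rng.normal(size=n), rng.normal(size=n)    # two 1-forms (components b_i, c_i)

# the 2-form b c (wedge implied), with components (bc)_{jk} = b_j c_k - b_k c_j
bc = np.outer(b, c) - np.outer(c, b)

# interior product of a vector with the 2-form: (a (bc))_k = a^j (bc)_{jk}
i_a_bc = a @ bc

# the distributive rule: a (b c) = (a b) c - b (a c)
print(np.allclose(i_a_bc, (a @ b) * c - (a @ c) * b))     # True

# feeding two vectors in opposite orders flips the sign: e a (b c) = - a e (b c)
print(np.isclose(e @ (a @ bc), -(a @ (e @ bc))))          # True
```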
 
  • #48
garrett said:
By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

Hi Garrett, I'm a bit confused about this notation. What kind of product are you using here, and are these really vectors? How can we make this notation compatible with the geometric product between vectors?

Oh, wait, I guess that you're just making the assumption that both the vector and the co-vector basis are orthogonal.

I'm reading that your \vec{\partial_i} is a vector such that \vec{\partial_i} \cdot \vec{\partial_j} = \delta_{ij} | \vec{\partial_i} |^2. Is that right?
 
  • #49
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements \vec{\partial_i} and \underrightarrow{dx^i}, is completely independent of the algebra of Clifford elements, spanned by \gamma_\alpha, or, if you like, independent of all Lie algebra elements. By the algebras being independent, I mean that elements of one commute with all elements of the other.

For example, when we calculated the derivative of a group element (to get the Killing fields), we were calculating the coefficients of a Lie algebra valued 1-form:
\underrightarrow{d} g = \underrightarrow{dx^i} G_i{}^A T_A
The two sets of basis elements, \underrightarrow{dx^i} and T_A, live in two separate algebras.

The vector and form elements don't have a dot product, and I will never associate one with them. Some do, and call this a metric, but things work much better if you work with Clifford algebra valued forms, and use a Clifford dot product.

I might as well describe how this works...
 
  • #50
The link to the wiki notes describing the frame and metric is:

http://deferentialgeometry.org/#frame metric

but I'll cut and paste the main bits here.

Physically, at every manifold point a frame encodes a map from tangent vectors to vectors in a rest frame. It is very useful to employ the Clifford basis vectors as the fundamental geometric basis vector elements of this rest frame. The "frame", then, is a map from the tangent bundle to the Clifford bundle -- a map from tangent vectors to Clifford vectors -- and written as
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
It is a Clifford vector valued 1-form. Using the frame, any tangent vector, \vec{v}, on the manifold may be mapped to its corresponding Clifford vector,
\vec{v} \underrightarrow{e} = v^i \vec{\partial_i} \underrightarrow{dx^j} \left( e_j \right)^\alpha \gamma_\alpha = v^i \left( e_i \right)^\alpha \gamma_\alpha = v^\alpha \gamma_\alpha = v
This frame includes the geometric information usually attributed to a metric. Here, we can compute the scalar product of two tangent vectors at a manifold point using the frame and the Clifford dot product:
\left( \vec{u} \underrightarrow{e} \right) \cdot \left( \vec{v} \underrightarrow{e} \right) = u^\alpha \gamma_\alpha \cdot v^\beta \gamma_\beta = u^\alpha v^\beta \eta_{\alpha \beta} = u^i \left( e_i \right)^\alpha v^j \left( e_j \right)^\beta \eta_{\alpha \beta} = u^i v^j g_{ij}
with the frame coefficients and the Minkowski metric replacing the use of a metric, if desired. Using component indices, the "metric matrix" is
g_{ij} = \left( e_i \right)^\alpha \left( e_j \right)^\beta \eta_{\alpha \beta}
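As a toy example of this last formula (my own, hypothetical choice of frame): in 2D polar coordinates the orthonormal frame e^1 = dr, e^2 = r d\phi reproduces the familiar metric, with \eta taken to be the Euclidean identity for this Riemannian example.

```python
import numpy as np

# 2D polar coordinates (r, phi): orthonormal frame e^1 = dr, e^2 = r dphi
r = 2.5
e = np.array([[1.0, 0.0],     # frame coefficients (e_i)^alpha: rows are coordinates i, columns are alpha
              [0.0, r]])
eta = np.eye(2)               # Euclidean signature stands in for eta in this toy example

# g_ij = (e_i)^alpha (e_j)^beta eta_{alpha beta}
g = e @ eta @ e.T
print(g)                      # [[1, 0], [0, r^2]]  -- the usual polar-coordinate metric
```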

Using Clifford valued forms is VERY powerful -- we can use them to describe every field and geometry in physics.
 
  • #51
garrett said:
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements \vec{\partial_i} and \underrightarrow{dx^i}, is completely independent of the algebra of Clifford elements, spanned by \gamma_\alpha, or, if you like, independent of all Lie algebra elements. By the algebras being independent, I mean that elements of one commute with all elements of the other.

Forget the Lie algebra for the moment. I'm talking about the basis elements \partial_i \equiv \frac{d}{dx^i} and their dual one-forms. In your notation you put an arrow over the top, indicating that we are dealing with a complete vector, i.e. e_i \equiv \vec{\partial_i}. You then said that they obey an anti-commutation rule: e_i e_j = -e_j e_i.

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: e_i e_j = e_i \cdot e_j + e_i \wedge e_j, and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.
 
  • #52
garrett said:
The expression you calculated,
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it times its Hermitian conjugate and you'll get precisely 1.

Sure, I get that, but the series expansions we use are only valid for small x; for instance, substitute 4\pi into the series expansion and it doesn't work anymore...

Mehdi_ said:
It is related to the condition ({x^1})^2 + ({x^2})^2 + ({x^3})^2 = 1

Whilst we're here, where does the condition come from? I thought that g g^- might impose some condition on the x's, but it doesn't. Where does it come from? :)
 
  • #53
garrett said:
Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. (By the way, because of the way I defined \langle \rangle with a half in it, it's \langle T_i T_j T_k \rangle = \epsilon_{ijk}.) So, the correct expression for the inverse Killing vector field should be
\xi^-_i{}^B = - \langle \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B \rangle
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}

What happened to the x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) term?

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.
 
  • #54
garrett said:
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha

What kind of object is e_\alpha, and what kind of object is \gamma_\alpha?
Are you using upper and lower arrows purely to signify differential geometry objects? Why not arrows on the gamma too? I take it that this is a vector (as opposed to a dual vector)?
 
  • #55
The e_\alpha are the "legs" of the vierbein or frame: four orthonormal vectors based at a typical point of the manifold. I think the \gamma_\alpha are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).
 
  • #56
Taoy said:
I'm talking about the basis elements \partial_i \equiv \frac{d}{dx^i} and their dual one-forms. In your notation you put an arrow over the top, indicating that we are dealing with a complete vector, i.e. e_i \equiv \vec{\partial_i}. You then said that they obey an anti-commutation rule: e_i e_j = -e_j e_i.

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: e_i e_j = e_i \cdot e_j + e_i \wedge e_j, and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.

My justification for creating this algebra in which tangent vectors anti-commute is this: when you contract two tangent vectors with a 2-form, the sign changes depending on the order you do the contraction:
\vec{a} \vec{b} \left( \underrightarrow{c} \underrightarrow{f} \right) = - \vec{b} \vec{a} \left( \underrightarrow{c} \underrightarrow{f} \right)
This fact is standard differential geometry for the inner product of two tangent vectors with a 2-form. I merely elevate this fact to create an algebra out of it, and it motivates my notation. Since the two vectors are contracting with a 2-form, which is anti-symmetric, this "product" of two vectors is also necessarily anti-symmetric. If you like, you need not even consider it a product -- just two tangent vectors being fed to a 2-form in succession. :) That is the conventional interpretation.
 
  • #57
Taoy said:
Sure, I get that, but the series expansions we use are only valid for small x; for instance, substitute 4\pi into the series expansion and it doesn't work anymore...

I think the series expansion for the exponential is an exact equality as long as we keep all terms in the infinite series, which we do. I think it's correct for 4\pi, though these x's should be periodic variables, inside the range 0 to 2\pi.
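For what it's worth, this is easy to confirm numerically. Here is my own sketch, taking T_A = i \sigma_A as the generator representation (matching the "i times the Pauli matrices" convention mentioned later in the thread): the closed form agrees with the matrix exponential even for large x, and the result is unitary.

```python
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]                 # SU(2) generators, T_i = i sigma_i (an assumed convention)
I2 = np.eye(2, dtype=complex)

x = np.array([3.0, -7.0, 11.0])           # deliberately large coordinates
r = np.linalg.norm(x)

g_closed = np.cos(r) * I2 + sum(x[i] * T[i] for i in range(3)) * np.sin(r) / r
g_expm = expm(sum(x[i] * T[i] for i in range(3)))

print(np.allclose(g_closed, g_expm))                  # the closed form is exact, not just for small x
print(np.allclose(g_closed @ g_closed.conj().T, I2))  # and it is a unitary (SU(2)) element
```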

Whilst we're here, where does the condition come from? I thought that g g^- might impose some condition on the x's, but it doesn't. Where does it come from? :)

There is no restriction like that on the x coordinates -- best to forget he said that. (I believe he was making an analogy at the time.)
 
  • #58
Taoy said:
What kind of object is e_\alpha ?, and what kind of object is \gamma_\alpha ?
Are you using upper and lower arrows to purely signify differential geometry objects? Why not arrows on the gamma too; I take it that this is a vector (as apposed to a dual vector)?

\underrightarrow{e^\alpha} = \underrightarrow{dx^i} \left( e_i \right)^\alpha
is one of the orthonormal 1-form basis elements (indexed by \alpha), dual to the corresponding member of the basis of orthonormal tangent vectors.
\left( e_i \right)^\alpha
are the frame coefficients (aka vielbein coefficients).

\gamma_\alpha
is one of the Clifford algebra basis vectors.

Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as \gamma_\alpha . The number of arrows in an expression is "conserved" -- with upper arrows cancelling lower arrows, via vector-form contraction. If some object has a coordinate basis 1-form as part of it, which has an under arrow, then that object also gets an under arrow.
 
  • #59
Hi SA!

Have you looked around the new wiki yet? It was somewhat inspired by some comments we exchanged in another forum. :)

selfAdjoint said:
The e_\alpha are the "legs" of the vierbein or frame: four orthonormal vectors based at a typical point of the manifold.

Yes, but I'm careful to distinguish the vierbein and inverse vierbein, using arrow decorations. The orthonormal basis vectors are
\vec{e_\alpha} = \left(e^-_\alpha\right)^i \vec{\partial_i}
while the frame, or vierbein, 1-forms are
\underrightarrow{e^\alpha} = \left(e_i\right)^\alpha \underrightarrow{dx^i}
They satisfy
\vec{e_\alpha} \underrightarrow{e^\beta} = \left(e^-_\alpha\right)^i \vec{\partial_i} \left(e_j\right)^\beta \underrightarrow{dx^j} = \left(e^-_\alpha\right)^i \left(e_j\right)^\beta \delta_i^j = \left(e^-_\alpha \right)^i \left(e_i\right)^\beta = \delta_\beta^\alpha
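Numerically (a sketch of mine, reusing the toy polar-coordinate frame from the earlier example), the inverse vierbein coefficients are just the matrix inverse of the vierbein coefficients, which is exactly this duality relation:

```python
import numpy as np

r = 2.5
e = np.array([[1.0, 0.0],      # (e_i)^alpha for the polar frame e^1 = dr, e^2 = r dphi
              [0.0, r]])
e_inv = np.linalg.inv(e)       # (e^-_alpha)^i, the inverse vierbein coefficients

# (e^-_alpha)^i (e_i)^beta = delta
print(np.allclose(e_inv @ e, np.eye(2)))   # True
```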

I think the \gamma_\alpha are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).

It is a great choice of notation because they're Clifford vectors, which ARE represented by Dirac matrices. :) The same way SU(2) generators are represented by i times the Pauli matrices. You will do perfectly well thinking of \gamma_\alpha as Dirac matrices if you like. (But one doesn't need to -- the same way one can talk about the su(2) Lie algebra without explicitly using Pauli matrices.)

Good to see you over here.
 
  • #60
Taoy said:
What happened to the x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) term?

It's zero:
x^j x^k \epsilon_{jkB} = 0
since the symmetric product x^j x^k is contracted with the antisymmetric \epsilon_{jkB}.

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.

I'll go check.
 
