Explore Geometry of Symmetric Spaces & Lie Groups on PF

garrett
A few friends have expressed an interest in exploring the geometry of symmetric spaces and Lie groups as they appear in several approaches to describing our universe. Rather than do this over email, I've decided to bring the discussion to PF, where we may draw from the combined wisdom of its denizens.

Whenever possible, I will be adding and referring to material on my related personal research wiki:

http://deferentialgeometry.org

The meta-idea is to have a linear discussion and development (kind of a mini-course) here on PF, while building up the wiki as a reference and personal research tool for this and other physics inquiries. This will provide background material and (hopefully) enforce a consistent mathematical notation for the discussion. I'm hoping this dual resource use will provide the best of both mediums.

The subjects I/we would like to cover in this thread include:

Lie algebra generators, T_A, (using su(2) as simplest nontrivial example)
(matrix representation, Clifford bivectors, or other Clifford elements)
structure coefficients, (maybe Lie algebra roots, weights, and classification)
exponentiation, g = exp(x^A T_A), giving Lie group elements (SU(2) example)
coordinate change, double covers (SU(2) vs SO(3))
symmetry: Killing vector fields related to generator action
local Lie group manifold geometry -- frame, connection, and curvature
symmetric spaces
Kaluza-Klein theory
appearance and incorporation of Higgs scalars
Peter-Weyl theorem and its use for calculating harmonics

And wherever else the discussion takes us. I'd like things to be (perhaps painfully) specific and pedantic -- relying on explicit examples. I'd like to mostly play with SU(2) and SU(3) as the simplest non-trivial and practical examples. What I'm after is to fully describe these groups as manifolds in terms of their local geometry, symmetries, global geometry, harmonics, etc. And show how they can be incorporated into Kaluza-Klein theory.

I'll usually ask questions at the end of posts. Sometimes I'll know the answer, and sometimes I won't. These will either serve as "homework" (I'll wait 'till someone (PF'ers welcome) answers correctly before proceeding) or as open questions hopefully leading to me learning stuff. (If you want to play, it will help if you have Mathematica or Maple available to use -- or it may be possible to do things the hard way.) I'll also happily answer questions (or meta-questions) related to the posts, probably with references to the wiki.

I'm not sure exactly where this will go or how it will evolve as a discussion, but I thought it would be fun to try here on PF. Now I need to add the first post to this zero-eth one...
 
(1) Lie algebra to Lie group manifold

Look here for related background material:

http://deferentialgeometry.org/#[[Lie group]]

A Lie group, in contrast to any old group, is also a manifold. This manifold can be given a metric, and hence a geometry, such that the flows induced by the Lie algebra generators correspond to Killing vector fields. It will be good to work this out explicitly for a specific example.

The three Lie algebra generators for su(2) may be represented by 2x2 traceless anti-Hermitian matrices related to the Pauli matrices,
\begin{array}{ccc}
T_1 = i \sigma_{1}^{P} = \left[\begin{array}{cc} 0 & i\\ i & 0 \end{array}\right] &
T_2 = i \sigma_{2}^{P} = \left[\begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right] &
T_3 = i \sigma_{3}^{P} = \left[\begin{array}{cc} i & 0\\ 0 & -i \end{array}\right]
\end{array}
From the resulting multiplicative relation,
T_A \times T_B = \frac{1}{2} \left( T_A T_B - T_B T_A \right) = - \epsilon_{ABC} T_C
the structure coefficients for this Lie algebra are equal to minus the permutation symbol, C_{AB}{}^C = -\epsilon_{ABC}. Also, the trace of two multiplied su(2) generators provides a useful orthogonality relation,
\left< T_A T_B \right> = \frac{1}{2} \mathrm{Tr}(T_A T_B) = - \delta_{AB}
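If you want to check these relations on a computer (garrett suggests Mathematica or Maple later in the thread), here is a minimal sketch in Python instead, assuming numpy is available; it is only an illustration, not part of the original posts.

```python
# Verify the su(2) relations above: T_A x T_B = -eps_ABC T_C and <T_A T_B> = -delta_AB
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)       # Pauli matrices
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [1j * s1, 1j * s2, 1j * s3]                       # T_A = i sigma_A

eps = np.zeros((3, 3, 3))                             # permutation symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

for A in range(3):
    for B in range(3):
        cross = 0.5 * (T[A] @ T[B] - T[B] @ T[A])     # T_A x T_B
        rhs = -sum(eps[A, B, C] * T[C] for C in range(3))
        assert np.allclose(cross, rhs)                # = -eps_ABC T_C
        assert np.isclose(0.5 * np.trace(T[A] @ T[B]),
                          -(1.0 if A == B else 0.0))  # <T_A T_B> = -delta_AB
print("su(2) structure constants and trace orthogonality verified")
```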

Near the identity, elements of a Lie group can be approximately represented using coordinates multiplying the corresponding Lie algebra generators,
g \simeq 1 + x^i T_i = 1 + T
In which
T = x^i T_i = \left[\begin{array}{cc} i x^3 & i x^1 + x^2\\ i x^1 - x^2 & -i x^3 \end{array}\right]
is a general Lie algebra element labeled by coordinates, x^i. In fact, for SU(2), all group elements can be exactly represented by exponentiating Lie algebra elements,
g = e^T = 1 + T + \frac{1}{2!} T^2 + \frac{1}{3!} T^3 + \ldots
This gives all g as 2x2 coordinatized unitary matrices with unit determinant.
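As a quick sanity check before the homework, here is a hedged Python sketch (assuming numpy and scipy are available; not part of the original post) that exponentiates a random Lie algebra element numerically and confirms the result is unitary with unit determinant:

```python
# Check that exp(x^A T_A) lands in SU(2): unitary, det = 1
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]

x = np.random.randn(3)                          # arbitrary coordinates x^A
g = expm(sum(x[A] * T[A] for A in range(3)))

assert np.allclose(g @ g.conj().T, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(g), 1.0)        # unit determinant
print("exp(x^A T_A) is in SU(2) for x =", x)
```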

The first "homework" question is:

What is this g matrix, explicitly, in terms of these coordinates?

Some hints:

Define a new quantity,
r = \sqrt{(x^1)^2+(x^2)^2+(x^3)^2}
What do you get for arbitrary powers of T?
Use the series expansions for sin and cos of r.
Write the answer in terms of sin and cos of r, and T.
 
Ok, I'm going to start to digest this, piece by piece :). (This could get messy if we don't have sub-threads ;o) ).

Flows and Killing vector fields.

I've not seen this explicitly. When I first came across group manifolds and constructed metrics on them it was in terms of the left-invariant one-forms,
\lambda_a T_a = g^{-1} dg;
I guess that formally these are referred to as Maurer-Cartan forms. For an N dimensional group there are N of these which form a basis for the manifold,
ds^2 = \sum_i (\lambda_i)^2.
There should also be some vectors dual to these one-forms; how do these relate to the Killing vector fields?

In general there could be up to N(N+1)/2 Killing vectors (or is that Killing fields?), which come from the infinitesimal isometries of the metric
\xi_{a;b} + \xi_{b;a} = 0
whereas there are only ever going to be N one-forms. :/
 
Hi Joe, welcome to Physics Forums.

I was going to get into Killing vectors -- just as soon as someone writes down exactly what g is...

The key expression for calculating the Killing vector fields is going to be:
\vec{\xi_A} \underrightarrow{d} g = T_A g
This expresses the fact that the left action of the Lie algebra generator, T_A, on group elements is equal to the flow induced on the group manifold by the corresponding Killing vector field, \vec{\xi_A}. Once we know g in terms of coordinates, we can calculate its derivatives and inverse and find \vec{\xi_A} explicitly. I'll go ahead and do that as soon as someone writes down what g is, which should be easy if you play with it for a few minutes.

(If you'd rather have me write out the calculations, instead of tackling illustrative "homework" problems, let me know and I'll just do that.)

We will talk about symmetries of our group manifold. Typically, a group manifold is of higher dimension than it needs to be to have the symmetries corresponding to its Lie algebra. You can reduce this "waste" by "dividing" by proper subgroups to get a "symmetric space." We'll do all this. :)
 
Excellent news. I'm working on the form of g right now; I believe I've just seen the trick - even powers of T appear to have a nice form :).

I'm happy to work through the examples for the time being; there's nothing like doing it to learn it.

In the mean time could you perhaps clarify your use of the upper and lower arrows, I can guess their meaning, but it doesn't hurt to be explicit.
 
Great!

The related wiki page is here:

http://deferentialgeometry.org/#[[vector-form algebra]]

Explicitly, every tangent vector gets an arrow over it,
\vec{v} = v^i \vec{\partial_i}
and every 1-form gets an arrow under it,
\underrightarrow{f} = f_i \underrightarrow{dx^i}
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
so
\vec{v} \underrightarrow{f} = v^i f_i
And, in the expression I wrote in the post above,
\vec{\xi_A} \underrightarrow{d} g = \xi_A{}^i \partial_i g
in which \partial_i is the partial derivative with respect to the x^i coordinate.

The notation is slightly nonstandard, but looks good and works very well, even when extended to vectors and forms of higher order.
 
Ok, here's the answer to the "homework".

We are computing the explicit form of the group element for SU(2) in terms of the generators T_a in the given representation.

We use the power series expansions for the sine and cosine of r,

\begin{align*}
\cos(r) & = 1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots \\
\sin(r) & = r - \frac{1}{3!}r^3 + \frac{1}{5!}r^5 - \frac{1}{7!}r^7 + \dots
\end{align*}

and for the exponential of the matrix T:

\begin{align*}
e^T & = I + T + \frac{1}{2!}T^2 + \frac{1}{3!}T^3 + \frac{1}{4!}T^4 + \frac{1}{5!}T^5 + \dots \\
& = \left(I + \frac{1}{2!}T^2 + \frac{1}{4!}T^4 + \dots\right) + \left(T + \frac{1}{3!}T^3 + \frac{1}{5!}T^5 + \dots \right)
\end{align*}

where I is the identity matrix.

We observe that T T = -r^2 I, and that therefore even powers of T take the form

T^{2n} = (-1)^n r^{2n} I.

We can substitute this back into the expansion for the exponential to obtain:

\begin{align*}
e^T & = I \left(1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots\right) + T \left(1 - \frac{1}{3!}r^2 + \frac{1}{5!}r^4 - \frac{1}{7!}r^6 + \dots \right) \\
& = I \cos(r) + \frac{1}{r} T \sin(r)
\end{align*}
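A numerical cross-check of this closed form is easy to script; the following is a hedged Python sketch (assuming numpy and scipy are available; not part of the original post), comparing the series exponential against I cos(r) + T sin(r)/r at a random point:

```python
# Compare exp(x^i T_i) computed by expm with the closed form above
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]

x = np.random.randn(3)
r = np.linalg.norm(x)
Tx = sum(x[i] * T[i] for i in range(3))          # T = x^i T_i

g_series = expm(Tx)                              # brute-force exponential
g_closed = np.cos(r) * np.eye(2) + Tx * np.sin(r) / r

assert np.allclose(g_series, g_closed)
assert np.allclose(g_closed @ g_closed.conj().T, np.eye(2))  # g g^dagger = 1
print("closed form for g verified at x =", x)
```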
 
(2) Killing vector fields

Exactly right. So, the matrix expression for an SU(2) element as a function of SU(2) manifold coordinates is
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
Since it's an SU(2) element, it has unit determinant and its inverse is its Hermitian conjugate:
g^- = \cos(r) - x^i T_i \frac{\sin(r)}{r}

The next thing is to understand the symmetries of the manifold. We can associate a symmetry, or Killing vector field, \vec{\xi_A}, with the flow induced by each Lie algebra generator acting from the left:
\xi_A{}^i \partial_i g = T_A g
There are also Killing vector fields associated with generators acting from the right:
\xi'_A{}^i \partial_i g = g T_A
Notice that the Lie algebra is necessarily the same as the left Killing vector / symmetry algebra under the Lie derivative:
(T_A T_B - T_B T_A) g = ( \xi_A{}^i \partial_i \xi_B{}^j \partial_j - \xi_B{}^i \partial_i \xi_A{}^j \partial_j ) g
= C_{ABC} T_C g = ( L_{\vec{\xi_A}} \vec{\xi_B} ) \underrightarrow{d} g = C_{ABC} \vec{\xi_C} \underrightarrow{d} g
The sign of the structure coefficients swaps for the "right acting" Killing vector field algebra.

Now I'll go ahead and calculate the set of three "left acting" Killing vector fields over the group manifold. Multiplying the symmetry equation by the inverse group element gives:
\xi_A{}^i ( \partial_i g ) g^- = T_A
The three Killing vector fields each have three components, so \xi_A{}^i is a square matrix that can be inverted and multiplied in to give
( \partial_i g ) g^- = \xi^-_i{}^A T_A
(Note: If we consider this as an equation relating Lie algebra valued 1-forms, it's
( \underrightarrow{d} g ) g^- = \underrightarrow{\xi^-}^A T_A
which we'll use later.) We'll next multiply both sides by T_B and use the orthogonality of our Pauli matrix generators under the matrix trace to get the inverse Killing vector matrix all by itself on one side:
\xi^-_i{}^B = - \xi^-_i{}^A \left< T_A T_B \right> = - \left< ( \partial_i g ) g^- T_B \right>
So now we can just calculate that out explicitly, which is made easy by the nice form of g you found:
\xi^-_i{}^B = - \left< \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j \left( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3} \right) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B \right>
The Pauli matrices are traceless, so only a few terms will survive the trace, with the generator orthogonality under the trace used again to give
\xi^-_i{}^B = \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B \left( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} \right)
Inverting this matrix (ow, my head! Mathematica helped.) gives the matrix of Killing vector fields over the SU(2) manifold:
\xi_A{}^i = \delta_{iA} \frac{r}{\sin(r)\cos(r)} + x^A x^i \left( \frac{1}{r^2} - \frac{1}{r \sin(r) \cos(r)} \right)
These are the components of the three Killing vector fields over the group manifold associated with the left action of Lie algebra generators on group elements.

Something interesting to note: In this whole post, we never had to use the matrix representation of the generators -- all we needed were the commutation relations and orthogonality under the trace. In fact, if the generators, T_A, are thought of as Clifford algebra bivectors, everything we've done works out exactly the same way, without ever looking at a Pauli matrix explicitly. The trace operator, <>, is the same (up to a multiplicative factor equal to the matrix dimension) as the Clifford algebra "scalar part" operator. In the next post I can talk about this Clifford algebra stuff (and rotations and double covers) or go on to talk about the frame, metric, and connection on the group manifold. I'll get to the Clifford algebra stuff soon anyway, but it's your choice what we do next. Clifford and rotations -- or metric, frame, and connection?

So... the next "homework" is...
1) Make sure I didn't mess this calculation up anywhere. ;)
2) What, explicitly, are the other three Killing vector fields, \vec{\xi&#039;_A} associated with the right action of the generators?
3) What would you like to see next: Clifford algebra and rotations, or the group manifold metric and geometry?

(This post represents a bit of work (my whole evening, in fact) so feel free to ask questions about it for a bit. And if we can't get the "right corresponding" Killing fields tomorrow, I'll try to do it so we can move on.)
 
Where does the expression \xi_A{}^i \partial_i g = T_A g come from, i.e. how do we derive it? Also, there appears to be an implicit assumption that there are as many vector fields as there are generators, that is, i and A run over the same indices. Why is this obvious? In general there could be up to N(N+1)/2 Killing vector fields, whereas here we have exactly N.

Also, it looks to me, in Geometric Algebra language, that the left hand side operator is something like \xi_A \cdot \nabla, which is a scalar; however we know that T_A is going to be a bivector... what's going on here?

Other thoughts that come to mind are:

Group element parameters vs. coordinates: the x^i parametrise the elements of the group, and we can obviously define these Killing fields in terms of them, so they can also be considered coordinates on the manifold.

r is obviously the length of a vector, where the x^i are coordinates in an orthonormal frame. However, the appearance of the sine and cosine of this length is a mystery to me, raising the question "When is the length of a vector the same as an angle?".

I'm looking forward to seeing how this is manifestly the surface of a 3-sphere.
 
  • #10
garrett said:
The three Killing vector fields each have three components, so \xi_A{}^i is a square matrix that can be inverted and multiplied in to give
( \partial_i g ) g^- = \xi^-_i{}^A T_A
We'll next multiply both sides by T_B and use the orthogonality of our Pauli matrix generators under the matrix trace to get the inverse Killing vector matrix all by itself on one side:
\xi^-_i{}^B = - \xi^-_i{}^A \left< T_A T_B \right> = - \left< ( \partial_i g ) g^- T_B \right>

Hmm. I'm not sure about this step. Once you have multiplied to the right by T_B on each side, if you want to take the trace you have to trace the whole thing, i.e. the LHS is:

\begin{align*}
( \partial_i g ) g^- & = \xi^-_i{}^A T_A \\
( \partial_i g ) g^- T_B & = \xi^-_i{}^A T_A T_B \\
\left< ( \partial_i g ) g^- T_B \right> & = \left< \xi^-_i{}^A T_A T_B \right>
\end{align*}

What step do you use to remove the \xi^-_i{}^A matrix from the trace on the right hand side so that you can use the orthogonality condition?

Oh, actually I see it. \xi^-_i{}^A is not a matrix in this expression, it's just a scalar, and so it can be pulled out the front.
 
  • #11
Good questions.

A symmetry is a map from the manifold to itself. A continuous map, or flow, can be visualized as moving the manifold coordinates by a little bit:
x^i \rightarrow x'^i = x^i + \epsilon^A \xi_A^i
in which \vec{\xi} = \epsilon^A \vec{\xi_A} is a vector field on the manifold, with "small" parameters, \epsilon^A. Under a flow (also known as a diffeomorphism), a function of manifold points, such as the group element g(x), changes as:
g(x) \rightarrow g'(x) = g(x + \epsilon^A \xi_A) \simeq g(x) + \epsilon^A \xi_A^i \partial_i g(x)
to first order via Taylor expansion. Now, there is also a map on group elements induced by the Lie algebra generators:
g \rightarrow g' \simeq (1 + \epsilon^A T_A) g = g + \epsilon^A T_A g
(and another map for the group element acting from the other side)
The symmetry relation we want comes from equating the maps induced by the Lie algebra generators with the corresponding diffeomorphisms,
\xi_A^i \partial_i g = T_A g
Voilà.

Now, as for the Clifford algebra question: The group element is an exponential of a bivector, g = e^T, so it is a mixed, even graded multivector. Taking its derivative "brings down" a bivector, so there is no grade inconsistency. Grade consistency is a good thing to keep an eye on though, and we'll use it later.

Did the rest of the previous post make sense?
 
  • #12
Yes, exactly, \xi^-_i{}^A is a bunch of scalars labeled by indices, which each run from 1 to 3. You can then think of that as a set of three 1-forms, or as a 3x3 "matrix" -- but not a matrix in the same algebra as g.

By the way, the g^- = g^{-1} notation, indicating an inverse element, comes from Donald Knuth -- I also like it so I stole it.
 
  • #13
Answering your other questions:

For now, r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2} is best thought of as just a notational convenience.

We should see the relationship to spheres when we establish the geometry of the group manifold.

Yes, the group parameters are the group manifold coordinates. The A and i indices are, for now, in the same class and are interchangeable. This will be different when we investigate symmetric spaces.
 
  • #14
garrett said:
A symmetry is a map from the manifold to itself. A continuos map, or flow, can be visualized as moving the manifold coordinates by a little bit:
x^i \rightarrow x'^i = x^i + \epsilon^A \xi_A^i
in which \vec{\xi} = \epsilon^A \vec{\xi_A} is a vector field on the manifold, with "small" parameters, \epsilon^A.

Ok, I get this. Another way of getting at it is to study what happens to the components of the metric g_{ij}(x) = g'_{ij}(x') (which we've not come to yet, but I'll mention it anyway), as a function of x is transformed into a different set of coordinates (or basis), via x \rightarrow x' = x + \epsilon \xi. If we try to find the condition such that g_{ij}(x) = g_{ij}(x'), i.e. the components don't change, we end up with the condition on \xi that I mentioned in an earlier post, namely \xi_{a;b} + \xi_{b;a} = 0. These are also Killing vector fields, or isometries of the metric.

garrett said:
Now, as for the Clifford algebra question: The group element is an exponential of a bivector, g = e^T, so it is a mixed, even graded multivector. Taking its derivative "brings down" a bivector, so there is no grade inconsistency. Grade consistency is a good thing to keep an eye on though, and we'll use it later.

Hmm, there is an inconsistency. Acting on it with a scalar derivative \frac{\partial}{\partial x^i}, which we appear to be doing, doesn't change the grade at all. I would agree with you if we were contracting it with the vector derivative, \nabla = e^i \partial_i. That's not what's happening here though is it?

Did the rest of the previous post make sense?

2) What, explicitly, are the other three Killing vector fields, associated with the right action of the generators?
3) What would you like to see next: Clifford algebra and rotations, or the group manifold metric and geometry?

Yes, it's making sense. I've not expanded the trace out yet, or calculated the fields associated with the right action. I was hoping to do it tonight, but I'm not going to get the chance it seems.

Let's do the Clifford stuff as there is an open question about the grade lowering stuff. I'm going to be out of the country over the weekend, and already know most of the Clifford stuff - so it will give the others (are there any others? :) a chance to catch up.

p.s. using g^- to indicate the inverse; I like that. I like it more that it was Knuth's :). I don't use enough of his stuff (directly).
 
  • #15
Hi garrett

This PF is very interesting... just reading the answers lets you learn a lot...
 
  • #16
OK, we'll talk about Clifford algebra a bit.

First, to answer the grade consistency question: For our three dimensional Clifford algebra (it's actually 2^3 = 8 dimensional, but with three basis vectors) our su(2) group element, g, is a scalar plus a bivector. What grades do you get if you multiply this times an arbitrary bivector? You can't get a four-vector, since there is none in the algebra, so you get... a scalar plus a bivector. The grades match on both sides. Happy?
 
  • #17
Hi Mehdi, welcome to Physics Forums.

I'll try to be back later tonight to relate this group and Lie algebra stuff to Clifford algebra, which you should find interesting.
 
  • #18
All my following comments will be extracted or inspired by articles written on the internet by R.F.J. van Linden. However, I will not give the internet address of the articles, just to let you comment on the theory without being tempted to adopt too quickly the view of the author R.F.J. van Linden.

Van Linden :
” From various points of view a fractal-like universe is described. Unlike in usual fractals, recurring patterns correlate with the number of dimensions in the observation, i.e., zooming out and in occurs by adding or removing dimensions rather than changing scale.”

Van Linden :
… “Some point-shaped being lives on a circle. His limited 1D vision makes him observe the circle as a straight line. To actually see the extrinsic curvature he would need to have 2D vision.”

Van Linden:
“What behaves like a wave in n-dimensions behaves like a particle in
(n-1)-dimensions”
… “...being the basis for wave-particle duality and Heisenberg's uncertainty relations.
So photons behave like particles in 3D and waves in 4D. Mass behaves like particles in 4D and waves in 5D, and so on. The particle nature of a photon results from the way we observe its 4D wave pattern in 3D.”

Mehdi:
“Let’s try to put some equations to these comments above from the perspective of the Kaluza-Klein theory, for example!”
 
  • #19
From Wikipedia encyclopedia:
"Kaluza-Klein theory (or KK theory, for short) is a model which sought to unify classical gravity and electromagnetism, first published in 1921.
It was discovered by the mathematician Theodor Kaluza that if general relativity is extended to a five-dimensional spacetime, the equations can be separated out into ordinary three-dimensions, gravitation, plus an extra set, which is equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the dilaton (In theoretical physics, dilaton originally referred to a theoretical scalar field. In 1926, Oskar Klein Oskar klein proposed that the fourth spatial dimension is curled up in a circle of very small radius i.e. that a particle moving a short distance along that axis would return to where it began. The distance a particle can travel before reaching its initial position is said to be the size of the dimension. This, in fact, also gives rise to quantization of charge, as wave directed along a finite axis can only occupy discrete frequencies. (This occurs because electromagnetism is a U(1) symmetry theory and U(1) is simply the group of rotations around a circle."
 
  • #20
Hey Mehdi, thanks for the quotes, but... I'd like to use this forum for a mathematically oriented discussion. It's easy to string words together to make speculative descriptions of physics (and many do), but the real work and understanding comes from putting math together in the right way. Once that's done, you can talk about it a bit (in a way that's backed by the math).

We will get to some of these ideas, mathematically, but we're still several posts (and many calculations) away from Kaluza-Klein theory. But we will get there! I'd just like to build it up step by step.

If you want my opinion on the Van Linden quotes: I think they're mostly worthless. You have to do a heck of a lot of work before you can say anything potentially true and interesting about the universe -- and it's clear that hasn't been done by the author. Of course, that's just my opinion.

If you're eager to get to the real stuff, understanding (and being able to reproduce) the calculations in this thread should be a good start. Once again, just my opinion.
 
  • #21
A Lie algebra L is a linear space spanned by a basis X_K, and possessing an antisymmetric product [.,.] that obeys

[X_i,X_j]=c_{ij}^{k} X_k

over some field K, where [.,.] is the antisymmetric Lie product, and real c_{ij}^{k} are the structure constants of the algebra.

Lie algebras can be classified by the structure of their Cartan metric or Killing form.

The Cartan metric is defined by :
g_{ij} = c_{im}^{n} c_{jn}^{m}

The Killing form is defined in terms of adjoint representation of the algebra: Associate with any element A of L a linear transformation adj(A) defined by the left action of the algebra on itself.
For any Y in L, [A, Y] is also in L. We can define the adjoint representation adj(A) by adj(A)Y = [A, Y]

In particular, for fixed k, let A = X_k and represent Y on the algebra basis X_j, so that

Y = y^j X_j

then

adj(X_k) Y = [X_k, y^j X_j] = y^j [X_k, X_j] = y^j c_{kj}^i X_i

where the y^j and the X_i transform contragrediently to each other under the group of basis transformations in the algebra.
The adjoint representation of the group is irreducible for any simple Lie group.
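As a concrete illustration of these definitions (my addition, not Mehdi's; garrett notes below that this choice of metric won't be used in the thread), here is a hedged Python/numpy sketch that reads the structure constants off the full commutators of the T_A used earlier and forms the Cartan metric:

```python
# Structure constants from [T_i, T_j] = c_{ij}^k T_k, then g_ij = c_{im}^n c_{jn}^m
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]

# expand each commutator on the T_k basis using <T_k T_l> = -delta_kl
c = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        comm = T[i] @ T[j] - T[j] @ T[i]
        for k in range(3):
            c[i, j, k] = -0.5 * np.trace(comm @ T[k]).real

g_cartan = np.einsum('imn,jnm->ij', c, c)
print(c[0, 1, 2])      # -2.0: c_{12}^3 = -2 for these generators
print(g_cartan)        # -8 * identity: negative definite, as expected for compact su(2)
```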
 
  • #22
Hey Mehdi,
I don't wish to dissuade you, or discourage your interest, but try to limit your posts a bit, maybe? I'm trying to introduce things at an elementary level, with very explicit and illustrative examples. It doesn't help things to have you quoting random snippets from other sources. Keep in mind that I've digested ALL this stuff, and what I'm trying to do is present it in a way that's especially coherent -- with an aim for exactly where I want to go many posts in the future. Specifically, I won't be using that choice of metric you posted.

Maybe try to do the "homework" problems I wrote, instead? :) I presented them to be learned from.

But as long as you bring it up, I want to change something I said in the first post: Because of the unconventional \frac{1}{2} floating around in my anti-symmetric bracket, my statement of the structure coefficients should have been C_{AB}{}^C = -2 \epsilon_{ABC}. My mistake.

Anyway, I'm thinking up the best way to present Clifford algebra in the Lie algebra context -- will post that soon.

Also, soon enough, we'll get to an area where I won't have answers in mind, and will probably open things up to new directions from others.

Thanks,
Garrett
 
  • #23
garrett said:
Hey Mehdi,
I don't wish to dissuade you, or discourage your interest, but try to limit your posts a bit, maybe? I'm trying to introduce things at an elementary level, with very explicit and illustrative examples.
Garrett, I wanted to thank you for that. This thread will be highly beneficial to me. I want to learn and understand all that stuff, but I find it frustrating that explicit calculations are never shown (I have never seen any book that shows explicitly all the calculations worked out for a few different groups. There may be some, but I am not aware of them).

So your efforts in presenting *explicit* calculations and building slowly the material is highly appreciated! I had not noticed the thread before but I will start going over it this weekend. The bad news is that you will have tons of questions from me :redface:


Patrick
 
  • #24
Thanks Patrick -- this is exactly the issue I'm trying to remedy with this thread, and with the wiki. More than any other method, I learn best by studying the simplest non-trivial examples behind any concept in detail.

So now I'd like to take a tangent into Clifford algebra, which will immediately be related to the su(2) Lie algebra example we've started, and come in very handy later when we work out its geometry.

There are two really nice things about Clifford algebra which draw people to it. The first is that it's a "geometric" algebra: two vectors (grade 1) multiply to give a scalar (grade 0) plus a bivector (grade 2), a vector and a bivector multiply to give a vector plus a trivector (grade 3), etc. The second really nice thing is how rotations are calculated -- bivectors are crossed with any element to rotate it in the plane of that bivector -- which is much nicer than building rotation matrices, especially in higher dimensions. There's also a third, more obscure reason to like Clifford algebra -- it is needed to describe spinors, which are the fields needed to describe fermions. Anyway, on to the simplest nontrivial example...

The Clifford algebra of three dimensional space:

http://deferentialgeometry.org/#[[three dimensional Clifford algebra]]

This algebra, Cl_3, is generated by all possible multiplicative combinations of three basis vectors, \sigma_\iota. These basis vectors have a matrix representation as the three Pauli matrices, \sigma_\iota = \sigma_\iota^P, given earlier in this thread, with matrix multiplication equivalent to Clifford multiplication. The eight Clifford basis elements are formed by all possible products of these Clifford basis vectors. They are the scalar, 1 (equivalent to the 2x2 identity matrix), the three basis vectors, \sigma_1, \sigma_2, \sigma_3, the three bivectors, \sigma_{12}, \sigma_{13}, \sigma_{23}, and the pseudoscalar, \sigma = \sigma_1 \sigma_2 \sigma_3 (equivalent to the 2x2 identity matrix times the unit imaginary, i). The complete multiplication table for the algebra is (row header times column header equals entry):

\begin{array}{c|cccccccc}
 & 1 & \sigma_1 & \sigma_2 & \sigma_3 & \sigma_{12} & \sigma_{13} & \sigma_{23} & \sigma \\
\hline
1 & 1 & \sigma_1 & \sigma_2 & \sigma_3 & \sigma_{12} & \sigma_{13} & \sigma_{23} & \sigma \\
\sigma_1 & \sigma_1 & 1 & \sigma_{12} & \sigma_{13} & \sigma_2 & \sigma_3 & \sigma & \sigma_{23} \\
\sigma_2 & \sigma_2 & -\sigma_{12} & 1 & \sigma_{23} & -\sigma_1 & -\sigma & \sigma_3 & -\sigma_{13} \\
\sigma_3 & \sigma_3 & -\sigma_{13} & -\sigma_{23} & 1 & \sigma & -\sigma_1 & -\sigma_2 & \sigma_{12} \\
\sigma_{12} & \sigma_{12} & -\sigma_2 & \sigma_1 & \sigma & -1 & -\sigma_{23} & \sigma_{13} & -\sigma_3 \\
\sigma_{13} & \sigma_{13} & -\sigma_3 & -\sigma & \sigma_1 & \sigma_{23} & -1 & -\sigma_{12} & \sigma_2 \\
\sigma_{23} & \sigma_{23} & \sigma & -\sigma_3 & \sigma_2 & -\sigma_{13} & \sigma_{12} & -1 & -\sigma_1 \\
\sigma & \sigma & \sigma_{23} & -\sigma_{13} & \sigma_{12} & -\sigma_3 & \sigma_2 & -\sigma_1 & -1 \\
\end{array}

(Things don't get more explicit than that. ;)

The whole table may be reproduced from the fundamental rules of Clifford algebra: Start with a set of basis vectors, like \sigma_i, which may be visualized as an orthonormal set. Multiplying two identical vectors gives 1, like \sigma_1 \sigma_1 = 1 (or gives -1 for some Lorentz geometry vectors (to come later)). Otherwise, vectors anti-commute, like \sigma_1 \sigma_2 = - \sigma_2 \sigma_1 = \sigma_{12}. That's it! The other rules are the familiar associative and distributive rules for multiplication and addition.

It is also very useful to break this product into symmetric (dot) and antisymmetric (cross) products:
A \cdot B = \frac{1}{2} (A B + B A)
A \times B = \frac{1}{2} (A B - B A)

Now we find su(2) in here... the subalgebra formed by the three bivectors under the cross product is the su(2) Lie algebra. The identification of generators is
\begin{array}{ccc}
T_1 = i \sigma_{1}^{P} = \sigma_{23}, &
T_2 = i \sigma_{2}^{P} = -\sigma_{13}, &
T_3 = i \sigma_{3}^{P} = \sigma_{12}
\end{array}
and looking at the multiplication table shows this subalgebra has the same structure coefficients as su(2), and is therefore equivalent.
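A short hedged check of this identification (a Python/numpy sketch in the Pauli-matrix representation; not part of the original post), including a couple of spot checks against the multiplication table:

```python
# Verify sigma_23 = i sigma_1, -sigma_13 = i sigma_2, sigma_12 = i sigma_3, etc.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

s12, s13, s23 = s1 @ s2, s1 @ s3, s2 @ s3        # bivectors
pseudo = s1 @ s2 @ s3                            # pseudoscalar sigma

assert np.allclose(s23, 1j * s1)                 # T_1 = sigma_23
assert np.allclose(-s13, 1j * s2)                # T_2 = -sigma_13
assert np.allclose(s12, 1j * s3)                 # T_3 = sigma_12
assert np.allclose(pseudo, 1j * np.eye(2))       # sigma = i * identity

# spot checks against the multiplication table
assert np.allclose(s1 @ s1, np.eye(2))           # sigma_1 sigma_1 = 1
assert np.allclose(s1 @ s2, -s2 @ s1)            # distinct vectors anticommute
assert np.allclose(s12 @ s12, -np.eye(2))        # sigma_12 sigma_12 = -1
assert np.allclose(s12 @ s3, pseudo)             # sigma_12 sigma_3 = sigma
print("Cl_3 bivector / su(2) identification verified")
```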

Now we look at the SU(2) element Joe calculated earlier:
<br /> g = e^{x^A T_A} = \cos(r) + x^A T_A \frac{\sin(r)}{r} <br />
and see that this, which we interpreted before as a 2x2 matrix, is a mixed grade Clifford element.

Next I want to use this to do 3D rotations. But first, a real quick question to make sure you're awake:

I said the g above are of "mixed grade" -- what exactly are the grades in g? (Choose from {0,1,2,3})

Someone answer this and I'll go on to rotations. :)

(And I'm still hoping someone will calculate the Killing vectors corresponding to right acting generators -- it will be important for the SU(2) geometry)
 
  • #25
garrett said:
So now we can just calculate that out explicitly, which is made easy by the nice form of g you found:
\xi^-_i{}^B = - \left< \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j \left( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3} \right) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B \right>
The Pauli matrices are traceless, so only a few terms will survive the trace, with the generator orthogonality under the trace used again to give
\xi^-_i{}^B = \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B \left( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} \right)

What about the higher order terms? There also appear to be terms proportional to \left< T_i T_j T_k \right> = 2 \epsilon_{ijk}. Why are we neglecting these?
 
  • #26
garrett said:
Answering your other questions:

For now, r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2} is best thought of as just a notational convenience.

Ok, but then there are surely some bounds on the validity of the group element; since we expanded in a power series, the answer is only going to be valid for small x and small r, and the series will break down for large coordinates.
 
  • #27
Taoy said:
What about the higher order terms? There also appear to be terms proportional to \left< T_i T_j T_k \right> = 2 \epsilon_{ijk}. Why are we neglecting these?

Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. ( By the way, because of the way I defined <> with a half in it, it's \left< T_i T_j T_k \right> = \epsilon_{ijk} ) So, the correct expression for the inverse Killing vector field should be
\xi^-_i{}^B = - \left< \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j \left( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3} \right) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B \right>
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B \left( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} \right) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}

Thanks for catching that! ( It's why I asked question (1) )

And now I have to go figure out what the inverse of that is...
 
  • #28
Taoy said:
Ok, but then there are surely some bounds on the validity of the group element; since we expanded in a power series, the answer is only going to be valid for small x and small r, and the series will break down for large coordinates.

The expression you calculated,
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it times its Hermitian conjugate and you'll get precisely 1.

There is something interesting going on with the domain of the x though, so I'm glad you brought it up. The expression for g is periodic in the x. This is best seen by setting two x's to 0 while letting the other range from 0 to 2 \pi, at which point g is the identity again. Now, to cover all points of SU(2) exactly once, it may be the case that all three x's range from 0 to 2 \pi, and that does it -- but I kind of doubt that's true. What I've done in the past is convert the x's to angular coordinates,
x^{1} = r\sin(\theta)\cos(\phi)
x^{2} = r\sin(\theta)\sin(\phi)
x^{3} = r\cos(\theta)
which simplifies things a little. But I wanted to try staying in x coordinates for now.
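A tiny numerical illustration of the periodicity just mentioned (a hedged Python/numpy sketch, not part of the original post):

```python
# With x^2 = x^3 = 0 and x^1 = 2*pi, the closed form for g returns to the identity
import numpy as np

T1 = 1j * np.array([[0, 1], [1, 0]], dtype=complex)
x1 = 2 * np.pi
r = abs(x1)
g = np.cos(r) * np.eye(2) + x1 * T1 * np.sin(r) / r
assert np.allclose(g, np.eye(2))
print("g(2*pi, 0, 0) = identity")
```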
 
  • #29
garrett said:
\xi^-_i{}^B = \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B \left( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} \right) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}
And now I have to go figure out what the inverse of that is...

\xi_B{}^i = \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i \left( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} \right) + \epsilon_{Bik} x^k

:)

By the way, if you're trying to do this yourself by hand, I calculated the inverse by making the ansatz:
\xi_B{}^i = \delta_{Bi} A + x^B x^i B + \epsilon_{Bik} x^k C
and solving for the three coefficients.
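For anyone who wants to verify both closed forms without grinding through the algebra, here is a hedged Python/numpy sketch (my addition, not part of the original posts): it builds \xi^-_i{}^B by finite differences from the definition, compares it to the corrected expression above, and checks that the stated inverse really is the inverse.

```python
# Numerically check xi^-_i^B = -<(d_i g) g^- T_B> and its inverse at a random point
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

def g(x):
    r = np.linalg.norm(x)
    Tx = sum(x[i] * T[i] for i in range(3))
    return np.cos(r) * np.eye(2) + Tx * np.sin(r) / r

x = np.random.uniform(0.1, 1.0, 3)
r = np.linalg.norm(x)
h = 1e-6

xi_inv_numeric = np.zeros((3, 3))
for i in range(3):
    dx = np.zeros(3); dx[i] = h
    dgi = (g(x + dx) - g(x - dx)) / (2 * h)            # partial_i g
    for B in range(3):
        xi_inv_numeric[i, B] = -0.5 * np.trace(
            dgi @ np.linalg.inv(g(x)) @ T[B]).real     # -<(d_i g) g^- T_B>

sc = np.sin(r) * np.cos(r)
xi_inv_closed = (np.eye(3) * sc / r
                 + np.outer(x, x) * (1 / r**2 - sc / r**3)
                 + np.einsum('ikB,k->iB', eps, x) * np.sin(r)**2 / r**2)
assert np.allclose(xi_inv_numeric, xi_inv_closed, atol=1e-5)

xi_closed = (np.eye(3) * r * np.cos(r) / np.sin(r)
             + np.outer(x, x) * (1 / r**2 - np.cos(r) / (r * np.sin(r)))
             + np.einsum('Bik,k->Bi', eps, x))
assert np.allclose(xi_closed @ xi_inv_closed, np.eye(3), atol=1e-8)
print("Killing vector field expressions verified at x =", x)
```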

Now I'm going for a bike ride, then coming back to do rotations.
 
  • #30
Originally Posted by Garrett :
What I've done in the past is convert the x's to angular coordinates,
x^{1} = r\sin(\theta)\cos(\phi)
x^{2} = r\sin(\theta)\sin(\phi)
x^{3} = r\cos(\theta)
which simplifies things a little. But I wanted to try staying in x coordinates for now.

It looks like we have parametrized the coordinates by means of the angles \theta and \phi.
It is related to the condition ({x^1})^2 + ({x^2})^2 + ({x^3})^2 = 1.
If the x^i are interpreted as coordinates in a space R^3, this condition describes the unit sphere S^2 embedded in that space.
The sphere S^2 is a smooth manifold; every closed curve on it can be contracted to a point, so it is simply connected.

But when we use the above parametrization, which map are we defining:
S^2 into SO(3) or S^2 into SU(2)? Or maybe we have to use one more parametrization, which parametrizes the angles \theta and \phi by means of angles \alpha and \beta for example, to identify antipodal points on the sphere?
 
  • #31
Hey Mehdi, nice question. Using this angular parameterization, with a constant r, we have a map from S^2 into SU(2). When I show the map from SU(2) to SO(3) (rotations), we'll see that this S^2 corresponds to the orientation of the plane of the rotation, and the r value corresponds to the rotation amplitude, or angle.
 
  • #32
rotations

Alright, we have finally come around to rotations. Let's make a rotation using Clifford algebra. First, what do you get when you cross a vector with a bivector? Starting with an arbitrary vector,
v = v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3
and, for example, a "small" bivector in the xy plane,
B = \epsilon \sigma_{12}
their cross product gives
v \times B = \epsilon ( v^1 \sigma_1 \times \sigma_{12} + v^2 \sigma_2 \times \sigma_{12} + v^3 \sigma_3 \times \sigma_{12}) = \epsilon ( v^1 \sigma_2 - v^2 \sigma_1)
This new vector, v \times B, is perpendicular to v, and in the plane of B. This "small" vector is the one that needs to be added to v in order to rotate it a small amount counter-clockwise in the plane of B:
v' \simeq v + v \times B \simeq (1 + \frac{1}{2} \epsilon \sigma_{12}) v (1 - \frac{1}{2} \epsilon \sigma_{12})
where the "\simeq" holds to first order in \epsilon. Infinitesimal rotations like these can be combined to give a finite rotation,
v' = \lim_{N \rightarrow \infty} (1 + \frac{1}{N} \frac{1}{2} \theta \sigma_{12}) v (1 - \frac{1}{N} \frac{1}{2} \theta \sigma_{12}) = e^{\frac{1}{2} \theta \sigma_{12}} v e^{-\frac{1}{2} \theta \sigma_{12}} = U v U^-
using the "limit" definition for the exponential. This is an exact expression for the rotation of a vector by a bivector. In three dimensions an arbitrary bivector, B, can be written as
B = \theta b
an amplitude, \theta, multiplying a unit bivector encoding the orientation, b b = -1. The exponential can then be written using Joe's expression for exponentiating a bivector:
U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)
And an arbitrary rotation in any plane can be expressed efficiently as v' = U v U^-. For example, for a rotation of an arbitrary vector by B = \theta \sigma_{12}, the result (using some trig identities) is:
v' = e^{\frac{1}{2} B} v e^{-\frac{1}{2} B} = \left( \cos(\tfrac{1}{2} \theta) + \sigma_{12} \sin(\tfrac{1}{2} \theta) \right) (v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3) \left( \cos(\tfrac{1}{2} \theta) - \sigma_{12} \sin(\tfrac{1}{2} \theta) \right)
= (v^1 \cos(\theta) + v^2 \sin(\theta) ) \sigma_1 + (v^2 \cos(\theta) - v^1 \sin(\theta)) \sigma_2 + v^3 \sigma_3
This is widely considered to be pretty neat, and useful as a general method of expressing and calculating rotations.

Now, we already established that elements of the group SU(2) may be represented as exponentials of bivectors, so these U are SU(2) elements! The "double cover" relationship between SU(2) and rotations (the group SO(3)) is in the expression
v' = U v U^-
It is the fact that two different SU(2) elements, U and -U, give the same rotation. That's all there is to it.

To be painfully explicit, it is possible to relate all this to rotation matrices. A rotation matrix is a 3x3 special orthogonal matrix that transforms one set of basis vectors into another. This equates to the Clifford way of doing a rotation as:
\sigma'_i = L_i{}^j \sigma_j = U \sigma_i U^-
For any rotation encoded by U (which, as the exponential of a bivector, also represents an arbitrary SU(2) element), the corresponding rotation matrix elements may be explicitly calculated using the trace as
L_i{}^j = \left< U \sigma_i U^- \sigma_j \right>
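A hedged numerical check of these rotation formulas (Python/numpy/scipy sketch, not part of the original post): rotate a vector with U = exp(theta sigma_12 / 2), compare against the explicit component formula above, and build the SO(3) matrix from the trace.

```python
# Check v' = U v U^- and L_i^j = <U sigma_i U^- sigma_j>
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
s12 = s[0] @ s[1]

theta = 0.7
v = np.array([0.3, -1.2, 0.5])                        # components v^i
U = expm(0.5 * theta * s12)
Uinv = U.conj().T

v_clifford = sum(v[i] * s[i] for i in range(3))
v_rot = U @ v_clifford @ Uinv                         # v' = U v U^-

v_expected = (v[0] * np.cos(theta) + v[1] * np.sin(theta)) * s[0] \
           + (v[1] * np.cos(theta) - v[0] * np.sin(theta)) * s[1] \
           + v[2] * s[2]
assert np.allclose(v_rot, v_expected)

L = np.array([[0.5 * np.trace(U @ s[i] @ Uinv @ s[j]).real
               for j in range(3)] for i in range(3)])
assert np.allclose(L @ L.T, np.eye(3))                # L is orthogonal
assert np.isclose(np.linalg.det(L), 1.0)              # and special
print("rotation by U and rotation matrix L verified")
```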

Using Clifford algebra, you think of a rotation as being in a plane (or planes), described by a bivector. This generalizes very nicely to dimensions higher than three, such as for Lorentz transformations and for rotations in Kaluza-Klein theory.

It's a little odd if you haven't seen it before -- any questions?
 
  • #33
Garrett it is beautiful... I have no question... it is well explained and therefore easy to understand.

You have successfully established a relation between SU(2), SO(3), rotation matrices and the Clifford algebra Cl_{0,2}(R)... (The Spin(3) group is the universal covering group of SO(3)?!? and there are accidental isomorphisms with SU(2) and Sp(1)?!?).

Maybe one day you could do the same with another group, let's say the symplectic group and its relation to Clifford algebras (using Lagrangians or Hamiltonians to make the examples more explicit)... Garrett, it's only a wish... ;)
 
  • #34
Originally Posted by garrett:
The exponential can then be written using Joe's expression for exponentiating a bivector:
U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)
And an arbitrary rotation in any plane can be expressed efficiently as v' = U v U^-.
Can we then say that U is a rotor ?
If U is a rotor, we can then say that this rotor is an element of SU(2) group.
 
  • #35
Yes, you can call it a rotor, but that's kind of an old term. The more modern description is that it's an element of the Spin group, and in this 3D case, Spin(3)=SU(2).

Here's a wikipedia reference (good reading!):
http://en.wikipedia.org/wiki/Spin_group
 
  • #36
Hi Garrett

Can we then say that a quaternion of norm 1 belongs to the SU(2) group?
 
  • #37
I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
 
  • #38
I messed up a couple of expressions in the last math post.

First, all incidences of "v \times B" should be "B \times v" with a corresponding change of sign where relevant.

Second, the expression for the limit of many infinitesimal rotations should be
\lim_{N \rightarrow \infty} \left( 1 + \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N v \left( 1 - \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N

Apologies.
 
  • #39
Mehdi_ said:
Can we then say that the quaternion of norm 1 belong to SU(2) group?
Yes.

The three basis quaternions are the same as the SU(2) generators, which are the same as the Cl_3 bivectors. The quaternion and/or SU(2) group element, U, is represented by coefficients multiplying these, plus a scalar. And U satisfies UU^\dagger = 1.
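A minimal check of that last statement (a hedged Python/numpy sketch, my addition): build U = a + b^A T_A with a^2 + b·b = 1 and confirm it is an SU(2) element.

```python
# A unit-norm scalar-plus-bivector (unit quaternion) satisfies U U^dagger = 1, det U = 1
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * si for si in s]

q = np.random.randn(4)
q /= np.linalg.norm(q)                    # unit norm: a^2 + b.b = 1
U = q[0] * np.eye(2) + sum(q[1 + A] * T[A] for A in range(3))

assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)
print("unit quaternion", q, "gives an SU(2) element")
```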

I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
Heh. Read my last paper. :)
But a discussion of spinors shouldn't go in this thread (yet). Maybe start another one?
 
  • #40
garrett said:
Great!

The related wiki page is here:

http://deferentialgeometry.org/#[[vector-form algebra]]

Explicitly, every tangent vector gets an arrow over it,
\vec{v} = v^i \vec{\partial_i}
and every 1-form gets an arrow under it,
\underrightarrow{f} = f_i \underrightarrow{dx^i}
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
so
\vec{v} \underrightarrow{f} = v^i f_i

Sorry for taking so much time to absorb all of this but although I have heard all the terms mentioned in this thread, I am still learning all that stuff.

A quick question: what do you mean by "the vectors and forms all anticommute with one another"?
I thought that one could think of "feeding" a vector to a one-form or vice-versa, and that the result was the same in both cases. I guess I don't see where anticommutation might arise in that situation. Could you explain this to me?

Thanks again for a great thread!

Patrick
 
  • #41
"These vectors and forms all anti-commute with one another" should mean:
\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i
\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i

That means that order is important... it is a non-commutative algebra
 
  • #42
\gamma_1 and \gamma_2 are perpendicular vectors

We start with a vector v equal to \gamma_1 and form another v' by adding a tiny displacement vector in a perpendicular direction :

v = \gamma_1 and v' = \gamma_1 + \epsilon\gamma_2

and similarly, we start now with a vector v equal to \gamma_2 and form another v' by adding a tiny displacement vector in a perpendicular direction:

v = \gamma_2 and v' = \gamma_2 - \epsilon\gamma_1

The minus sign occurs because the bivectors \gamma_1\gamma_2 and \gamma_2\gamma_1 induce rotations in opposite directions

Let's construct a rotor r as follows:
r = v v' = \gamma_1(\gamma_1 + \epsilon\gamma_2) = (1 + \epsilon\gamma_1\gamma_2)

Let’s see what happens when we use this rotor to rotate something with N copies of an infinitesimal rotation:

v' = {(1+\epsilon\gamma_2\gamma_1)}^N v {(1+\epsilon\gamma_1\gamma_2)}^N

But in the limit:

{(1+\epsilon\gamma_1\gamma_2)}^N = \exp(N\epsilon\gamma_1\gamma_2) = \exp(\theta\gamma_1\gamma_2) = 1 + \theta\gamma_1\gamma_2 - \frac{1}{2}{\theta}^2 - \dots

and we find that:

r(\theta)=\cos(\theta)+\gamma_1\gamma_2\sin(\theta)

which is similar to Joe's expression for exponentiating a bivector:

U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)

Even if in Joe's expression we have (\frac{1}{2}\theta), the two equations are similar because the rotor angle is always half the rotation angle...
 
  • #43
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that. And the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
 
  • #44
Mehdi_ said:
These vectors and forms all anti-commute with one another should means:
\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i
\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i

That means that order is important... it is a non-commutative algebra

Nope, the v^i and f_i are scalar coefficients -- they always commute with everything. (Err, unless they're Grassmann numbers, but we won't talk about that...)

Mehdi's other post was fine.
 
  • #45
Garrett...oups... that's true...
 
  • #46
garrett said:
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that. And the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
:eek: I had never realized that!

Thank you for explaining this!

For the product of 1-form that's not surprising to me since I would assume a wedge product there.

But is a product of vector fields always understood in differential geometry, or is it an added structure? It seems to me that one could also introduce a symmetric product. What is the consistency condition that leads to this?

Also, I really did not know that "contracting" a one-form and a vector field depended on the order! I have always seen talk about "feeding a vector to a one-form" and getting a Kronecker delta, but I always assumed that one could equally well "feed" the one-form to the vector and get the *same* result. I had not realized that there is an extra sign. What is the consistency condition that leads to this?

Sorry for all the questions, but one thing that confuses me when learning stuff like this is to differentiate what is imposed as a definition and what follows from consistency. I always wonder if a result follows from the need for consistency between precedent results or if it's a new definition imposed by hand. But I don't necessarily need to see the complete derivation; if I can only be told "this follows from this and that previous results", then I can work it out myself.

Thank you!
 
  • #47
Certainly. I need to stress this is my own notation, so it is perfectly reasonable to ask me to justify it. Also, it's entirely up to you whether you want to use it -- everything can be done equally well in conventional notation, after translation. ( I just have come to prefer mine. )

The conventional notation for the inner product ( a vector, \vec{v}, and form, f, contracted to give a scalar ) in Frankel and Nakahara etc. is
i_{\vec{v}} f = f(\vec{v})
which I would write as
\vec{v} \underrightarrow{f}
I will write the rest of this post using my notation, but you can always write the same thing with "i"'s all over the place and no arrows under forms.

Now, conventionally, there is a rule for the inner product of a vector with a 2-form. For two 1-forms, the distributive rule is
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = \left( \vec{a} \underrightarrow{b} \right) \underrightarrow{c} - \underrightarrow{b} \left( \vec{a} \underrightarrow{c} \right)
Using this rule, one gets, after multiplying it out:
\vec{e} \vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = - \vec{a} \vec{e} \left( \underrightarrow{b} \underrightarrow{c} \right)
which is the basis for my assertion that
\vec{e} \vec{a} = - \vec{a} \vec{e}
This sort of "tangent two vector" I like to think of as a loop, but that's just me being a physicist. ;)

So, now for the vector-form anti-commutation. Once again, keep in mind that you can do everything without ever contracting a vector from the right to a form -- this is just something I can do for fun. But, if you're going to do it, this expression should hold regardless of commutation or anti-commutation:
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right) = \left( \underrightarrow{b} \underrightarrow{c} \right) \vec{a}
and, analogously with the original distribution rule, that should equal:
= \underrightarrow{b} \left( \underrightarrow{c} \vec{a} \right) - \left( \underrightarrow{b} \vec{a} \right) \underrightarrow{c}
Comparing that with the result of the original distribution rule shows that we must have
\underrightarrow{b} \vec{a} = - \vec{a} \underrightarrow{b}
for all the equalities to hold true, since a vector contracted with a 1-form is a scalar and commutes with the remaining 1-form.

It won't hurt me if you don't like this notation. But do tell me if you actually see something wrong with it!
 
  • #48
garrett said:
By "the vectors and forms all anticommute with one another" I mean
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
which is the wedge product of two forms, without the wedge written. And
\vec{\partial_i} \vec{\partial_j} = - \vec{\partial_j} \vec{\partial_i}
which tangent vectors have to do for contraction with 2-forms to be consistent. And
\vec{\partial_i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

Hi Garrett, I'm a bit confused about this notation. What kind of product are you using here, and are these really vectors? How can we make this notation compatible with the geometric product between vectors?

Oh, wait, I guess that you're just making the assumption that both the vector and the co-vector basis are orthogonal.

I'm reading that your \vec{\partial_i} is a vector such that \vec{\partial_i} \cdot \vec{\partial_j} = \delta_{ij} |\vec{\partial_i}|^2. Is that right?
 
  • #49
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements \vec{\partial_i} and \underrightarrow{dx^i}, is completely independent from the algebra of Clifford elements, spanned by \gamma_\alpha, or, if you like, they're independent of all Lie algebra elements. By the algebras being independent, I mean that all elements commute.

For example, when we calculated the derivative of a group element (to get the Killing fields), we were calculating the coefficients of a Lie algebra valued 1-form:
\underrightarrow{d} g = \underrightarrow{dx^i} G_i{}^A T_A
The two sets of basis elements, \underrightarrow{dx^i} and T_A, live in two separate algebras.

The vector and form elements don't have a dot product, and I will never associate one with them. Some do, and call this a metric, but things work much better if you work with Clifford algebra valued forms, and use a Clifford dot product.

I might as well describe how this works...
 
  • #50
The link to the wiki notes describing the frame and metric is:

http://deferentialgeometry.org/#frame metric

but I'll cut and paste the main bits here.

Physically, at every manifold point a frame encodes a map from tangent vectors to vectors in a rest frame. It is very useful to employ the Clifford basis vectors as the fundamental geometric basis vector elements of this rest frame. The "frame", then, is a map from the tangent bundle to the Clifford bundle -- a map from tangent vectors to Clifford vectors -- and written as
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
It is a Clifford vector valued 1-form. Using the frame, any tangent vector, \vec{v}, on the manifold may be mapped to its corresponding Clifford vector,
\vec{v} \underrightarrow{e} = v^i \vec{\partial_i} \underrightarrow{dx^j} \left( e_j \right)^\alpha \gamma_\alpha = v^i \left( e_i \right)^\alpha \gamma_\alpha = v^\alpha \gamma_\alpha = v
This frame includes the geometric information usually attributed to a metric. Here, we can compute the scalar product of two tangent vectors at a manifold point using the frame and the Clifford dot product:
\left( \vec{u} \underrightarrow{e} \right) \cdot \left( \vec{v} \underrightarrow{e} \right) = u^\alpha \gamma_\alpha \cdot v^\beta \gamma_\beta = u^\alpha v^\beta \eta_{\alpha \beta} = u^i \left( e_i \right)^\alpha v^j \left( e_j \right)^\beta \eta_{\alpha \beta} = u^i v^j g_{ij}
with the use of frame coefficients and the Minkowski metric replacing the use of a metric if desired. Using component indices, the "metric matrix" is
g_{ij} = \left( e_i \right)^\alpha \left( e_j \right)^\beta \eta_{\alpha \beta}
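To make this concrete, here is a hedged Python/numpy sketch (the frame coefficients below are made up for illustration, not taken from the thread) that builds the metric matrix from frame coefficients and uses it for a scalar product of tangent vectors:

```python
# g_ij = (e_i)^alpha (e_j)^beta eta_alpha_beta, then u^i v^j g_ij
import numpy as np

eta = np.diag([1.0, 1.0, 1.0])            # Euclidean 3D "rest frame" metric

# hypothetical frame coefficients (e_i)^alpha on a 3-dimensional manifold
e = np.array([[1.0, 0.0, 0.0],
              [0.2, 1.5, 0.0],
              [0.0, 0.3, 2.0]])

g = np.einsum('ia,jb,ab->ij', e, e, eta)  # metric matrix g_ij
print(g)

# scalar product of two tangent vectors via the frame
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.5, -1.0, 1.0])
print(u @ g @ v)
```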

Using Clifford valued forms is VERY powerful -- we can use them to describe every field and geometry in physics.
 