# Differential Forms vs. Directed Measures

1. Mar 22, 2006

### kryptyk

Recently I've begun to study the Geometric Algebra approach to differential geometry (Hestenes [84]), and although I do not claim to be an expert in this area (not at all!), I'm really starting to like what I see.

It seems a major problem with the differential forms approach is that it conflates directed measures with scalar measures - differential forms behave a bit like directed quantities, but they also behave a bit like scalars.

For instance, take $$d x$$ and $$d y$$. We could construct 1-forms from linear combinations:

$$A\, d x + B\, d y$$

So in a sense, these 1-forms could be thought of as maps from vectors to scalars. On the other hand, when we use these 1-forms in integrals,

$$\int_S (A\, d x + B\, d y)$$

they are better thought of as scalar infinitesimals. This latter view is the way differentials are usually taught to all of us at first whereas the former seems totally foreign and weird, even for many advanced students of mathematics.

GA seems to provide a much more elegant way to deal with these ideas through the use of directed measures:

$$d\mathbf{x} = d x\, \mathbf{e}_x$$

$$d\mathbf{y} = d y\, \mathbf{e}_y$$

where $$\{\mathbf{e}_x,\mathbf{e}_y\}$$ is a frame.

This allows us to separate the scalar infinitesimals from the frame vectors. I think (I am still sketchy on the details) we can now explicitly write the 1-form as:

$$\alpha (d\mathbf{u})= (A\, d\mathbf{x} + B\, d\mathbf{y})^{\dagger}\cdot d\mathbf{u}$$

The integral now becomes:

$$\int_S (A\, d x\, \mathbf{e}_x + B\, d y\, \mathbf{e}_y)$$

and indeed, $$d x$$ and $$d y$$ can be treated just the way we always have - the way we grew to know and love.

Also, we can construct a 2-dimensional directed measure by:

$$d\mathbf{x}\wedge d\mathbf{y}=(d x\, \mathbf{e}_x)\wedge(d y\, \mathbf{e}_y)$$

And since here $$d x$$ and $$d y$$ are true scalar infinitesimals, they commute with all elements of our algebra, while the vectors $$\mathbf{e}_x$$ and $$\mathbf{e}_y$$ anticommute. So,

$$(d x\, \mathbf{e}_x)\wedge(d y\, \mathbf{e}_y)=\mathbf{e}_x\wedge\mathbf{e}_y\, d x\, d y$$

and

$$d x\, d y = d y\, d x$$

$$\mathbf{e}_x\wedge\mathbf{e}_y= - \mathbf{e}_y\wedge\mathbf{e}_x$$

The wedge product of two vectors produces a bivector - this bivector uniquely identifies the tangent plane at the point $$(x,y)$$ and expresses an orientation for the plane. The scalar differentials just multiply directly. This extends naturally to higher-dimensional spaces, but to complete this 2-dimensional example, let's define the bivector field $$I$$:

$$I=\mathbf{e}_x\wedge\mathbf{e}_y$$

$$d\mathbf{x}\wedge d\mathbf{y}=I\, d x\, d y$$
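As a quick numerical sanity check of that identity, here is a throwaway 2-dimensional geometric algebra sketch (the names `gp` and `wedge` are mine, not from any GA library), storing a multivector by its components on $$\{1, \mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_x\wedge\mathbf{e}_y\}$$:

```python
# Minimal 2D Euclidean geometric algebra, just to check
# d\mathbf{x} ^ d\mathbf{y} = I dx dy numerically.  The product table
# encodes e_x^2 = e_y^2 = 1 and e_x e_y = -e_y e_x = e_xy.

def gp(a, b):
    """Geometric product of two 2D multivectors (s, e_x, e_y, e_xy)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
            a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e_x part
            a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e_y part
            a0*b3 + a3*b0 + a1*b2 - a2*b1)   # e_xy (bivector) part

def wedge(u, v):
    """Outer product of two vectors: the antisymmetric part of gp."""
    uv, vu = gp(u, v), gp(v, u)
    return tuple((p - q) / 2 for p, q in zip(uv, vu))

dx, dy = 1.0, 2.0          # stand-ins for the scalar infinitesimals
dX = (0, dx, 0, 0)         # dx e_x, a directed measure
dY = (0, 0, dy, 0)         # dy e_y

print(wedge(dX, dY))       # pure bivector: I dx dy
print(wedge(dY, dX))       # opposite orientation
```

Swapping the arguments flips only the sign of the bivector part, which is exactly the orientation reversal $$\mathbf{e}_x\wedge\mathbf{e}_y = -\mathbf{e}_y\wedge\mathbf{e}_x$$.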

-------------

To my novice mind, this already begins to look much nicer than the typical differential forms methods - any particular reason it is not more often used?

Last edited: Mar 22, 2006
2. Mar 23, 2006

### Hurkyl

Staff Emeritus
I'm somewhat confused by your presentation, but I can point out some problems.

Your equation for $\alpha(d\mathbf{u})$ has a type mismatch. The left-hand side is a covector, while the right-hand side is a vector! In coordinates, the L.H.S. would be a row vector, while the R.H.S. would be a column vector.

In particular, you cannot transpose a covector to get a vector, or take dot products, or anything like that.

Transposition and dot products require some choice of a bilinear form (e.g. a metric). In Euclidean space the metric is ridiculously simple: in an orthonormal basis, the components of a vector are the same as the components of its transpose! We grew up without being taught a distinction between the two ideas. (And, in fact, were somewhat encouraged to confuse them!)

And this isn't just students -- it applies to very smart people too. I once had a fellow mathematician (a darned good one) exclaim "a whole new world has opened up to me!" when I worked out an (elementary but a little messy) problem that had been giving us some difficulties by representing our vectors as columns and our covectors as rows (and making a point to distinguish the two).

Now, you could talk about the "dual basis". That is, you have the basis $\hat{\mathbf{e}}^{x}, \hat{\mathbf{e}}^{y}$ which is defined by the relations:

$$\hat{\mathbf{e}}^{x}(\mathbf{e}_x) = 1$$
$$\hat{\mathbf{e}}^{x}(\mathbf{e}_y) = 0$$
$$\hat{\mathbf{e}}^{y}(\mathbf{e}_x) = 0$$
$$\hat{\mathbf{e}}^{y}(\mathbf{e}_y) = 1$$

and you can write any one-form as a linear combination of these dual basis covectors. I'm not entirely sure whether that's a good idea, since part of the whole point of doing things geometrically is that you aren't tied down to choosing bases and using coordinates!
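For what it's worth, here is how that dual basis looks in coordinates (a sketch only, with an arbitrary made-up frame): stack the frame vectors as the columns of a matrix $E$; the rows of $E^{-1}$ are then the dual covectors, since $E^{-1}E = I$ reproduces the four defining relations.

```python
import numpy as np

# Frame vectors as the columns of E (deliberately non-orthonormal,
# so the dual basis is visibly different from the transpose).
e_x = np.array([2.0, 0.0])
e_y = np.array([1.0, 1.0])
E = np.column_stack([e_x, e_y])

# Rows of E^{-1} are the dual covectors: row i applied to column j
# gives delta_ij, which is exactly the four defining relations above.
dual = np.linalg.inv(E)
covector_x, covector_y = dual

print(covector_x @ e_x, covector_x @ e_y)   # 1.0 0.0
print(covector_y @ e_x, covector_y @ e_y)   # 0.0 1.0
```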

Last edited: Mar 23, 2006
3. Mar 24, 2006

### kryptyk

Metrics and Reciprocal Frames

Of course, Hurkyl!

But to be even able to evaluate an integral it is necessary to define some metric.

RE: the type mismatch, I think it's more a matter of notational confusion here. By $$d \mathbf{x}$$ I'm defining a directed measure in the x direction as y is kept fixed, not a covector. But you're right, I suppose I was far from clear in this.

RE: the transpose of a vector. Again, perhaps another notational confusion here. I guess what I meant was a map from the vector space to the covector space that takes a vector to its reciprocal covector.

RE: the use of some particular frame. It was just to illustrate a simple example. But you're right - generally speaking it is unnecessary and undesirable to use a specific arbitrary frame when it is possible to avoid this.

I guess my basic point is that since differential forms express some kind of scalar measure to be used in an integral, it might make more sense to extend this idea to general multivector measures and separate the blades from the scalars.

Moreover, as suggested by Hestenes, I suspect that directed integration is the fundamental concept here and that differential forms are auxiliary to it. But as I said, I'm no expert in this stuff and would like to hear others' opinions on these matters. Here's a brief presentation of Hestenes' treatment:

http://modelingnts.la.asu.edu/html/NFMP.html

4. Mar 24, 2006

### Hurkyl

Staff Emeritus
That's not so! The metric plays no part in the definition of an integral.

For effect, I checked the table of contents in Spivak's Differential Geometry: differential forms is chapter 7, integration is chapter 8, and Riemannian metrics is chapter 9!

You still are.

I'm not sure what this means.

The (usual) 2-form $dx \wedge dy$ is perfectly capable of integrating something like $\vec{v} \otimes \vec{w}$: the tensor product of two vectors, and thus also their outer product $\vec{v} \wedge \vec{w} = \vec{v} \otimes \vec{w} - \vec{w} \otimes \vec{v}$.

But that's not the only problem: I don't think it makes any sense at all to talk about integrating a vector-valued function along a curve... maybe you can manage something by introducing a connection so you can parallel transport along your curve? I don't know, and of course that will be dependent on your choice of connection, unlike integrals that are scalar-valued combinations of vector fields and differential forms.

P.S. I'm not trying to say geometric algebra is a bad thing!

Last edited: Mar 24, 2006
5. Mar 24, 2006

### mathwonk

gee whiz. why not just learn something about integration before making assertions about it.

6. Mar 24, 2006

### garrett

Hurkyl is right, but there's still plenty of fun to be had with Geometric Algebra, kryptyk!

Differential forms are needed to define things like differential areas,
$$a = \frac{1}{2} dx^i dx^j a_{ij}$$
which can be integrated over a 2D surface to get an area, which is a real number with units like square meters. If you make your 1-forms commute, $$dx^i dx^j = dx^j dx^i$$, things won't work right -- for one thing, $$a$$ will be zero, since it's antisymmetric in its indices.
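That "$$a$$ will be zero" claim can be checked directly in components: contracting an antisymmetric $$a_{ij}$$ against a symmetric stand-in for commuting differentials gives zero. A throwaway numerical sketch:

```python
import numpy as np

# If dx^i dx^j were symmetric (commuting), the area 2-form would pair
# an antisymmetric a_ij with a symmetric object -- and that full
# contraction is identically zero.  Arbitrary antisymmetric a in 3D:

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))
a = a - a.T                      # antisymmetric: a_ij = -a_ji

sym = rng.normal(size=(3, 3))
sym = sym + sym.T                # stand-in for commuting dx^i dx^j

print(np.tensordot(a, sym))      # 0 up to floating-point rounding
```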

Probably the most common form you'll run into is a connection, which is a 1-form valued in some Lie algebra,
$$A = dx^i A_i$$
with $$A_i \in Lie$$.

You want to do things like integrate this connection along a curve to get a Lie algebra element, so it has to be a 1-form.

Now the fun happens when you pick a Clifford algebra (or Geometric Algebra) as the Lie algebra. Then you have Clifford algebra valued differential forms, or Clifforms, to play with.

In GR, the equivalence principle says the spacetime around every manifold point is locally like Minkowski space. This means there's a map at every point that takes vectors to Clifford vectors, and this is the frame or vielbein:
$$e = dx^i (e_i)^\alpha \gamma_\alpha$$
a Clifford vector valued 1-form. It's much more natural to use that than a metric, and everything you can do with a metric (and more) you can do with the frame. Nice flat Minkowski space is really the domain of Geometric Algebra. For example, the scalar product of two vectors is
$$(\vec{v} e)\cdot(\vec{u} e) = v \cdot u = v^\alpha u_\alpha = v^i u^j g_{ij}$$
It only gets more fun from there. For example, you can solve Cartan's equation,
$$0 = d e + \omega \times e$$
explicitly for the Clifford bivector valued 1-form connection, $$\omega$$, or write that out in components and solve it that way.

Expressions are extremely concise using Clifford valued forms.

Last edited: Mar 24, 2006
7. Mar 24, 2006

### mathwonk

even $d\theta$ is already fun. it allows one to define winding numbers and prove the fundamental theorem of algebra, the brouwer fixed point theorem, and in a solid version also the non-existence of nonvanishing vector fields on a 2-sphere.

8. Mar 25, 2006

### Hurkyl

Staff Emeritus
One doesn't have to use an external Clifford algebra -- one typical construction of Clifford algebras is as the quotient of a tensor algebra. Once you've chosen a metric, you could apply this to the usual tensor algebra built upon the tangent & cotangent spaces.

In case the OP is not familiar with this construction...

Once you've chosen the metric, you can enforce some relations upon the tensor algebra, such as:

$$\vec{v} \otimes \vec{w} + \vec{w} \otimes \vec{v} \equiv 2 \langle \vec{v}, \vec{w} \rangle$$
$$\omega \otimes \vec{v} + \vec{v} \otimes \omega \equiv 2 \omega(\vec{v})$$
$$\omega \otimes \varphi + \varphi \otimes \omega \equiv 2 \langle \omega, \varphi \rangle$$

(note that these imply, for example, $\vec{v} \otimes \vec{v} \equiv \langle \vec{v}, \vec{v} \rangle$)

In this way, the vectors and covectors become the ordinary vectors in your geometric algebra, and modulo these relations, the tensor product is the geometric product!
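One concrete model of this quotient for Cl(2), as a sketch (the 2x2 Pauli-style matrices are a standard representation; the variable names are mine): represent the basis vectors by matrices so that the matrix product realizes the geometric product, and the relation $\vec{v} \otimes \vec{w} + \vec{w} \otimes \vec{v} \equiv 2 \langle \vec{v}, \vec{w} \rangle$ becomes a matrix identity.

```python
import numpy as np

# Represent e_1, e_2 of Cl(2) by anticommuting matrices that square
# to the identity; then the matrix product *is* the geometric product.
s1 = np.array([[0., 1.], [1., 0.]])    # e_1
s3 = np.array([[1., 0.], [0., -1.]])   # e_2

def vec(v):
    """Embed a 2D vector as a Clifford element."""
    return v[0] * s1 + v[1] * s3

v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0])
V, W = vec(v), vec(w)

# The symmetric part of the product is the scalar <v, w> times the
# identity -- exactly the first relation above:
print(V @ W + W @ V)
print(2 * np.dot(v, w) * np.eye(2))
```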

(Don't take my post as a suggestion that this approach is any better than garrett's! I just wanted to point it out as something else one can do. I'm not yet comfortable with the moving frame, although I acknowledge it's supposed to be very useful)

A question for garrett:

Are we guaranteed to be able to do this globally? I know that in any Riemannian manifold we can construct a frame locally through orthonormalization... but we're not guaranteed to be able to do it globally. (e.g. there ought to be problems with the two-sphere)

9. Mar 25, 2006

### garrett

We can define the frame globally if and only if the manifold is parallelizable. If we can't define it globally, such as is the case for the 2D sphere, then we have to define the frames on each different patch over the manifold and glue them together on the overlaps with transition functions -- just as you do with any connection. The transition functions for the frame are elements of the Lorentz group.

Last edited: Mar 25, 2006
10. Mar 25, 2006

### garrett

On the subject of vector and form algebra, I'd like to share a notation I like to use...

I put \underrightarrow's under forms, corresponding to their grade. So, a 2-form is
$$\underrightarrow{\underrightarrow{F}} = \frac{1}{2} \underrightarrow{dx^i} \underrightarrow{dx^j} F_{ij}$$
With this notation, vectors multiplying forms contract, and the over and under arrows add up. This vector multiplying a 2-form gives a 1-form:
$$\vec{v} \underrightarrow{\underrightarrow{F}} = v^i \vec{\partial_i} \frac{1}{2} \underrightarrow{dx^j} \underrightarrow{dx^k} F_{jk} = v^i \frac{1}{2} (\delta^j_i \underrightarrow{dx^k} - \underrightarrow{dx^j} \delta^k_i) F_{jk} = \underrightarrow{dx^k} v^j F_{jk}$$
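In plain components, that contraction is just $$v^j F_{jk}$$; a quick numerical sketch (arbitrary numbers, illustrative names):

```python
import numpy as np

# A vector contracted with an antisymmetric 2-form F_jk gives the
# 1-form with components v^j F_jk -- garrett's result above.

F = np.array([[0., 2., -1.],
              [-2., 0., 3.],
              [1., -3., 0.]])      # antisymmetric 2-form components F_jk
v = np.array([1.0, -1.0, 2.0])     # vector components v^j

one_form = v @ F                   # (v F)_k = v^j F_jk
print(one_form)                    # [ 4. -4. -4.]
```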

To show off, Cartan's equation using Clifforms,
$$0 = \underrightarrow{d} \underrightarrow{e} + \underrightarrow{\omega} \times \underrightarrow{e}$$
can be solved explicitly for
$$\underrightarrow{\omega} = - \vec{e} \times (\underrightarrow{d} \underrightarrow{e}) + \frac{1}{4} (\vec{e} \times \vec{e})(\underrightarrow{e} \cdot (\underrightarrow{d} \underrightarrow{e}))$$
I don't expect that to make much sense to others without practice using Clifforms. But it does look pretty, eh?

11. Mar 25, 2006

### kryptyk

Thanks, Garrett - I'll have to spend a bit more time looking over what you've posted.

I'm still just barely starting to get the deeper insights of differential geometry, so I claim no expertise. However, the characterization of tangent spaces at different points on a manifold does seem fundamental - and as such, it seems Geometric Algebra provides a useful way to describe it via a pseudoscalar-valued function. I never really worked much with traditional tensor methods as such, but the moving frame idea seems so powerful I can hardly imagine doing differential geometry without it.

Let $$\{\mathbf{e}_k(\mathbf{x})\}$$ be a frame for the tangent space at $$\mathbf{x}$$. Then we can construct a pseudoscalar-valued function:

$$I_{m}(\mathbf{x}) = \mathbf{e}_{1}(\mathbf{x}) \wedge\mathbf{e}_{2}(\mathbf{x})\wedge \ldots\wedge \mathbf{e}_{m}(\mathbf{x})$$

$$I_m$$ has a unique inverse $$I_{m}^{-1}$$ given by reversing the order of the vectors and dividing by the square of its magnitude. Now we can do all kinds of neat things, like compute reciprocal frames with the following formula:

Suppressing the parameter,

$$\mathbf{e}^k = (-1)^{k-1}\, (\mathbf{e}_1 \wedge \ldots \wedge \check{\mathbf{e}}_k \wedge \ldots \wedge \mathbf{e}_m) \, I_{m}^{-1}$$

where the check denotes an exclusion of that vector from the product.
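Numerically, the same reciprocal frame comes out of a plain matrix inverse - the rows of the inverse of the matrix whose columns are the frame vectors - which is an easy way to sanity-check the pseudoscalar formula. A sketch with a made-up frame in Euclidean 3-space:

```python
import numpy as np

# With the frame vectors as the columns of E, the rows of E^{-1} are
# the reciprocal vectors: e^k . e_j = delta^k_j by construction.
# (In Euclidean space this agrees with the pseudoscalar formula.)

e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 2.0])
E = np.column_stack([e1, e2, e3])

recip = np.linalg.inv(E)           # row k is the reciprocal vector e^k
print(recip @ E)                   # identity matrix: e^k . e_j = delta^k_j
```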

Furthermore, if we embed our manifold $$S^m$$ into a higher-dimensional manifold $$E^n$$, we can treat the tangent spaces of $$S^m$$ as vector spaces in $$E^n$$. Then to find the vectors $$\mathbf{u}$$ in the tangent space of $$S^m$$ at $$\mathbf{x}$$ we solve:

$$\mathbf{u} \wedge I_{m}(\mathbf{x}) = 0$$

$$I_m$$ is completely frame-independent and completely characterizes the orientation of the manifold. Could we use it to define connections?
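Since $$\mathbf{u} \wedge I_{m} = 0$$ just says that $$\mathbf{u}$$ is linearly dependent on the frame vectors, an equivalent numerical test is a rank check; a sketch (the helper name is mine):

```python
import numpy as np

# u ^ I_m = 0 exactly when u lies in the span of e_1, ..., e_m, so an
# equivalent test is whether appending u to the frame raises its rank.
# Tangent plane of a 2-surface sitting inside R^3:

e1 = np.array([1.0, 0.0, 1.0])
e2 = np.array([0.0, 1.0, 1.0])

def in_tangent_space(u):
    frame = np.column_stack([e1, e2])
    return np.linalg.matrix_rank(np.column_stack([frame, u])) == 2

print(in_tangent_space(e1 + 3 * e2))             # True: u ^ I_2 = 0
print(in_tangent_space(np.array([0., 0., 1.])))  # False: u sticks out
```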

P.S. Why is it that capital letters in TeX expressions like $$X$$ come out fine but lowercase letters like $$x$$ don't? Or is it just me that's seeing this?

Last edited: Mar 25, 2006
12. Mar 25, 2006

### Hurkyl

Staff Emeritus
Use [ itex ] instead of [ tex ] for stuff in paragraphs.

13. Mar 25, 2006

### garrett

Hey kryptyk,
Are you learning differential geometry from Hestenes' "Clifford Algebra to Geometric Calculus"? Don't get me wrong, it's a good book. But it's really not what you want to learn differential geometry from. There is some overlap, but for the most part people won't know what you're talking about...

Do you have any other differential geometry books or at least papers?

14. Mar 25, 2006

### kryptyk

Nope, I don't really have any good books on differential geometry. I'm sure I can find some good materials online; perhaps you could recommend some books and/or papers?

15. Mar 26, 2006

### George Jones

Staff Emeritus
Why do people always omit Garret's name when referring to this book?

Regards,
George

16. Mar 26, 2006

### garrett

Because David Hestenes is very popular as the guy who brought back Clifford algebra as applied to physics in a number of very accessible papers. Also, Garret Sobczyk is the second author -- second authors usually get short shrift, though I suspect he did most of the work in the book.

My favorite books on differential geometry applied to physics are
Nakahara, "Geometry, Topology, and Physics"
Frankel, "The Geometry of Physics"

Online resources...
You can always learn a lot by cruising around Wikipedia:
http://en.wikipedia.org/wiki/Differential_form
This has lots of pretty pictures, and shows the application to E&M:
http://www.ee.byu.edu/forms/forms_teaching_warnick.pdf
And I've started putting this together, but there's not much there yet:
http://deferentialgeometry.org/#[[vector-form algebra]]

17. Mar 27, 2006

### Doodle Bob

I would like to point out that there is actually no "problem" in the above example. In both uses of $A\, dx+B\, dy$, we're thinking of the same thing.

In order for that integral to make any sense, S needs to be a curve, so we can parametrize it, $t \mapsto S(t)$, with domain, say, [a,b]. Assuming that S is for the most part smooth, it has a tangent vector $S_*(t)$ at each t. Thus we can evaluate $(A\, dx+B\, dy)(S_*(t))$ at each t, which is then simply a function over the interval [a,b]. The integral $\int_S (A\, d x + B\, d y)$ is then simply the standard integral of a real-valued function over an interval. So, in essence, that integral is (b-a) times the average value of the 1-form $A\, dx+B\, dy$ on the tangents of the curve S.
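That recipe is easy to carry out numerically; a sketch with the illustrative choices $A = -y$, $B = x$ around the unit circle (for which the exact answer is $2\pi$):

```python
import numpy as np

# Pull the 1-form -y dx + x dy back along the unit circle
# S(t) = (cos t, sin t), t in [0, 2*pi], and integrate the resulting
# real-valued function of t with a trapezoid sum.

def A(x, y): return -y
def B(x, y): return x

t = np.linspace(0.0, 2 * np.pi, 20001)
x, y = np.cos(t), np.sin(t)
dx_dt, dy_dt = -np.sin(t), np.cos(t)            # tangent vector S_*(t)

integrand = A(x, y) * dx_dt + B(x, y) * dy_dt   # the 1-form on S_*(t)
integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

print(integral)   # ~6.2832, i.e. 2*pi
```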

There are many, many different ways to view diffl. forms and their uses, all motivated by certain desired results. For example, the relationship between global forms and their integrals as seen above also allows us to move back and forth between geometry and topology, and gives us wonderful results such as de Rham's theorem.

But it boils down to choosing a formalism that allows one to study phenomena on geometric objects in the way one wants to. The point is that you shouldn't confuse the geometry (or topology) for the formalism, and by starting with the geometry you get a better sense for the reason of the formalism.

When it comes to resources on the web re: differential geometry, the old maxim is true: you get what you pay for. There is simply no good substitute for a good text and a good flesh-and-bones teacher. I've recommended these before, but I'll do it again:

1. Differential Geometry by John Oprea
2. An Introduction to Differentiable Manifolds... by William Boothby
3a. Differential Geometry of Curves and Surfaces by Do Carmo
3b. Riemannian Manifolds by Do Carmo

Last edited: Mar 27, 2006
18. Mar 29, 2006

### Haelfix

Every time I read Geometric Algebra notation, my head feels like it's going to fall off, and I get very irate.

I can't for the life of me understand what they are doing half the time. I know they are somehow less general than the standard treatment, but making what they are doing precise in regular language is an exercise in translation that I just don't feel compelled to do. For instance, what they call Clifford algebras is really a subset of the more general *algebraic* construction, and I also can't understand why they insist that every exterior algebra has to be understood solely through Clifford algebras.

The even more annoying thing is I cannot find a single piece of literature criticising this approach, or at least trying to translate between the two in a way that makes the restrictions clear.

Incidentally, vielbeins are used in standard textbooks every day; they are by no means a GA invention.

19. Mar 29, 2006

### kryptyk

Lobotomy

Not to digress entirely here - but I feel an interesting point has been brought up by Haelfix, and so I feel compelled to expound on a little mathematical philosophy.

Our brain is capable of working with symbolic/linguistic systems as well as with spatial/visual perceptions. To deny the latter ability any part in our mathematical and physical reasoning is effectively to give ourselves a lobotomy. If you need any greater evidence of this, eavesdrop sometime on physicists discussing tensor analysis or complex eigenspaces.

Mathematical tools are just that...tools. To the extent they can help us solve problems, they are useful tools. I have no problem with abstract algebraic constructions as such - but without geometric intuition I feel our tools are of extremely limited value.

As a simple example, consider expressing a rotation of a vector on some plane by angle $\theta$. In 2 dimensions, there is but one plane in which we can rotate. We can simply use matrices of the form:

$$\left(\begin{array}{cc} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array}\right)$$

In $n$ dimensions, there are $n(n-1)/2$ independent coordinate planes in which we can rotate. If the plane of rotation is spanned by two of our basis vectors, it is easy to construct a matrix representation of the rotation. But if not, we must use constructions that can be quite cumbersome to use in practice.

This is a great example of where I think GA offers some significant advantages. Having found a unit bivector $B$ on the plane of rotation with appropriate orientation, we can construct a rotor:

$$R_{\theta} = e^{\frac{B \theta}{2}}$$

Now it is simple to express the rotation of vector $\mathbf{x}$ on plane $B$ by:

$$\underline{R_{\theta}}(\mathbf{x}) = R_{\theta}^{-1} \mathbf{x} R_{\theta}$$

$\underline{R_{\theta}}$ will rotate the component of $\mathbf{x}$ on plane $B$ by angle $\theta$ while leaving the component of $\mathbf{x}$ orthogonal to $B$ unchanged. This approach is identical for any number of dimensions and is entirely frame-independent.
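Here's a numerical sketch of that sandwich in the standard Pauli-matrix representation of the 3D algebra (conventions as in this post, $R_{\theta}^{-1} \mathbf{x} R_{\theta}$; the variable names are mine):

```python
import numpy as np

# Vectors map to v . sigma, the plane bivector is B = e_1 e_2, and
# since B^2 = -1 the exponential reduces to cos + B sin.

s1 = np.array([[0, 1], [1, 0]], dtype=complex)      # e_1
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # e_2
s3 = np.array([[1, 0], [0, -1]], dtype=complex)     # e_3

B = s1 @ s2                   # unit bivector for the e_1 e_2 plane
I2 = np.eye(2, dtype=complex)

theta = np.pi / 3
R = np.cos(theta / 2) * I2 + np.sin(theta / 2) * B    # e^{B theta / 2}
R_inv = np.cos(theta / 2) * I2 - np.sin(theta / 2) * B

rotated = R_inv @ s1 @ R      # rotate e_1 in the e_1 e_2 plane

# e_1 goes to cos(theta) e_1 + sin(theta) e_2, while e_3 (orthogonal
# to the plane) is left untouched:
print(np.allclose(rotated, np.cos(theta) * s1 + np.sin(theta) * s2))  # True
print(np.allclose(R_inv @ s3 @ R, s3))                                # True
```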

What's more, this notation applies to non-Euclidean geometries as well. It is particularly well suited to expressing Lorentz transformations in the hyperbolic geometry of Minkowski spacetime.

And we haven't even touched the topics of multivector analysis and analytic function theory yet...but those are topics for another day in another thread.

Having said that, if you don't find the GA notation particularly useful perhaps it is not the best tool for you to use. But evidently enough people find it useful enough to apply it to many diverse types of problems.

P.S. How do I get the subscripts in the above expressions to line up?

Last edited: Mar 29, 2006
20. Mar 29, 2006

### kryptyk

BTW, I haven't quite found anyone who actually insists on this - it seems you've constructed a straw man here.