ObsessiveMathsFreak said:
On an aside, differential forms notation is terrible. Everything is just so lax!
I agree!
nrqed said:
Normally, we use nabla to represent the gradient operator which is not d.
The funny thing is, there are two different usages of the nabla operator. In Spivak, volume I, he defines:
\nabla = \sum_{i=1}^n D_i \frac{\partial}{\partial x^i}
and that \mathop{\mathrm{grad}} f = \nabla f
On the other hand, in volume II, we have the (Koszul) connection, for which \nabla T is, by definition, the map X \rightarrow \nabla_X T. In particular, for a scalar field, we have \nabla_X f = X(f), so that \nabla f = df.
The funny thing is -- when I was taking multivariable calculus, I got into the habit of writing my vectors as column vectors, and my gradients as row vectors... so in effect, what I learned as the gradient
was a 1-form!
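To make that row-vector picture concrete, here's a small sympy sketch (my own illustration, not from the thread): the row of partials, applied to a column tangent vector by matrix multiplication, returns the directional derivative, which is exactly what df(X) = X(f) says.

```python
# Sketch (illustrative, with an arbitrary example function f = x^2 y):
# the "row-vector gradient" of f behaves as the 1-form df.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y

# Row vector of partials: the 1-form df in the basis (dx, dy).
df = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])  # 1x2 row

# A column (tangent) vector X.
X = sp.Matrix([3, 1])  # 2x1 column

# df(X) = row * column = the directional derivative X(f).
directional = sp.expand((df * X)[0])
print(directional)  # 6*x*y + x**2
```

The matrix product of a row with a column is a scalar, mirroring how a 1-form eats a vector and returns a number.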
nrqed said:
For a visual interpretation, applying d basically gives the "boundary" of the form. Thinking of a one-form as a series of surfaces, if the surfaces never terminate (because they extend to infinity or they close up on themselves) then applying the exterior derivative gives zero.
There is supposed to be a duality between the exterior derivative and the boundary operator. (In fact, the exterior derivative is also called a "coboundary operator") But I think you're taking it a little too literally! I like to try and push the picture that forms "measure" things, and the (n+1)-form
dw measures an (n+1)-dimensional region by applying
w to the boundary of the region.
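That "measure the boundary" picture is just Stokes' theorem, \int_c d\omega = \int_{\partial c} \omega. Here's a sympy check of the 2-dimensional case (Green's theorem) on the unit square, with an arbitrary 1-form of my own choosing:

```python
# Verify Stokes' theorem in the plane: integrating dw over [0,1]^2
# equals integrating w over the (counterclockwise) boundary.
import sympy as sp

x, y, t = sp.symbols('x y t')

# An arbitrary 1-form w = P dx + Q dy (chosen for illustration).
P, Q = x*y, x**2

# dw = (dQ/dx - dP/dy) dx ^ dy
dw = sp.diff(Q, x) - sp.diff(P, y)

# Left side: integral of dw over the unit square.
lhs = sp.integrate(dw, (x, 0, 1), (y, 0, 1))

# Right side: integral of w over the four boundary edges,
# each parametrized by t in [0, 1].
def edge(xs, ys):
    integrand = (P.subs({x: xs, y: ys}) * sp.diff(xs, t)
                 + Q.subs({x: xs, y: ys}) * sp.diff(ys, t))
    return sp.integrate(integrand, (t, 0, 1))

rhs = (edge(t, 0*t)          # bottom, left to right
       + edge(1 + 0*t, t)    # right side, upward
       + edge(1 - t, 1 + 0*t)  # top, right to left
       + edge(0*t, 1 - t))   # left side, downward

print(lhs, rhs)  # 1/2 1/2
```

Both sides come out to 1/2: the double integral of dw over the region agrees with w applied along its boundary.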
ObsessiveMathsFreak said:
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.
Using the Greek alphabet, instead of the Roman one, isn't enough?
ObsessiveMathFreak said:
especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.
How can they get mixed up?
nrqed said:
And yet, whenever books define integration over differential forms, they always get to the point where they define an integral over differential forms as an integral in the "usual" sense of elementary calculus. These expressions *do* contain the symbols dx, dy, etc. So what do they mean *there*, if not "infinitesimals"!
The usual sense of elementary calculus doesn't have infinitesimals either. Depending on the context, dx might be a formal symbol indicating with respect to which variable integration is to be performed, or it might denote which measure is to be used... but certainly not an infinitesimal.
Even in nonstandard analysis, which does have infinitesimals, the symbol dx is still not used to denote an infinitesimal. (Though you would use honest-to-goodness nonzero infinitesimals to actually compute the integral.)
ObsessiveMathsFreak said:
I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;
\int_{\sigma} \omega
...is not a well-defined quantity, because you haven't specified any orientation!
...
By itself, the form does not specify a measure.
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.
If c is our surface, then by definition:

\int_c \omega = \int_{[0, 1]^n} \omega\left( \frac{\partial c}{\partial x^1}, \cdots, \frac{\partial c}{\partial x^n} \right) \, dV

where dV is the usual volume form on \mathbb{R}^n. This is, of course, also equal to

\int_{[0, 1]^n} c^*(\omega)

on the parameter space, and there we could just take the obvious correspondence between n-forms and measures.
The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.
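As a concrete instance of that definition (my own example, not from the thread): take the 1-form w = -y dx + x dy and the singular 1-cube c(t) = (cos 2πt, sin 2πt), the unit circle traversed once. Plugging dc/dt into w and integrating over the parameter space [0, 1] gives:

```python
# Integrate a 1-form over a parametrized curve, directly from the
# definition: pull back along c and integrate over [0, 1].
import sympy as sp

t, x, y = sp.symbols('t x y')

# The 1-form w = -y dx + x dy, written as its two components.
wx, wy = -y, x

# The "surface" c : [0, 1] -> R^2, the unit circle traversed once.
cx, cy = sp.cos(2*sp.pi*t), sp.sin(2*sp.pi*t)

# w(dc/dt), i.e. the pullback c^*(w), as a function of t.
integrand = (wx.subs({x: cx, y: cy}) * sp.diff(cx, t)
             + wy.subs({x: cx, y: cy}) * sp.diff(cy, t))

result = sp.integrate(sp.simplify(integrand), (t, 0, 1))
print(result)  # 2*pi
```

Reversing the parametrization (replacing t with 1 - t) flips the sign, which is exactly the orientation dependence being discussed above.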