How Can I Visualize the Exterior Derivative 'd' in Differential Geometry?

  • Thread starter: r16
  • #51
Hurkyl said:
It was a light-hearted grumpy face, not a grumpy grumpy. :smile:
Ok! I am really glad to hear that!

When we're doing a Riemann integral, the "right" imagery is that:

"I've divided my region into sufficiently small cubes, computed a value for each cube, and added them up to get something close enough to the true answer".

Even if we're doing nonstandard analysis, it's still right to use this imagery -- it's just that we have infinitesimal numbers to use (which are automatically "sufficiently small"), and are capable of adding transfinitely many of them, getting something infinitesimally close to the true answer.


The way infinitesimals are usually imagined is just a sloppy way of imagining the above -- we want to invoke something so small that it will automatically be "sufficiently close", and then promptly forget about the approximations and imagine we're computing an exact value on each cube, adding all the exact values, and getting exactly the answer.


I've seen someone suggest a different algebraic approach to the integral that might be more appropriate for physicists, based on the mean value theorem. I think it works out to the following:

For any "integrable" function f, we require that for any a < b < c:

I_a^b(f) + I_b^c(f) = I_a^c(f)

and

\min_{x \in [a, b]} f(x) \leq \frac{1}{b-a} I_a^b(f) \leq \max_{x \in [a, b]} f(x)

These axioms are equivalent to Riemann integration:

I_a^b(f) = \int_a^b f(x) \, dx

And you could imagine the whole Riemann limit business as simply being a calculational tool that uses the above axioms to actually "compute" a value for the integral. (At least, if you count taking a limit as a "computation".)
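As a concrete check, here's a minimal numerical sketch of both axioms (my own toy example, not part of the argument; the function f = sin and the endpoints are arbitrary choices, and a midpoint Riemann sum stands in for I_a^b):

Code:
import numpy as np

def I(f, a, b, n=100_000):
    # midpoint Riemann sum standing in for I_a^b(f)
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.5)
    return np.sum(f(x)) * h

f = np.sin
a, b, c = 0.0, 1.0, 2.0

# Axiom 1: additivity over adjacent intervals.
print(abs(I(f, a, b) + I(f, b, c) - I(f, a, c)))   # ~ 0

# Axiom 2: the average value of f on [a, b] lies between its min and max.
avg = I(f, a, b) / (b - a)
xs = np.linspace(a, b, 1001)
print(f(xs).min() <= avg <= f(xs).max())           # True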

(Hey! This goes back to the "define things in terms of the properties it should have, then figure out how to calculate" vs. the "define things via a calculation, then figure out what properties it has" debate. :smile:)



So, for your electric potential problem, I guess this suggests that you should imagine this:

You make the guess that the potential should be, say, the integral of f(x) over your region. You then observe that:

(1) The contribution to potential from two disjoint regions is simply added together.
(2) The average contribution to the potential from any particular region lies between the two extremes of f(x).

Therefore, that integral computes the potential. (2) is intuitively obvious if you have the right f(x), but I don't know how easy it would be to check rigorously. This check can probably be made easier.


To be honest, I haven't really tried thinking much this way. (Can you tell? :wink:) I'm content with the "sufficiently close" picture.

Ok... this language I can relate to. It makes sense to me. (I guess that I use the word "infinitesimal" because I imagine using some average value in a region and adding the results from all the regions to get an approximate answer. But then I imagine going back, subdividing into smaller regions, using an average value in those regions, doing the sum, and continuing like this to see if the sum converges to a certain value. In that limit I imagine the regions becoming "infinitesimally small".) Is it wrong to call them infinitesimals because one never really takes the exact limit as the regions vanish?

In any case, in the language used above, what is a "measure"?

Regards

Patrick
 
  • #52
A measure is something that tells you how big (measurable) subsets of your space are. For a plain vanilla measure, you have:

The size of any (measurable) subset is nonnegative.
The size of the whole is the sum of the sizes of its parts. (For up to countably many parts)

To integrate something with respect to a measure, instead of partitioning the domain, we instead partition the range! The picture is:

We divide R into sufficiently small intervals. For each interval, we compute the size of the set {x | f(x) is in our interval}, and multiply by a number in our interval. Add them all up, and we get something sufficiently close to the true value.
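For instance, here's a crude numerical sketch of that picture (a toy example of my own; f(x) = x^2 on [0, 1] with the usual Lebesgue measure, and the size of each preimage estimated by sampling on a fine grid):

Code:
import numpy as np

f = lambda x: x**2
xs = np.linspace(0.0, 1.0, 200_001)     # fine grid on the domain
dx = xs[1] - xs[0]
vals = f(xs)

levels = np.linspace(0.0, 1.0, 1001)    # divide the range into small intervals
total = 0.0
for lo, hi in zip(levels[:-1], levels[1:]):
    size = np.count_nonzero((vals >= lo) & (vals < hi)) * dx  # measure of the preimage
    total += lo * size                  # multiply by a number in the interval
print(total)                            # ~ 1/3, the true value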
 
  • #53
Hurkyl said:
Using the Greek alphabet, instead of the Roman one, isn't enough? :smile:
In my case, I've been using the Greek alphabet in mathematics for so long that there is really no distinction. In fact, a lot of Greek letters get used more than Latin ones. I'm probably not alone here! I get the feeling this is some kind of carryover from the days when, perhaps, Greek letters were harder to typeset.

Hurkyl said:
How can they get mixed up?
One is a form, one is a variable of integration. It's a pretty big difference.

Hurkyl said:
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.

Surfaces don't always come with parameterisations, and the notation \int_{\sigma} \omega implies that \sigma is a surface with a parametrization as yet unspecified. It could be \sigma \equiv \{ (x,y,z) : x^2 + y^2 + z^2 = r^2 \}, which is a well-defined surface without a parametrisation.

Hurkyl said:
The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.

That's my point entirely. \int_{\sigma} \omega is simply a lax way of specifying something. There's no parameterisation, but in order to actually get down to it and evaluate the integral, you must specify a parameterisation. One can talk about orientation as well, but that's effectively a change in the parameterisation, or pull-back if you will.

This laxity really comes into focus when you come to the presentation of Stokes's Theorem, namely:
\int_{\sigma} d\omega = \int_{\partial \sigma} \omega
This notation is a potential minefield. Example:
\sigma \equiv \{ (x,y) : x^2 + y^2 \leq 1 \}
\partial \sigma \equiv \{ (x,y) : x^2 + y^2 = 1 \}

But of course, two people can evaluate each integral and come up with answers that differ in sign. One might say that the parameterisation of one surface determines that of the other, but hold on! Taken atomically, each integral leaves one free to specify a parameterisation. If I give each side of the equation to two people, assuming they choose random orientations, there is only a one in two chance that their answers will agree, and only a one in four chance that I will obtain answers congruent with my own.

In short, the essential problem here is that, using standard notation, a computer will be unable to evaluate the integral of a form. If you wish it to do so, then you must give a surface complete with parameterisation. In short, you must ask it to evaluate:
\int_{\sigma} \omega d\sigma
Or, more correctly;
\int_{\phi(X)} \omega(D_X \phi(X)) dX = \int_{X} \phi^*\omega dX

where \phi is the map from X that parameterises the surface. Even this is not strictly correct, as the vectors that the pullback \phi^*\omega acts on in the X domain are not specified. You can generally assume that they are the canonical directions, but again it is really too ambiguous, as the pullback need not have pulled back to such a straightforward domain at all. It should really be written as

\int_{X} \phi^*\omega(\mathbf{e}_1^X, \ldots, \mathbf{e}_n^X ) dX
to make clear what you are evaluating.
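To see the orientation ambiguity numerically, here's a rough sketch (the form \omega = -y dx + x dy and the semicircle are arbitrary choices of mine): the same point set, parameterised with opposite orientations, gives answers of opposite sign.

Code:
import numpy as np

# Integrate omega = -y dx + x dy over the upper unit semicircle via a pullback.
t = np.linspace(0.0, np.pi, 100_001)
dt = np.diff(t)

def integrate(phi, dphi):
    x, y = phi(t)
    xdot, ydot = dphi(t)
    integrand = -y * xdot + x * ydot      # phi^* omega applied to d/dt
    return np.sum(integrand[:-1] * dt)    # left Riemann sum

ccw  = lambda s: (np.cos(s), np.sin(s))               # counter-clockwise
dccw = lambda s: (-np.sin(s), np.cos(s))
cw   = lambda s: (np.cos(np.pi - s), np.sin(np.pi - s))  # same set, reversed
dcw  = lambda s: (np.sin(np.pi - s), -np.cos(np.pi - s))

print(integrate(ccw, dccw))   # ~ +pi
print(integrate(cw, dcw))     # ~ -pi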

Honestly, the standard notation of differential forms is like some of the rough-work scribbles you would find in the back of someone's notes! Understandable only by the author, and only at the time, and only in the correct context. It's no wonder people don't use them. They're simply not mature enough for practical application.
 
  • #54
the complicated notation is only used to teach all the details. in practice differential forms are more succinct than what they replace. look at maxwell's equations e.g., or stokes thm in form notation as opposed to the old way.


as to exact meaning of the notation in stokes,
it is in the hypothesis of stokes thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

then the theorem as stated says that the two sides of the equation are equal under ANY choice of parametrization, such that it is compatible with the given orientation, and where the orientation on the boundary is assumed compatible with that of the manifold.

what this means is also specified in the hypotheses, namely that when an oriented basis for the boundary space is given, then supplementing it by an outward (or inward) vector (it must be specified which, and i forget if it matters), the result is an oriented basis for the manifold space.

these details are completely given in careful standard treatments such as spivak, calculus on manifolds.

if you are reading only, say, bachman, and he omits a few details, then i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

the students greatly enjoyed the exercise and got a lot out of reading it.

but if you are a professional, you need to read a professional treatment.
 
  • #55
i am also a picky expert, and if you followed the earlier thread on this book you know bachman's imprecision and errors drove me right up the wall.

but his book was a terrific success for its intended audience, namely uncritical undergrads.
 
  • #56
mathwonk said:
if you are reading only, say, bachman, and he omits a few details, then i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

I have at least one other book, Differential Forms and Connections by R.W.R. Darling. This one is, to say the least, unhelpful. To be fair to Bachman, his is the only book I've seen so far which gives a geometric explanation of forms, and the only one so far that has actually explained to me what a form is. The others have various definitions that seem to go nowhere.

I was thinking about getting Spivak's book, but I don't know whether I need just Calculus on Manifolds, or the full blown set of A Comprehensive Introduction to Differential Geometry.

Edit:
The notation I was griping about above isn't at all exclusive to Bachman. It's the standard fare as far as I can tell.
 
  • #57
ObsessiveMathFreak said:
One is a form, one is a variable of integration. It's a pretty big difference.
But the question is if the difference makes... er... a difference. :wink:


Surfaces don't always come with parameterisations
I'm using surface here as the higher dimensional analog of a curve.

But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.

You can generally assume that they are the canonical directions
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.


Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

It's just like how we talk about the ring R, rather than the ring (R, +, *, 0, 1)... and how we talk about the ring (R, +, *, 0, 1) without explicitly specifying what we mean by R, +, *, 0, 1, and by the parentheses notation. :smile:
 
  • #58
Hurkyl said:
But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.
...
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.
...
Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

You're absolutely right, and so is Spivak. There is no point in talking about overly general vectors, manifolds, and variables. Ultimately, we have to compute things using the standard basis in R^n, so everything is perfectly well defined using that space.

The terrible truth is, my first introduction to forms, and the main reason I'm studying them, was from Fourier Integral Operators by Duistermaat. I still haven't fully recovered, as you can tell.

mathwonk said:
it is in the hypothesis of stokes thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

By the way, thanks for that. Now I get it. The manifold has to have an orientation. But I still think, in my own mind, that including the d\sigma makes this more explicit.
 
  • #59
well, you might want to write up your own account of the stuff. i did that in 1972 or so when i taught advanced calc the first time. i wrote it all out by hand at least 2-3 times, and it began to make sense to me. i had so many copies in fact i could practically give each class member his own original set of notes.

i then applied stokes to prove the brouwer fixed point theorem and the vector fields on spheres theorem of hopf. i learned a lot that way.
 
  • #60
then we had a seminar out of spivak's vol 1 of diff geom, the one giving background on manifolds.

i think calc on manifolds is a good place to start, and it's cheaper. the whole kaboodle is a bit long for me. but volume 2 is a classic, and vol 1 is nice too, especially for the de rham theory. i don't know what's in the rest as i do not own them, but gauss bonnet is appealing sounding.

but i always like to begin on the easiest most elementary version of a thing.

guillemin pollack is nice but kind of a cheat, as they define things in special ways to make the proofs easier, so as i recall their gauss bonnet theorem is kind of a tautology. i forget, but maybe they define curvature in a "begging the question" kind of way.
 
  • #61
garrett said:
This is hard to believe until you play with it, but in differential geometry integration really is nothing but the evaluation of Stokes theorem:
\int_{V} \underrightarrow{d} \underbar{\omega} = \int_{\partial V} \underbar{\omega}
Think about how that works in one dimension and you'll see it's the same as the usual notion of integration. :) First you find the anti-derivative, then evaluate it at the boundary.

This statement was a little opaque, so I'll flesh it out a bit. Integrate an arbitrary 1-form, f(x)\underrightarrow{dx}, in one dimension over the region, V, from x_1 to x_2. Stokes' theorem says this can be done by finding a 0-form, \omega, that is the anti-derivative of f:
f(x) \underrightarrow{dx} = \underrightarrow{d} \omega = \underrightarrow{dx} \frac{d}{dx} \omega
and "integrating" it at the boundary, which for a zero dimensional integral is simply evaluation at x_2 minus at x_1:
\int_{V} f(x) \underrightarrow{dx} = \int_{V} \underrightarrow{d} \omega = \int_{\partial V} \omega = \omega(x_2) - \omega(x_1)

This is why integrating over forms is the same as the integrals you're used to from physics problems -- the hard part, as always, is finding the anti-derivative, \frac{d}{d x} \omega = f(x).
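Here's a quick numerical sanity check of this (a rough sketch; the choices f = cos and \omega = sin are arbitrary): a direct Riemann sum over V agrees with evaluating the anti-derivative at the boundary.

Code:
import numpy as np

f = np.cos              # the 1-form is f(x) dx
omega = np.sin          # a 0-form with d(omega) = f(x) dx
x1, x2 = 0.0, 2.0

x = np.linspace(x1, x2, 100_001)
riemann = np.sum(f(x[:-1]) * np.diff(x))   # integrate the 1-form directly
print(riemann)                              # ~ 0.9093
print(omega(x2) - omega(x1))                # sin(2) - sin(0), the same number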
 
  • #62
garrett said:
This statement was a little opaque, so I'll flesh it out a bit. Integrate an arbitrary 1-form, f(x)\underrightarrow{dx}, in one dimension over the region, V, from x_1 to x_2. Stokes' theorem says this can be done by finding a 0-form, \omega, that is the anti-derivative of f:
f(x) \underrightarrow{dx} = \underrightarrow{d} \omega = \underrightarrow{dx} \frac{d}{dx} \omega
and "integrating" it at the boundary, which for a zero dimensional integral is simply evaluation at x_2 minus at x_1:
\int_{V} f(x) \underrightarrow{dx} = \int_{V} \underrightarrow{d} \omega = \int_{\partial V} \omega = \omega(x_2) - \omega(x_1)

This is why integrating over forms is the same as the integrals you're used to from physics problems -- the hard part, as always, is finding the anti-derivative, \frac{d}{d x} \omega = f(x).


Since you have a very pedagogical way of explaining things, I can't resist the temptation of asking you to now explain the integral of a two-form over a "surface", say. I have seen this given in several books and discussed here, but I would really appreciate seeing your way of presenting this (and the connection with the usual calculus definition).
I would appreciate it.
 
  • #63
nrqed said:
Since you have a very pedagogical way of explaining things, I can't resist the temptation of asking you to now explain the integral of a two-form over a "surface", say. I have seen this given in several books and discussed here, but I would really appreciate seeing your way of presenting this (and the connection with the usual calculus definition).
I would appreciate it.

Sure. Say we want to integrate a 2-form, \underrightarrow{\underrightarrow{F}} over a little patch, V, of a two dimensional manifold, with two patch coordinates (x^1,x^2) each going from 0 to 1 over the extent of the patch. The hard part is guessing a 1-form "anti-derivative" satisfying
\underrightarrow{\underrightarrow{F}} = \underrightarrow{d}\underrightarrow{\omega}
I say "a" anti-derivative rather than "the" because you can add a closed form to the anti-derivative and it will still be another good anti-derivative
\underrightarrow{\omega} \rightarrow \underrightarrow{\omega'} = \underrightarrow{\omega} + \underrightarrow{d} g

Once a good anti-derivative 1-form,
\underrightarrow{\omega} = \underrightarrow{dx^1} \omega_1(x^1,x^2) + \underrightarrow{dx^2} \omega_2(x^1,x^2)
is found, Stokes' theorem says you can just integrate it counter-clockwise along the one dimensional patch boundary curve and that will give you the integral of the 2-form over the patch. For the coordinate patch we chose,
\int_V \underrightarrow{\underrightarrow{F}} = \int_{\partial V} \underrightarrow{\omega} = \int_{(0,0)}^{(1,0)} \underrightarrow{dx^1} \omega_1 + \int_{(1,0)}^{(1,1)} \underrightarrow{dx^2} \omega_2 + \int_{(1,1)}^{(0,1)} \underrightarrow{dx^1} \omega_1 + \int_{(0,1)}^{(0,0)} \underrightarrow{dx^2} \omega_2
which we can evaluate by using Stokes theorem again for each leg around the curve, equivalent to the way we're used to.

For example, take the 2-form to be
\underrightarrow{\underrightarrow{F}} = \frac{1}{2} \underrightarrow{dx^i} \underrightarrow{dx^j} F_{ij} = \underrightarrow{dx^1} \underrightarrow{dx^2} x^1
A good anti-derivative is
\underrightarrow{\omega} = - \underrightarrow{dx^1} x^1 x^2
And integrating this around the patch gives one non-zero contribution:
\int_{(1,1)}^{(0,1)} - \underrightarrow{dx^1} x^1 x^2 = \frac{1}{2}
which equals the integral of our 2-form over our patch.
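Here's a small numerical cross-check of this example (a rough sketch; the grid size is an arbitrary choice): the direct two-dimensional sum over the patch and the boundary integral along the one contributing leg agree.

Code:
import numpy as np

n = 1000
u = (np.arange(n) + 0.5) / n          # midpoints of [0, 1]

# Direct integral of F = dx^1 dx^2 x^1 over the unit patch.
X1, _ = np.meshgrid(u, u)
print(np.sum(X1) / n**2)              # ~ 0.5

# Boundary leg from (1,1) to (0,1): x^2 = 1 and x^1 runs from 1 to 0,
# so each step is dx^1 = -1/n, with omega_1 = -x^1 x^2.
x1 = u[::-1]
print(np.sum((-x1 * 1.0) * (-1.0 / n)))   # ~ 0.5, the same number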
 
  • #64
a 2 form assigns an area to a parallelogram. so parametrize your surface by a map from a rectangle. then subdivide the rectangle into little rectangles.

map each little rectangle into the tangent space to your surface by the derivative of your parameter map.

you get a finite family of little rectangles in a finite set of tangent spaces to your surface, which give a piecewise polygonal approximation to your surface.

the 2 form assigns to each of these parallelograms an area. add those up and that approximates the integral over your surface. keep doing it with finer and finer subdivisions of your parametrizing rectangle and it converges to the integral of the 2 form over the surface.
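A sketch of this recipe in code (my own toy example; the surface phi and the 2-form z dx dy are arbitrary test choices) -- push each little parameter rectangle into the tangent plane via the derivative of phi, evaluate the 2-form on the resulting parallelogram, and sum:

Code:
import numpy as np

def phi(u, v):                       # parameter map into R^3
    return np.array([u, v, u**2 + v**2])

def dphi(u, v):                      # columns are dphi/du, dphi/dv
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2*u, 2*v]])

def two_form(p, e1, e2):
    # the 2-form z dx^dy evaluated on the parallelogram spanned by e1, e2 at p
    return p[2] * (e1[0]*e2[1] - e1[1]*e2[0])

n = 200
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        u, v = (i + 0.5) * h, (j + 0.5) * h
        J = dphi(u, v)
        total += two_form(phi(u, v), J[:, 0] * h, J[:, 1] * h)
print(total)    # converges to the exact value 2/3 as n grows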
 
  • #65
Yep, these two ways of integrating forms are equivalent.
 
  • #66
garrett said:
Sure. Say we want to integrate a 2-form, \underrightarrow{\underrightarrow{F}} over a little patch, V, of a two dimensional manifold, with two patch coordinates (x^1,x^2) each going from 0 to 1 over the extent of the patch. The hard part is guessing a 1-form "anti-derivative" satisfying
\underrightarrow{\underrightarrow{F}} = \underrightarrow{d}\underrightarrow{\omega}
I say "a" anti-derivative rather than "the" because you can add a closed form to the anti-derivative and it will still be another good anti-derivative
\underrightarrow{\omega} \rightarrow \underrightarrow{\omega'} = \underrightarrow{\omega} + \underrightarrow{d} g

Once a good anti-derivative 1-form,
\underrightarrow{\omega} = \underrightarrow{dx^1} \omega_1(x^1,x^2) + \underrightarrow{dx^2} \omega_2(x^1,x^2)
is found, Stokes' theorem says you can just integrate it counter-clockwise along the one dimensional patch boundary curve and that will give you the integral of the 2-form over the patch. For the coordinate patch we chose,
\int_V \underrightarrow{\underrightarrow{F}} = \int_{\partial V} \underrightarrow{\omega} = \int_{(0,0)}^{(1,0)} \underrightarrow{dx^1} \omega_1 + \int_{(1,0)}^{(1,1)} \underrightarrow{dx^2} \omega_2 + \int_{(1,1)}^{(0,1)} \underrightarrow{dx^1} \omega_1 + \int_{(0,1)}^{(0,0)} \underrightarrow{dx^2} \omega_2
which we can evaluate by using Stokes theorem again for each leg around the curve, equivalent to the way we're used to.

For example, take the 2-form to be
\underrightarrow{\underrightarrow{F}} = \frac{1}{2} \underrightarrow{dx^i} \underrightarrow{dx^j} F_{ij} = \underrightarrow{dx^1} \underrightarrow{dx^2} x^1
A good anti-derivative is
\underrightarrow{\omega} = - \underrightarrow{dx^1} x^1 x^2
And integrating this around the patch gives one non-zero contribution:
\int_{(1,1)}^{(0,1)} - \underrightarrow{dx^1} x^1 x^2 = \frac{1}{2}
which equals the integral of our 2-form over our patch.
Thank you for taking the time to write this. It makes complete sense, except for the very last step, which I am not sure I follow. It looks as if it is simply using that the antiderivative of dx_1\, x_1\, x_2 is {1 \over 2} x_1^2 x_2, and if I was thinking in terms of "dumb physicist calculus", that's what I would do, given that x_2 is kept constant along this "line".

However, if I think in terms of the formalism of forms and the equation
\int_{V} \underrightarrow{d} \omega =\int_{\partial V} \omega = \omega(x_2) - \omega(x_1)
then it's not clear to me how to proceed. I mean that d( {1 \over 2}\, x_1^2 \,x_2 ) does not give dx_1 \,x_1\, x_2.
Am I supposed to use the fact that the value of x_2 is kept fixed to "set" dx_2 equal to zero here?

In other words, could you give me the explicit zero-form "omega" that you use in the last step (before even plugging in the boundary points)? I know that this is a trivial step but it still confuses me.

I keep thinking that when integrating over differential forms, one actually "feeds" vectors along the region of integration (a single vector along a line for a one-dimensional integration, pairs of vectors for an integration over a two-form, etc.) and I would see why in this case feeding a vector tangent to the line going from (1,1) to (0,1) to the one-form dx_2 would give zero. But I keep being told that one does not feed any vectors to the differential forms when one integrates forms.

Thank you again for your patience!

Patrick
 
  • #67
You are right that x^2 is constant, 1, along the relevant curve. That's pretty much all there is to it. Plug in 1 for x^2, as you thought, and then it works as you think for a 1D integral.

What you say about \underrightarrow{dx^2} being zero along the curve is fine. A slightly more precise way of saying this is that the integral of the \underrightarrow{dx^2} component of \underrightarrow{\omega} is zero along the curve. I suppose it doesn't hurt to think of it as feeding the curve's tangent vector to the form and getting zero.
 
  • #68
remember too, not only is it hard to find an antiderivative to use in calculating an integral, but sometimes they do not exist.

i.e. not all forms are "exact". exact forms, i.e. those with antiderivatives, are always "closed", i.e. d of them is zero, and the converse holds locally.
but not all forms are even closed.

exact one forms are those such that integration along a path depends only on the endpoints, i.e. these are "conservative". these are the ones stokes thm applies to.

but for closed forms, path integration is only a homology invariant, i.e. you get the same integral if you change the path by one which is the boundary of a parametrized surface.

but for general one forms, the path integral changes when the path changes in any way. stokes is useless on these. but my description above, involving feeding pairs of vectors into (in that case) a 2 form, still applies. in fact it is the definition of the integral.
 
  • #69
mathwonk said:
a 2 form assigns an area to a parallelogram. so parametrize your surface by a map from a rectangle. then subdivide the rectangle into little rectangles.

map each little rectangle into the tangent space to your surface by the derivative of your parameter map.

you get a finite family of little rectangles in a finite set of tangent spaces to your surface, which give a piecewise polygonal approximation to your surface.

the 2 form assigns to each of these parallelograms an area. add those up and that approximates the integral over your surface. keep doing it with finer and finer subdivisions of your parametrizing rectangle and it converges to the integral of the 2 form over the surface.

Thanks. Ok, that makes perfect sense to me (and as you pointed out, that works even if the antiderivative does not exist, i.e. the two-form integrated over is not exact).

This is exactly the way I have always pictured the integration of differential forms (i.e. as feeding vectors with components smaller and smaller until the sum converges), but I never understood why books don't ever seem to say this when they get to the point of actually evaluating integrals over differential forms; they simply state that the integrals are *defined* to be the "usual" expressions of elementary calculus. They need to introduce a *definition*.

That does not seem to be necessary to me. Proceeding the way Mathwonk did, one is naturally led from the integral of a two-form (say) to the usual expression for the integral as seen in elementary calculus. It follows, it seems to me, without the need to introduce a definition: an integral over an n-form simply corresponds to "feeding" it vectors to evaluate the area (or volume, etc.) spanned by the vectors, and subdividing until the sum converges. That books instead introduce a definition has always left me puzzled.




another point: I know that I have been scoffed at for using the expression "infinitesimal", but to me, an infinitesimal quantity is simply the subdivision one gets once one reaches the point where the integral converges. *That*'s what I call an infinitesimal. So the above procedure (feeding tangent vectors corresponding to finer and finer subdivisions until the integral converges) is what I have always meant by doing an integral over a two-form by feeding it vectors with "infinitesimal" components and summing over. But I have always been told that I was completely wrong in saying this. Now it seems to me that Mathwonk is describing the integration of a two-form exactly the way I was visualizing it.
Maybe it's because people think about something else when using the word "infinitesimals"? I have been trying for months to figure out what was wrong with my reasoning. And books were unhelpful because, when they get to the point of getting a number out of an integral over a differential form, they introduce a definition without ever explaining the process described by Mathwonk, which is the process that I had in mind.

Thanks for the comments.
 
  • #70
When thinking the standard way, I mainly just think of infinitesimals as a lazy way of dealing with tangent vectors, etc.

e.g. to be suggestive, I could use the notation:

P + v

for the tangent vector v at the point P. Then, things formally look like I'm using v as an "infinitesimal" and neglecting things at the second order. For example, I can "evaluate" a differentiable map f:

f(P + v) = f(P) + f'(P) v

and in this notation, it looks like an ordinary differential approximation.
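In fact you can make this bookkeeping literal with dual numbers, where the second-order terms are dropped by construction -- a rough sketch (the class and the test function are just illustrative choices):

Code:
class Dual:
    """A number P + v that neglects everything of second order in v."""
    def __init__(self, value, tangent):
        self.value, self.tangent = value, tangent
    def __add__(self, other):
        return Dual(self.value + other.value, self.tangent + other.tangent)
    def __mul__(self, other):
        # (a + bv)(c + dv) = ac + (ad + bc)v, since v*v is "neglected"
        return Dual(self.value * other.value,
                    self.value * other.tangent + self.tangent * other.value)

def f(x):                     # f(x) = x*x + x, so f'(x) = 2x + 1
    return x * x + x

y = f(Dual(3.0, 1.0))         # evaluate at the point 3 with unit tangent
print(y.value, y.tangent)     # 12.0 7.0, i.e. f(3) and f'(3)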
 
  • #71
i was also puzzled by books' descriptions, so i came up with the one above on my own while teaching it. of course it pulls back via parametrization to the one in the book, but it gives more intuitive insight.

and perhaps in practice, when one pulls it back via a local parametrization to an integral over a rectangle in R^2, i guess fubini's theorem reduces it to a pair of one variable integrals, which i suppose theoretically one can do by antidifferentiation.

in real life i have never had to actually do a concrete integral by parametrization. i am usually concerned with integrals of complex analytic 1 forms (hence closed and locally exact) over paths on a riemann surface, and one uses positivity properties to prove things about the matrix of integrals, such as riemann bilinear relations, that it has positive definite imaginary part, ...

the interesting thing is the interplay between the complex cohomology and the homology group of closed paths.

you might possibly like my book chapter on jacobian varieties and theta geometry (not so easy to find), or maybe my notes on the riemann roch theorem on my webpage. the proof there uses one forms and their integrals in an intrinsic way.
 
  • #73
actually my book chapter is easier to find than to afford:

Lectures on Riemann Surfaces (ISBN 9971509024), Cornalba, M.; Gomez-Mont, X.; and Verjovsky, A., eds. World Scientific Publishing Company, Singapore, 1989. Hardback, 704 pp. Proceedings of the College on Riemann Surfaces, International Centre for Theoretical Physics, Trieste, Italy, 9 Nov - 18 Dec 1987. One bookseller currently lists a used copy at US$ 174.00.

maybe i'll see if i have the right to post it on my own webpage.
 
  • #74
those were lectures to algebraic geometry grad students and physicists by the way, given at the International Center for Theoretical Physics in Trieste.
 
  • #75
mathwonk said:
come on guys. everyone has known the meaning of these objects for years, decades, centuries.

on functions d is the "gradient" or direction of greatest increase...

I'm confused by this statement. d of a function f is not necessarily the direction of greatest increase: rather df is the 1-form that takes a vector X (viewing it as a direction on the manifold) and returns the directional derivative of f in the direction of X, i.e. df(X)=Xf.

As for the direction of greatest increase, wouldn't it necessarily be a direction X_0 such that df(X_0) is greater than or equal to df(X) for all X in the tangent space at that point subject to some restriction like |X|=1?

Actually, though, the concept of the covariant derivative of an n-form is technically only as old as the concept of an n-form, which is itself only about a century and a quarter old. It certainly is a nice abstraction of several advanced calculus ideas, though, which are themselves several centuries old.
 
  • #76
Doodle Bob said:
I'm confused by this statement. d of a function f is not necessarily the direction of greatest increase: rather df is the 1-form that takes a vector X (viewing it as a direction on the manifold) and returns the directional derivative of f in the direction of X, i.e. df(X)=Xf.

As for the direction of greatest increase, wouldn't it necessarily be a direction X_0 such that df(X_0) is greater than or equal to df(X) for all X in the tangent space at that point?

Actually, though, the concept of the covariant derivative of an n-form is technically only as old as the concept of an n-form, which is itself only about a century and a quarter old. It certainly is a nice abstraction of several advanced calculus ideas, though, which are themselves several centuries old.

I *think* that Mathwonk (and some books) is implicitly identifying df (with components \partial_i f) with the gradient vector (with components g^{ji} \partial_i f). Which is why some books call "df" the "gradient". I have to say that this has confused me greatly for quite a while.
 
  • #77
nrqed said:
I *think* that Mathwonk (and some books) is implicitly identifying df (with components \partial_i f) with the gradient vector (with components g^{ji} \partial_i f). Which is why some books call "df" the "gradient". I have to say that this has confused me greatly for quite a while.

It would still be incorrect then. df should be a 1-form, i.e. it eats a vector and gives back a scalar. Please keep in mind that I am thoroughly a Riemannian geometer so when I see "vector" I think a linear combination of \{{\partial \over \partial x_i}: i=1,...,n \}.

Ah, but I see what MW is getting at: switch each {\partial \over \partial x_i} to dx_i and we do get df:

df=\Sigma_{i=1}^n {\partial f\over \partial x_i}dx_i
 
  • #78
One aspect of much of this theory that makes it difficult is that there really are things that are near impossible to visualize. Much of differential form theory is meant to generalize various aspects of 3-dimensional analytical geometry, such as grad, div, and all that.

But you can't see a 1-form. You can imagine consequences of one, though. A 1-form, for example, will have a large kernel (zero set). So, a global 1-form on a manifold is equivalent (up to scalar constant) to a subbundle of codimension 0 or 1 of the tangent bundle at each point. If you're studying a 3-dimensional manifold, this means that at each point of the manifold there are at least 2 directions in which the 1-form is zero (3 if the 1-form is identically zero at that point).

I am having a similar problem with my students right now, who are all middle school teachers. I am teaching them isometries of the plane, and they are uncomfortable with treating the transformations as objects in their own right, since you can't really draw a transformation like you can draw a line or a point. You can only draw the consequence of a transformation and imagine the rest.
 
  • #79
Doodle Bob said:
It would still be incorrect then. df should be a 1-form, i.e. it eats a vector and gives back a scalar. Please keep in mind that I am thoroughly a Riemannian geometer so when I see "vector" I think a linear combination of \{{\partial \over \partial x_i}: i=1,...,n \}.

Ah, but I see what MW is getting at: switch each {\partial \over \partial x_i} to dx_i and we do get df:

df=\Sigma_{i=1}^n {\partial f\over \partial x_i}dx_i
Well, that's what I meant by saying that the *components* were the expressions I gave. What I meant is that df is (\partial_i f) dx^i and that the gradient is g^{ij} (\partial_i f) \partial_j.
 
  • #80
in a situation where a metric < , > has been given, df_p(v) = <grad f_p, v>. in this situation, i.e. in all of riemannian geometry, every one form arises as dotting with some unique tangent vector field, so there is no great difference. (drat this keyboard)
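in coordinates this identification is just linear algebra. a small sketch (the metric and components are made up for illustration) showing that the gradient is g^{-1} applied to the components of df, and that df(v) = <grad f, v>:

Code:
import numpy as np

g = np.array([[2.0, 0.0],        # a metric at one point
              [0.0, 1.0]])
df = np.array([3.0, 4.0])        # components (df)_i = partial_i f

grad = np.linalg.solve(g, df)    # components g^{ij} (df)_j
v = np.array([1.0, 2.0])         # any tangent vector

print(df @ v)                    # df(v)         -> 11.0
print(grad @ g @ v)              # <grad f, v>   -> 11.0, the same number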

of course when i choose to differ with someone else, i pick on every technical detail in their sentences.

but i do not think there is a significant difference between the calculus of n forms and the calculus of 1, 2, 3 forms.
when i am arguing my controversial positions i choose to give myself great latitude.
 
  • #81
and i can see one forms, there's one right there: df.
 
  • #82
What about two forms in four dimensions?

d\mathbf{F} = 0

Good luck visualising that! :E
 
  • #83
what's the big deal? My thesis was on the structure of a mapping from the 15 dimensional moduli space R6 of genus 6 curves with double cover, to the 15 dimensional moduli space A5 of principally polarized abelian varieties of dimension 5.

I focused especially on the normal structure in A5 of the 12 dimensional locus J5 of jacobians of genus 5 curves, and the normal structure in R6 of the fibers of the map.

to understand such a 5 dimensional jacobian, i.e. one "point" of J5, one analyzes the singular curve of its 4 dimensional theta divisor.

I admit it seemed hopeless when I started, but after a while you get better at visualizing things.

the main methods are called
"section and projection", byt the great italians. i.e. slicing your high dimensional object into lower dimensional slices, and projecting it down onto a smaller space.
 
  • #84
haven't you ever tried to picture a 4 dimensional sphere? with time as a coordinate? i.e. as a dot expanding into a bubble that keeps growing and then begins shrinking again until it becomes a dot and vanishes again?

i use this all the time in my elementary lectures.

just remind people that it is not at all hard to escape from the classroom without injury or opening a door, just by going back in time until before the building was built, stepping outside the walls, then coming back to the present.

you might be surprised what you can visualize after a little effort. infinite dimensional space gives me a little more trouble.
 
  • #85
in 4 space just pretend a pair of rectangles is "disjoint", i.e. like the ones spanned by e1,e2 and e3,e4. take a 4 diml rectangular parallelepiped and look at one vertex. then take the 4 faces at that vertex in pairs. those 6 pairs span your vector space of 2 chains. a typical 2 form assigns an arbitrary number to each of those pairs.

or homogenize your spaces, i.e. consider instead of euclidean 4 space, the projective 3 space consisting of all lines through the origin of euclidean 4 space. then a 2 plane spanned by two lines through the origin of 4 space becomes a "line" in projective space spanned by two "points", each represented by a line in projective 3 space.

thus the vector space of all linear combinations of 2 planes through the origin of euclidean 4 space projectivizes to become the projective space P^5, and in it there is a hypersurface representing all lines in projective 3 space.

A 4 dimensional 2 form becomes a one form on this space of 2 cycles, via this "grassmannian embedding". so by viewing the 2 cycles as points of 6 dimensional space, 4 dimensional 2 forms become (6 dimensional) one forms?

how do you like them apples?
 
  • #86
another way to look at 2 forms, or any other forms, is as subdeterminants, or volumes of projections.

picture a 2 diml rectangle sitting in 4 space, and project it onto each of the 6 pairs of coordinate planes, and take the areas of the 6 projected rectangles. that gives you the values of the 6 basic 2 forms dxdy, dxdz, dxdw, dydz, dydw, dzdw on that rectangle. an arbitrary 2 form is a linear combination of those basic ones.

put another way, a rectangle in 4 space is a pair of 4 diml vectors, or a 4 by 2 matrix of numbers. then taking all 6 of the 2 by 2 subdeterminants is another way to view the areas of those 6 projections.

indeed if you let those 6 areas or 6 subdeterminants be themselves the coordinates of a vector in 6 space, then you have represented your rectangle in 4 space as a vector in 6 space, as i suggested above.

i.e. the grassmannian embedding just uses as coordinates the basic areas of projections. now vice versa, a vector in that 6 space determines a one form, since given any rectangle in 4 space, we can take its 6 projected areas and dot the resulting vector with the given vector, getting a number.

interestingly, not all such one forms arise dually to a single rectangle, i.e. sometimes you have to use linear combinations of rectangles. and a given vector does arise as coming from one rectangle if the coordinates satisfy a certain quadratic equation, and that is why the space of lines in projective 3 space embeds as a quadric hypersurface in projective 5 space this way. :biggrin:
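a sketch of this in code (the two spanning vectors are arbitrary choices of mine): compute the six projected areas of a rectangle in 4 space, i.e. the values of the six basic 2 forms on it, and verify the quadratic (pluecker) relation just mentioned:

Code:
import numpy as np
from itertools import combinations

a = np.array([1.0, 2.0, 0.0, -1.0])     # the two edges of the "rectangle"
b = np.array([0.0, 1.0, 3.0, 2.0])

p = {}
for i, j in combinations(range(4), 2):
    p[(i, j)] = a[i] * b[j] - a[j] * b[i]   # 2x2 subdeterminant = projected area
print(p)   # values of dxdy, dxdz, dxdw, dydz, dydw, dzdw on this rectangle

# the quadratic relation satisfied exactly when a point of 6 space
# comes from a single rectangle (a decomposable 2 form):
print(p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)])  # 0.0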
 
  • #87
try reading the last 10 pages or so of the graduate algebra notes on my webpage, math 845-3, pages 50-61, on alternating tensors and exterior products.
 
  • #88
mathwonk said:
come on guys. everyone has known the meaning of these objects for years, decades, centuries.

on functions d is the "gradient" or direction of greatest increase, on one forms d is the "curl" of a vector field or its tendency to rotate at a point, on 2 forms, d is the "divergence" of a vector field, or the extent to which it expands out from a point, or to which that point is a "source".

read the intro to maxwell's electricity and magnetism.

this is an example of the loss of understanding that comes with modern definitions.

we are all physicists here right?

Can you download Maxwell's Electricity and Magnetism somewhere?

I'm trying to show that *d*E corresponds to the divergence of a vector field.
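For what it's worth, here is the standard computation in R^3 with the Euclidean metric (not from Maxwell; sign conventions for * vary with signature and index ordering). For a 1-form E = E_x dx + E_y dy + E_z dz:

*E = E_x \, dy \wedge dz + E_y \, dz \wedge dx + E_z \, dx \wedge dy

d{*}E = \left( \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} \right) dx \wedge dy \wedge dz

*d{*}E = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} = \nabla \cdot \mathbf{E}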
 
  • #89
hope this appears

i do not know where to download maxwell, as i have owned a copy for many years.

the frustration in this whole communication, as i reread it after weeks away, is the back and forth between big points and tiny points.

in trying to explain something, one starts out big, i.e. what is the problem we are trying to solve, and what is the idea used to solve it?

then how do we make this idea precise, and how do we define it carefully and calculate with it?

in many of my posts i give a big idea explanation, and then someone comes along with a tiny objection to it.

people are very confused about the distinction between a vector and a covector, but if a dot product is given there is less difference.

i.e. given a vector, dotting with it is a covector. still this duality is the entire difference between forms and vectors, so probably should be respected.


cohomology was invented to give a topological version of differential forms, not before them, so it seems odd to use cohomology to explain forms, but maybe it does not hurt.


the whole subject is about the distinction between geometric objects, and functions on those objects.

then there is the calculus, bringing in the relation between locally defined objects and functions, and their integrals, or globally defined objects and functions.


take a smooth curve. it has two endpoints, say p and q, and hence has a "boundary" q-p, which is an algebraic gadget called a "0 cycle".

now take a function f on points, and define a coboundary df to be the function on curves whose value at a curve C is the value of f on the boundary of C, i.e. (df)(C) = f(q)-f(p).

this object df is to f dually as q-p is to C.

we can go up in dimension, and define the boundary of a surface, and the coboundary of a function on curves, to be the value of that function on the boundary of a surface.

notice that coboundaries always vanish on geometric objects that have empty boundaries. moreover, it is basic that the boundary of a boundary is always zero. e.g. the boundary of a disc is a circle, which has empty boundary. thus the vanishing of a coboundary on a geometric gadget is a necessary condition for that gadget to be itself a boundary.

e.g. in calculus, the unit circle is not a boundary in the punctured plane, because the angle form dtheta, defined in the punctured plane, does not vanish on the unit circle.

but i am getting ahead of my story. we want to calculate these things locally. so we approximate curves everywhere locally by tangent lines.

then we have a boundary operator on tangent vectors, and a coboundary operator on covectors.

upping the dimensions, we have p dimensional blocks of tangent vectors, and a boundary operator that sends them to sums of p-1 dimensional blocks. then we have (p-1) covectors and coboundaries of these.

then the whole calculus comes in and says that if we define the "p form" dw to be at each point the coboundary of the (p-1) covector at that point, then integrating gives us the global coboundary of the geometric kind first discussed.

i.e. there is a notion of global boundary and coboundary for smooth geometric objects. then there is a linear notion of these things at each tangent space. then stokes theorem says that integrating the local linear notion over the whole manifold gives the global notion.

this is not a tautology, since it involves limits of approximations, but it is no more scary than the FTC, which it reduces to by fubini.

now most of the questions here are entirely technical ones, about how the specific definitions and notations do or do not measure these things; actually the questions here mostly fail to even notice the actual content of these definitions.

forgive me if my responses are unuseful, but i tend to try always to point out what is being attempted by a definition, assuming that once that is understood well, the nuts and bolts of whether it succeeds is easy homework.

vector fields and dot products are a method of rendering one forms visible. i.e. a one form is indeed something that integrates against a smooth parametrized curve.

but to see them, physicists use vector fields, visible force fields, families of arrows drawn in space. then they dot the velocity vector of the parametrized curve against the arrow at each point, and then do the integral.

thus dotting against a visible arrow or vector gives a one form. this simple interplay explains why many people say "gradient" to refer to the vector direction of greatest increase of a function. namely, dotting this direction against a smooth path gives the directional derivative of that function in that direction, hence gives the value of the one form df on that velocity vector.

yes of course v differs technically from the action <v, >, but this only matters after one understands what the whole purpose of the action is. hence i try to explain that first, and the details later.

by the way maxwell calls it the "convergence" (rather, minus it) because he works with quaternions instead of vectors, so i^2 = -1 introduces a minus sign in his calculations.

a good place to look up things like star d star is springer's riemann surfaces, where he gives a complete introduction to the calculus of forms and metrics, and hodge operators...
 
  • #90
r16 said:
I downloaded and read the passage out of that book, however I feel I comprehended very little of it except the analogy to the trails and the circuits. I have practically no experience in topology, group theory, or Lie algebras, so I was quite confused.

Several texts have good explanations / visualizations of the fundamental theorem of exterior calculus:

\int_R d \alpha = \int_{\partial R} \alpha

with examination of the special case

\int^b_a df = f(b) - f(a)

not to nitpick, but that is not quite right. if you are integrating over a curve C, the boundary of the curve will be the endpoints a and b. It should be:

\int_C df = \int_{a,b} f = f(b) - f(a)

...and you have the fundamental theorem of calculus. notice the last integral has no "df"
 