How Can I Visualize the Exterior Derivative 'd' in Differential Geometry?

  • Thread starter: r16
Summary:
The discussion centers on the challenge of visualizing the exterior derivative 'd' in differential geometry, particularly for visual learners. Participants suggest using the coboundary operator from algebraic topology as a more accessible analogy, emphasizing the importance of understanding differential forms through simple cases. The exterior derivative is described as an operator that maps p-forms to (p+1)-forms, with interpretations related to gradients, curls, and divergences in vector fields. There is a consensus that while geometric intuition is valuable, grasping the algebraic connections is crucial for a deeper understanding of differential forms and their applications. Ultimately, the conversation highlights the complexities of visualizing 'd' and the need for clarity in differentiating between forms and their geometric interpretations.
  • #31
well that first definition, if you study it, is merely the adjoint of the boundary operator. i.e. to apply dw to a block spanned by three vectors say, you consider the boundary of that block, spanned by two of them at a time, and apply w to each of those, but with a minus sign to give the right orientation of each face.

so this definition essentially forces stokes theorem to be true.

it helps if you know some algebraic topology, like boundaries and coboundaries of chains and cochains.
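
here is a quick numerical illustration of that picture in the plane (my own made-up 1-form and block, just a sketch): the value of dw on a small block spanned by two vectors agrees with the sum of w over the oriented edges of that block.

import numpy as np
P = lambda x, y: -y**2                     # an arbitrary 1-form w = P dx + Q dy
Q = lambda x, y: x * y
def w_along(pts):
    # line integral of w along a polyline, midpoint rule on each straight edge
    total = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        mx, my = 0.5 * (a + b)
        total += P(mx, my) * (b - a)[0] + Q(mx, my) * (b - a)[1]
    return total
def dw(x, y, u, v, h=1e-5):
    # dw(u, v) = (dQ/dx - dP/dy) * (u_x v_y - u_y v_x), by finite differences
    dQdx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dPdy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    return (dQdx - dPdy) * (u[0] * v[1] - u[1] * v[0])
p = np.array([1.0, 2.0])
u = 1e-3 * np.array([1.0, 0.3])
v = 1e-3 * np.array([-0.2, 1.0])
boundary = [p, p + u, p + u + v, p + v, p]       # the four oriented edges of the block
print(w_along(boundary), dw(p[0], p[1], u, v))   # the two numbers agree to high order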
 
  • #32
don't these people ever explain what they are doing? i would think bachman would in his geometric approach.
 
  • #33
I think it's important to have a definition independent of Stokes' theorem. Defining something to suit Stokes' theorem is somewhat circular. Defining the exterior derivative to be the wedge product of the gradient and the form is a little more straightforward, as the entity exists in its own right rather than having to be coupled with an adjoint or some such thing.

Edit:

I suppose an analogy might be, for instance, how you would define a right-angled triangle. You could define it to be a triangle whose sides a, b and c obey the rule a^2 + b^2 = c^2, Pythagoras' theorem, but this would be a rather circular rule. One could imagine that a student who learned to define a right triangle in this way might never realize that one of the angles is 90 degrees. A right triangle should exist independent of one's knowledge of Pythagoras' theorem.
 
Last edited:
  • #34
ObsessiveMathsFreak said:
as well as the fact that this "d" means something completely different to those in "dx" and "dy"
Nope; that's the same d! (Any scalar function, such as x, is a 0-form)


A right triangle should exist independent of one's knowledge of Pythagoras' theorem.
A right triangle should exist (in Euclidean geometry) independent of one's knowledge of right triangles. :-p


(I'm going to call a triangle that satisfies the Pythagorean identity a "Pythagorean triangle", to make the following easier to say)

The three are all equivalent:
(1) You are taught about Pythagorean triangles, and it is later shown that a triangle is Pythagorean iff it has a right angle.

(2) You are taught about right triangles, and it is later shown that a triangle is right iff it satisfies the Pythagorean identity.

(3) You are taught about right triangles and about Pythagorean triangles, and it is later shown that a triangle is right iff it is Pythagorean.

The only reason to prefer one of these over the others is aesthetic; maybe you think (2) will be easier for the student to follow, or maybe you think (3) will make the proofs more clear, or maybe you think that satisfying the Pythagorean identity is very important and you want to emphasize that by using (1).


IMHO there is a lot of value in defining something to have the properties you want it to have... rather than defining it by a calculation and then trying to prove the calculation has the properties you want it to have. (Despite the fact that it usually requires a theorem to prove that the thing you defined really does exist)



If you're interested in an algebraic perspective, there's something called a derivation that encapsulates the most important properties we associate with derivatives: it's a linear map that satisfies:

D(ab) = a(Db) + (Da)b

(where the multiplications involved are whatever is appropriate for the structures of interest)


In the current situation, the exterior derivative d is simply the (most general) derivation that satisfies d(dx) = 0 for all x, and has df being the ordinary differential when f is a 0-form.
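
For instance, using the graded version of that product rule for forms (d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^p \alpha \wedge d\beta when \alpha is a p-form), those two requirements already pin down the usual coordinate formula. A quick check in the plane:

d(f\,dx) = df \wedge dx + f\, d(dx) = df \wedge dx = \left( \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy \right) \wedge dx = -\frac{\partial f}{\partial y}\, dx \wedge dy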
 
Last edited:
  • #35
Well I suppose it's a matter of personal preference. I prefer to define things independently and then show how unexpected relationships emerge from simple definitions. That way, you don't really feel like you're hemming yourself in.

Edit:
On an aside, differential forms notation is terrible. Everything is just so lax!
 
Last edited:
  • #36
ObsessiveMathsFreak said:
My definition up to this point has been Bachman's. Namely;

d\omega(V^1, \ldots,V^{n+1}) = \sum_{i=1}^{n+1} (-1)^{i+1} \nabla_{V^i} \omega(V^1, \ldots, V^{i-1},V^{i+1}, \ldots ,V^{n+1})

Which wasn't very helpful. I didn't find "d" very helpful either as it didn't really make clear that the order of the form was being increased, as well as the fact that this "d" means something completely different to those in "dx" and "dy". With d\omega = \nabla\wedge\omega you can see where the additional wedge product is coming from in things like \nabla\wedge(f\,dx) \equiv d(f\,dx) = df \wedge dx

I think that the definition d\omega = \nabla\wedge\omega
is (almost) perfectly fine. That's the way *I* think about it anyway.
(only one thing, though: I find it misleading to use the nabla symbol there. Normally, we use nabla to represent the gradient operator which is not d. For example, for "f" a scalar function, df is not the gradient \nabla f that we learn about in introductory calculus. I think a clearer expression is to simply use dx^i \partial_i for "d". Then applied to any differential form, d \wedge \omega works. For a visual interpretation, applying d basically gives the "boundary" of the form. Thinking of a one-form as a series of surfaces, if the surfaces never terminate (because they extend to infinity or they close up on themselves) then applying the exterior derivative gives zero.)
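
(As a sanity check that the two definitions quoted above agree: specializing Bachman's sum to a 1-form and to the constant coordinate fields in the plane, so that the \nabla_{V^i} are just directional derivatives, it reads

d\omega(\partial_x, \partial_y) = \partial_x\big(\omega(\partial_y)\big) - \partial_y\big(\omega(\partial_x)\big)

and for \omega = f\,dx this gives 0 - \partial_y f, which is exactly (df \wedge dx)(\partial_x, \partial_y).)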
 
  • #37
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.

You could say, put an accent onto a form, like \acute{\omega} instead of just plain \omega. Then the exterior derivative would be \acute{\nabla}\wedge\acute{\omega}.

I started doing this a while ago, as the regular notation was driving me ballistic, especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.
 
  • #38
ObsessiveMathsFreak said:
Well I suppose it's a matter of personal preference. I prefer to define things independently and then show how unexpected relationships emerge from simple definitions. That way, you don't really feel like you're hemming yourself in.

Edit:
On an aside, differential forms notation is terrible. Everything is just so lax!
I agree with you about the notation!
What is your background, by the way?
I was trained as a physicist in phenomenology (not as a mathematical physicist), so all this stuff is pretty new to me. It's difficult not necessarily because it's new but because I have to "unlearn" a lot of things I had learned before (for example, some things I used to think of as vectors are actually differential forms, etc etc).

The main difficulties I have encountered are twofold.
First, the lack of consistency in what people call what (coming from mathematicians, that has surprised me). One example is the meaning of "dx". I keep hearing that infinitesimals don't exist and that whenever I see this symbol it is a differential form. And yet, whenever books define integrations over differential forms, they always get to the point where they define an integration over differential forms as an integral in the "usual" sense of elementary calculus. These expressions *do* contain the symbols dx, dy etc. So what do they mean *there*, if not "infinitesimals"!

Another example: I have often seen df called the gradient. It confused me immensely, until I read a post here on the forums that clarified this: df is NOT the gradient we learn about in elementary calculus. This has been further clarified for me by reading Frankel, where he emphasizes that on page 41.

My second source of difficulty is finding explicit examples taken from physics, with everything shown clearly. And I mean something as simple as ordinary mechanics of a point particle (no need to jump to relativistic systems or curved manifolds right away!). If I am supposed to think of the momentum of a particle as a covector, I would like to see the reasoning behind this, to see why the usual idea of a vector does not work, and what the metric is in that context, etc.

Anyway, just my two cents
 
Last edited:
  • #39
ObsessiveMathsFreak said:
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.

You could say, put an accent onto a form, like \acute{\omega} instead of just plain \omega. Then the exterior derivative would be \acute{\nabla}\wedge\acute{\omega}.

I started doing this a while ago, as the regular notation was driving me ballistic, especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.

I agree with you. Usually it is not too bad because books usually use lower case greek letters for forms and lower case latin letters for vectors. But the case of dx vs dx and so on does bother me quite a bit. I have objected to that before but the reaction I have had has usually been "but there is no such thing as infinitesimals! That's all archaic. The modern view is that dx, etc are one-forms!" Which has confused me enormously since integrations over forms are always, in the end, identified with integrals in the "usual" sense which *do* contain products of dx, dy, etc. And nobody seems to want to talk about *those*, which are clearly not differential forms.


And when a physicist is confused about all those issues, the assumption from the more mathematically savvy people too often seems to be that it's because the physicist is being narrow-minded and clinging to old ideas, instead of realizing that the notation, the vagueness of some concepts, and the lack of explicit examples make things quite difficult to learn.
 
  • #40
On notation, I agree that forms need a mark that should also denote their order. I usually write underrightarrows, like this for a 2-form:
\underrightarrow{\underrightarrow{F}} = \frac{1}{2}F_{ij} \underrightarrow{dx^i} \underrightarrow{dx^j}
This works great, and there is a similar notation for vectors,
\vec{v} = v^i \vec{\partial_i}
Also, I don't write the wedge, but assume that, algebraically, 1-forms always anti-commute. This obviates the problem with the exterior derivative, which is simply
\underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}
and works on forms as
\underrightarrow{d} \underrightarrow{f}

There's a lot more on this notation here on my wiki:
http://deferentialgeometry.org/
as well as on another PF thread.
 
  • #41
nrqed said:
I have objected to that before but the reaction I have had has usually been "but there is no such thing as infinitesimals! That's all archaic. The modern view is that dx, etc are one-forms!" Which has confused me enormously since integrations over forms are always, in the end, identified with integrals in the "usual" sense which *do* contain products of dx, dy, etc. And nobody seems to want to talk about *those*, which are clearly not differential forms.

I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

\int_{\sigma} \acute{w}

...is not a well defined quantity, because you haven't specified any orientation!

\int_{\sigma} \acute{w} d \sigma

Is well defined, because d\sigma, though abstract, still means that you've given the integral a measure. As you say, it's all moot anyway as to get a final answer you must include a measure, or "infinitesimal" of some kind, if only to be able to perform the integration at all! By itself, the form does not specify a measure.

I'm an applied mathematician by the way.

Edit:
Actually, I think the above should be more correctly written as perhaps:

\int_{ \sigma} \acute{w}(T_{\sigma}) d \sigma

Where T_{\sigma} denotes the tangent vectors with respect to the measure \sigma, to which of course the form must be applied in order for the form to mean anything.

Actually, on top of that I really think the point at which the form is evaluated should be included too. So
\acute{\omega} \equiv \acute{\omega}(P,V^1,\ldots,V^n)
But I digress.

And perhaps this thread needs a fork.
 
Last edited:
  • #42
garrett said:
On notation, I agree that forms need a mark that should also denote their order. I usually write underrightarrows, like this for a 2-form:
\underrightarrow{\underrightarrow{F}} = \frac{1}{2}F_{ij} \underrightarrow{dx^i} \underrightarrow{dx^j}
This works great, and there is a similar notation for vectors,
\vec{v} = v^i \vec{\partial_i}
Also, I don't write the wedge, but assume that, algebraically, 1-forms always anti-commute. This obviates the problem with the exterior derivative, which is simply
\underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}
and works on forms as
\underrightarrow{d} \underrightarrow{f}

There's a lot more on this notation here on my wiki:
http://deferentialgeometry.org/
as well as on another PF thread.
EDIT: A typo with under and over arrows was corrected.


I have to say that I like this notation very much:smile:
(I would personally still like to see the wedge products shown explicitly but I realize it's only because I am not completely fluent with all this stuff and that they are not necessary).

Garrett, I am still a bit confused by the fact that \underrightarrow{\omega} {\vec v} = - {\vec v} \underrightarrow{\omega}
if I understood you correctly from the other thread. Could you tell me where Frankel discusses this (or Baez, or Felsager or Nakahara)? I need to assimilate this.

Thanks!
 
  • #43
ObsessiveMathsFreak said:
I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

\int_{\partial \sigma} \acute{w}

...is not a well defined quantity, because you haven't specified any orientation!

\int_{\partial \sigma} \acute{w} d \sigma

Is well defined, because d\sigma, though abstract, still means that you've given the integral a measure. As you say, it's all moot anyway as to get a final answer you must include a measure, or "infinitesimal" of some kind, if only to be able to perform the integration at all! By itself, the form does not specify a measure.

I'm an applied mathematician by the way.

Edit:
Actually, I think the above should be more correctly written as perhaps:

\int_{\partial \sigma} \acute{w}(T_{\sigma}) d \sigma

Where T_{\sigma} denotes the tangent vectors with respect to the measure \sigma, to which of course the form must be applied in order to mean anything.

Actually, on top of that I really think the point at which the form is evaluated should be included too. So
\acute{\omega} \equiv \acute{\omega}(P,V^1,\ldots,V^n)
But I digress.

And perhaps this thread needs a fork.
I think that our views are convergent. The question is then what you mean by dsigma. It's clearly not a differential form here (right?). Which then shows how confusing the notation can be, as you pointed out (because I have had the feeling on these boards that whatever was written as d"something" *had* to be a differential form. That did not make sense to me but I have been chastised for this :wink: ).

So what do you mean by dsigma? I mean, there are vectors, there are differential forms, and we can "feed" vectors to one-forms or vice-versa to get numbers. And if there is the additional structure of a metric, more can be done. So where does dsigma stand in this? Or do you see it as something completely different?

the way *I* think about this (but I have had a hard time getting people to either agree or to tell me it's wrong and why it's wrong) is that there is a differential form we are integrating over. Then, in order to actually get an integral in the conventional sense, one must "feed" a vector to that one-form. The vector we feed is actually of the form dx^i \partial_i, i.e. it's a vector with components being *infinitesimals* in the usual sense.

But I think this is too simple-minded, although I don't know what's wrong with it. And I don't know why books have to *define* integrals over forms as integrals in the usual sense instead of simply feeding "infinitesimal" vectors.
 
  • #44
nrqed said:
So what do you mean by dsigma?

Basically what I mean is that d\sigma is the variable, or variables, of integration, i.e. d\sigma \equiv dx_1dx_2 \ldots dx_n, in the sense we are normally used to. So one example of d\sigma would be dV for volume.

It should be mentioned that on its own, d\sigma is rather meaningless. Just as \int_{\sigma} is meaningless. The two must be combined to mean anything. \int_{\sigma} \ldots d\sigma. When you are integrating you must give variables of integration and boundaries (limits) if you want to get an answer.

Some authors write integrals like this \int_{\sigma} d\sigma f(\sigma), placing the variable of integration and the limits right next to each other to emphasize their closeness. So they would write \int_0^1 f(x) dx \equiv \int_0^1 dx f(x).

I've even seen some leave out the "d" altogether and place the variable of integration in the limits, like this.
\int_{x=0}^{x=1} f(x)

nrqed said:
the way *I* think about this (but I have had a hard time getting people to either agree or to tell me it's wrong and why it's wrong) is that there is a differential form we are integrating over. Then, in order to actually get an integral in the conventional sense, one must "feed" a vector to that one-form. The vector we feed is actually of the form dx^i \partial_i, i.e. it's a vector with components being *infinitesimals* in the usual sense.

Hmmm... not too sure what you're getting at, but my current understanding is that the forms are being "fed" normal vectors, not infinitesimal ones. When integrating, the vectors they are fed are derivatives, but they are nonetheless regular vectors. If you're asking where the variable of integration, i.e. dx_i, comes from, the answer is, and this is what infuriates me, that you have to throw it in yourself. There's no formality, and it's basically up in the air until you decide to chuck it in.

Lax! Lax I tell you!
 
Last edited:
  • #45
ObsessiveMathsFreak said:
On an aside, differential forms notation is terrible. Everything is just so lax!
I agree!


nrqed said:
Normally, we use nabla to represent the gradient operator which is not d.
The funny thing is, there are two different usages of the nabla operator. In Spivak, volume I, he defines:

\nabla = \sum_{i=1}^n D_i \frac{\partial}{\partial x^i}

and that \mathop{\mathrm{grad}} f = \nabla f

On the other hand, in volume II, we have the (Koszul) connection for which \nabla T is, by definition, the map X \rightarrow \nabla_X T. In particular, for a scalar field, we have \nabla_X f = X(f) so that \nabla f = df.


The funny thing is -- when I was taking multivariable calculus, I got into the habit of writing my vectors as column vectors, and my gradients as row vectors... so in effect, what I learned as the gradient was a 1-form!


nrqed said:
For a visual interpretation, applying d basically gives the "boundary" of the form. Thinking of a one-form as a series of surfaces, if the surfaces never terminate (because they extend to infinity or they close up on themselves) then applying the exterior derivative gives zero.)
There is supposed to be a duality between the exterior derivative and the boundary operator. (In fact, the exterior derivative is also called a "coboundary operator") But I think you're taking it a little too literally! I like to try and push the picture that forms "measure" things, and the (n+1)-form dw measures an (n+1)-dimensional region by applying w to the boundary of the region.


ObsessiveMathsFreak said:
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.
Using the Greek alphabet, instead of the Roman one, isn't enough? :smile:


ObsessiveMathsFreak said:
especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.
How can they get mixed up?


nrqed said:
And yet, whenever books define integrations over differential forms, they always get to the point where they define an integration over differential forms as an integral in the "usual" sense of elementary calculus. These expressions *do* contain the symbols dx, dy etc. So what do they mean *there*, if not "infinitesimals"!
The usual sense of elementary calculus doesn't have infinitesimals either. Depending on the context, it might be a formal symbol indicating with respect to which variable integration is to be performed, or it might be denoting which measure is to be used... but certainly not an infinitesimal.

Even in nonstandard analysis, which does have infinitesimals, dx is still not used to denote an infinitesimal. (Though you would use honest-to-goodness nonzero infinitesimals to actually compute the integral)


ObsessiveMathsFreak said:
I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

\int_{\sigma} \acute{w}

...is not a well defined quantity, because you haven't specified any orientation!

...

By itself, the form does not specify a measure.
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.

If c is our surface, then by definition:

\int_c \omega = \int_{[0, 1]^n} \omega\left( \frac{\partial c}{\partial x^1}, \cdots, \frac{\partial c}{\partial x^n}\right) \, dV

where dV is the usual volume form on R^n. This is, of course, also equal to

\int_{[0, 1]^n} c^*(\omega)

on the parameter space, and there we could just take the obvious correspondence between n-forms and measures.


The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.
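
To make that definition concrete, here is a throwaway numerical version of the n = 1 case (my own example 1-form and curve, nothing canonical about them):

import numpy as np
def w(pt, vec):                             # the 1-form w = -y dx + x dy, fed a point and a vector
    x, y = pt
    return -y * vec[0] + x * vec[1]
def c(t):                                   # the "surface": a curve [0, 1] -> R^2
    return np.array([np.cos(np.pi * t), np.sin(np.pi * t)])
def dc(t, h=1e-6):                          # dc/dt, the tangent vector that gets fed to w
    return (c(t + h) - c(t - h)) / (2 * h)
N = 4000
ts = (np.arange(N) + 0.5) / N               # midpoints of [0, 1] split into N pieces
print(sum(w(c(t), dc(t)) for t in ts) / N)  # ~ pi, the integral of w over c

Reparametrizing c without reversing its orientation leaves the printed value unchanged, which is the point of the definition.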
 
Last edited:
  • #46
Hurkyl said:
The usual sense of elementary calculus doesn't have infinitesimals either. Depending on the context, it might be a formal symbol indicating with respect to which variable integration is to be performed, or it might be denoting which measure is to be used... but certainly not an infinitesimal.

Even in nonstandard analysis, which does have infinitesimals, dx is still not used to denote an infinitesimal. (Though you would use honest-to-goodness nonzero infinitesimals to actually compute the integral)

My apologies. I realize that I am missing something here (and the more I ask questions the grumpier I make people!) so if this is too dumb a question ignore it (instead of getting grumpier :-) ).
I have to admit that I don't know what a "measure" is.
What *I* mean by "infinitesimals" is through the usual Riemann sum definition
\int f(x)\, dx = \lim_{\Delta x \rightarrow 0} \sum f(x) \, \Delta x
(you know what I mean).

This is what I have in mind when I call the dx on the left side an infinitesimal. And of course, this "dx" is in the general sense, it may have nothing to do with coordinates. For example I might be calculating the electric potential due to some charge distribution in which case dx = dq.

I know that thinking of these as "infinitesimals" is considered very bad and uneducated. But if I have a continuous charge distribution and I am calculating the electric potential, say, I find it useful to think of an infinitesimal charge, because then I can use the equation for the electric potential of a point charge and then sum over all those infinitesimal point charges. If this is totally wrong then I would be really interested in learning how I should go about setting up the same problem without ever thinking of infinitesimal charges, using the language of "measures" instead.
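
To make concrete the kind of computation I have in mind (all the numbers here are made up), this is the "sum the point-charge potentials of the small charge elements" recipe for a uniformly charged rod:

import numpy as np
k = 8.99e9                                   # Coulomb constant, N m^2/C^2
lam, L = 1e-9, 1.0                           # linear charge density (C/m) and rod length (m)
P = (0.0, 0.5)                               # field point, 0.5 m above one end of the rod
N = 100000                                   # number of small charge elements
xs = (np.arange(N) + 0.5) * (L / N)          # midpoints of the segments along the rod
dq = lam * (L / N)                           # the "infinitesimal" charge of each segment
r = np.sqrt((P[0] - xs)**2 + P[1]**2)        # distance from each segment to P
print(np.sum(k * dq / r))                    # ~ k*lam*ln((L + sqrt(L^2 + 0.25))/0.5), about 13 V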

I am not being flippant at all; I admit my ignorance and lack of sophistication. I would really like to understand what a "measure" is and to see the correct way to think about a specific physical problem like the above one (or finding the E field of a continuous charge distribution, etc).

Regards

Patrick
 
Last edited:
  • #47
nrqed said:
Garrett, I am still a bit confused by the fact that \underrightarrow{\omega} {\vec v} = - {\vec v} \underrightarrow{\omega}
if I understood you correctly from the other thread. Could you tell me where Frankel discusses this (or Baez, or Felsager or Nakahara)? I need to assimilate this.

They don't discuss it. And, really, I've never had a good reason to write a vector operating on a form from the right. But, if you do want to, that's the sign change you'd have to give it.

Frankel and others write the same inner product between a vector and form as
\mathbf{i}_v \omega
It's really just a matter of notation.
 
  • #48
This is hard to believe until you play with it, but in differential geometry integration really is nothing but the evaluation of Stokes theorem:
\int_{V} \underrightarrow{d} \underbar{\omega} = \int_{\partial V} \underbar{\omega}
Think about how that works in one dimension and you'll see it's the same as the usual notion of integration. :) First you find the anti-derivative, then evaluate it at the boundary.
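
Spelled out in one dimension: to integrate g(x)\,dx over V = [a,b], write g(x)\,dx = \underrightarrow{d} F with F an anti-derivative of g; then

\int_{[a,b]} \underrightarrow{d} F = \int_{\partial [a,b]} F = F(b) - F(a)

with the two signs coming from the orientation of the boundary points.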
 
  • #49
It was a light-hearted grumpy face, not a grumpy grumpy. :smile:


When we're doing a Riemann integral, the "right" imagery is that:

"I've divided my region into sufficiently small cubes, computed a value for each cube, and added them up to get something close enough to the true answer".

Even if we're doing nonstandard analysis, this imagery is still the more accurate one -- it's just that we have infinitesimal numbers to use (which are automatically "sufficiently small"), and are capable of adding transfinitely many of them, getting something infinitesimally close to the true answer.


The way infinitesimals are usually imagined is just a sloppy way of imagining the above -- we want to invoke something so small that it will automatically be "sufficiently close", and then promptly forget about the approximations and imagine we're computing an exact value on each cube, can add all the exact values, and that the result is exactly the answer.


I've seen someone suggest a different algebraic approach to an integral that might be more appropriate for physicists, that's based on the mean value theorem. I think it works out to the following:

For any "integrable" function f, we require that for any a < b < c:

I_a^b(f) + I_b^c(f) = I_a^c(f)

and

\min_{x \in [a, b]} f(x) \leq \frac{1}{b-a} I_a^b(f) \leq \max_{x \in [a, b]} f(x)

These axioms are equivalent to Riemann integration:

I_a^b(f) = \int_a^b f(x) \, dx

And you could imagine the whole Riemann limit business as simply being a calculational tool that uses the above axioms to actually "compute" the value. (at least, if you count taking a limit as a "computation")
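
A quick numerical sanity check of those two axioms, with an arbitrary sample function and midpoint Riemann sums standing in for I:

import numpy as np
def I(f, a, b, N=200000):
    xs = a + (np.arange(N) + 0.5) * (b - a) / N      # midpoint Riemann sum for I_a^b(f)
    return np.sum(f(xs)) * (b - a) / N
f = lambda x: np.exp(-x) * np.sin(3 * x)             # arbitrary sample function
a, b, c = 0.0, 1.0, 2.5
print(I(f, a, b) + I(f, b, c), I(f, a, c))           # additivity: the two numbers agree
grid = np.linspace(a, b, 10001)
print(f(grid).min() <= I(f, a, b) / (b - a) <= f(grid).max())   # mean value bound: True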

(Hey! This goes back to the "define things in terms of the properties it should have, then figure out how to calculate" vs. the "define things via a calculation, then figure out what properties it has" debate. :smile:)



So, for your electric potential problem, I guess this suggests that you should imagine this:

You make the guess that the potential should be, say, the integral of f(x) over your region. You then observe that:

(1) The contribution to potential from two disjoint regions is simply added together.
(2) The average contribution to the potential from any particular region lies between the two extremes of f(x).

Therefore, that integral computes the potential. (2) is intuitively obvious if you have the right f(x), but I don't know how easy it would be to check rigorously. This check can probably be made easier.


To be honest, I haven't really tried thinking much this way. (Can you tell? :wink:) I'm content with the "sufficiently close" picture.
 
Last edited:
  • #50
the definition of dw is the adjoint of the boundary operator pointwise. but the stokes theorem is the global adjointness.

you have to do some thinking about it yourself.
 
  • #51
Hurkyl said:
It was a light-hearted grumpy face, not a grumpy grumpy. :smile:
ok! I am really glad to hear that!


Ok... This language I can relate to. It makes sense to me. (I guess that I use the word "infinitesimal" because I imagine using some average value in a region and adding the results from all the regions to get an approximate answer. But then I imagine going back, subdividing into smaller regions, using an average value in those regions, doing the sum, and keeping going like this to see if the sum converges to a certain value. In that limit I imagine the regions becoming "infinitesimally small".) Is it wrong to call them infinitesimals because one never really takes the exact limit as the regions vanish?

In any case, in the language used above, what is a "measure"?

Regards

Patrick
 
  • #52
A measure is something that tells you how big (measurable) subsets of your space are. For a plain vanilla measure, you have:

The size of any (measurable) subset is nonnegative.
The size of the whole is the sum of the sizes of its parts. (For up to countably many parts)

To integrate something with respect to a measure, instead of partitioning the domain we partition the range! The picture is:

We divide R into sufficiently small intervals. For each interval, we compute the size of the set {x | f(x) is in our interval}, and multiply by a number in our interval. Add them all up, and we get something sufficiently close to the true value.
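
For instance, a throwaway numerical version of that picture (Lebesgue measure on [0, 2] crudely approximated by counting sample points):

import numpy as np
f = lambda x: x**2                       # integrate f over [0, 2]; the exact answer is 8/3
xs = np.linspace(0.0, 2.0, 100001)       # sample points, each standing in for measure 2/len(xs)
fx = f(xs)
pt_measure = 2.0 / len(xs)
levels = np.linspace(fx.min(), fx.max(), 2001)    # partition of the *range* of f
total = 0.0
for lo, hi in zip(levels[:-1], levels[1:]):
    size = np.count_nonzero((fx >= lo) & (fx < hi)) * pt_measure   # size of {x : f(x) in [lo, hi)}
    total += size * lo                                             # times a number in that interval
print(total)                             # ~ 8/3, up to the coarseness of the two partitions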
 
  • #53
Hurkyl said:
Using the Greek alphabet, instead of the Roman one, isn't enough? :smile:
In my case, I've been using the greek alphabet in mathematics for so long that there is really no distinction. In fact, a lot of greek letters get used more than latin ones. I'm probably not alone here! I get the feeling this is some kind of carry over from the days when, perhaps, greek letters were harder to typeset.

Hurkyl said:
How can they get mixed up?
One is a form, one is a variable of integration. It's a pretty big difference.

Hurkyl said:
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.

Surfaces don't always come with parameterisations, and the notation \int_{\sigma} \omega implies that \sigma is a surface with a parametrization as yet unspecified. It could be \sigma \equiv \{ (x,y,z) : x^2 +y^2 +z^2 = r^2 \}, which is a well defined surface without a parametrisation.

Hurkyl said:
The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.

That's my point entirely. \int_{\sigma} \omega is simply a lax way of specifying something. There's no parameterisation, but in order to actually get down to it and evaluate the integral, you must specify a parameterisation. One can talk about orientation as well, but that's effectively a change in the parameterisation, or pull-back if you will.

This laxity really comes into focus when you come to the presentation of Stokes's Theorem, namely;
\int_{\sigma} d\omega = \int_{\partial \sigma} \omega
This notation is a potential minefield. Example:
\sigma \equiv \{ (x,y) : x^2 + y^2 \leq 1 \}
\partial\sigma \equiv \{ (x,y) : x^2 + y^2 = 1 \}

But of course, two people can evaluate each integral and come up with answers that differ in sign. One might say that the parameterisation of one surface determines that of the other, but hold on! Taken on its own, each integral leaves one free to specify a parameterisation. If I give each side of the equation to two people, assuming they choose random orientations, there is only a one in two chance that their answers will agree, and only a one in four chance that I will obtain answers congruent with my own.
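
Here is that coin toss in a few lines, with an arbitrarily chosen \omega = x\,dy on the circle above:

import numpy as np
def boundary_integral(param, N=40000):            # integral of x dy over a parametrized circle
    total = 0.0
    for k in range(N):
        t = 2 * np.pi * (k + 0.5) / N
        x = param(t)[0]
        dy_dt = (param(t + 1e-6)[1] - param(t - 1e-6)[1]) / 2e-6
        total += x * dy_dt * (2 * np.pi / N)
    return total
print(boundary_integral(lambda t: (np.cos(t), np.sin(t))))     # ~ +pi (counterclockwise)
print(boundary_integral(lambda t: (np.cos(t), -np.sin(t))))    # ~ -pi (clockwise)

Both are legitimate parametrizations of the boundary circle; only the first matches \int_{\sigma} d\omega = \pi for this \omega.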

In short, the essential problem here is that, using standard notation, a computer will be unable to evaluate the integral of a form. If you wish it to do so, then you must give a surface complete with parameterisation; that is, you must ask it to evaluate;
\int_{\sigma} \omega d\sigma
Or, more correctly;
\int_{\phi(X)} \omega(D_X \phi(X)) dX = \int_{X} \phi^*\omega dX

Where \phi parameterises the surface (so \phi^*\omega is the pullback to X). Even this is not strictly correct, as the vectors that the pullback \phi^*\omega acts on in the X domain are not specified. You can generally assume that they are the canonical directions, but again it is really too ambiguous, as the pull-back need not have pulled back to such a straightforward domain at all. It should really be written as

\int_{X} \phi^*\omega(\mathbf{e}_1^X, \ldots, \mathbf{e}_n^X ) dX
To make clear what you are evaluating.

Honestly, the standard notation of differential forms is like some of the rough-work scribbles you would find in the back of someone's notes! Understandable only by the author, and only at the time, and only in the correct context. It's no wonder people don't use them. They're simply not mature enough for practical application.
 
Last edited:
  • #54
the complicated notation is only used to teach all the details. in practice differential forms are more succinct than what they replace. look at maxwell's equations e.g. or stokes thm in form notation as opposed to the old way


as to the exact meaning of the notation in stokes,
it is in the hypothesis of stokes thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

then the theorem as stated says that the two sides of the equation are equal under ANY choice of parametrization, such that it is compatible with the given orientation, and where the orientation on the boundary is assumed compatible with that of the manifold.

what this means is also specified in the hypotheses, namely that when an oriented basis for the boundary space is given, then supplementing it by an outward (or inward) vector (it must be specified which, and I forget if it matters) gives an oriented basis for the manifold space.

these details are completely given in careful standard treatments such as spivak, calculus on manifolds.

if you are reading only, say, bachman, and he omits a few details, then i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

the students greatly enjoyed the exercise and got a lot out of reading it.

but if you are a professional, you need to read a professional treatment.
 
  • #55
i am also a picky expert and if you followed the thread earlier on this book you know bachman's imprecision and errors drove me right up the wall.

but his book was a terrific success for its intended audience, namely uncritical undergrads.
 
  • #56
mathwonk said:
if you are reading only, say, bachman, and he omits a few details, then i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

I have at least one other book, Differential Forms and Connections by R.W.R. Darling. This one is, to say the least, unhelpful. To be fair to Bachman, his is the only book I've seen so far which gives a geometric explanation of forms, and the only one so far that has actually explained to me what a form is. The others have various definitions that seem to go nowhere.

I was thinking about getting Spivak's book, but I don't know whether I need just Calculus on Manifolds, or the full blown set of A Comprehensive Introduction to Differential Geometry.

Edit:
The notation I was griping about above isn't at all exclusive to Bachman. It's the standard fare as far as I can tell.
 
Last edited:
  • #57
ObsessiveMathsFreak said:
One is a form, one is a variable of integration. It's a pretty big difference.
But the question is if the difference makes... er... a difference. :wink:


Surfaces don't always come with parameterisations
I'm using surface here as the higher dimensional analog of a curve.

But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.

You can generally assume that they are the canonical directions
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.


Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

It's just like how we talk about the ring R, rather than the ring (R, +, *, 0, 1)... and how we talk about the ring (R, +, *, 0, 1) without explicitly specifying what we mean by R, +, *, 0, 1, and by the parentheses notation. :smile:
 
  • #58
Hurkyl said:
But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.
...
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.
...
Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

You're absolutely right, and so is Spivak. There is no point in talking about overly general vectors, manifolds, and variables. Ultimately, we have to compute things using the standard basis in R^n, so everything is perfectly well defined using that space.

The terrible truth is, my first introduction to forms, and the main reason I'm studying them, was from Fourier Integral Operators by Duistermaat. I still haven't fully recovered, as you can tell.

mathwonk said:
it is in the hypothesis of stokes thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

By the way, thanks for that. Now I get it. The manifold has to have an orientation. But I still think, in my own mind, that including the d\sigma makes this more explicit.
 
Last edited:
  • #59
well you might want to write up your own account of the stuff. i did that in 1972 or so when i taught advanced calc the first time. i wrote it all out by hand at least 2-3 times, and it began to make sense to me. i had so many copies in fact i could practically give each class member his own original set of notes.

i then applied stokes to prove the brouwer fixed point theorem and the vector fields on spheres theorem of hopf. i learned a lot that way.
 
  • #60
then we had a seminar out of spivak's vol 1 of diff geom, the one giving background on manifolds.

i think calc on manifolds is a good place to start. and it's cheaper. the whole kaboodle is a bit long for me. but volume 2 is a classic. and vol 1 is nice too, especially for the de rham theory. i don't know what's in the rest as I do not own them, but gauss bonnet is appealing sounding.

but i always like to begin on the easiest most elementary version of a thing.

guillemin pollack is nice but kind of a cheat as they define things in special ways to make the proofs easier, so as i recall their gauss bonnet theorem is kind of a tautology. i forget but maybe they define curvature in a "begging the question" kind of way
 
