Musings on the physicists/mathematicians barrier

1. Jul 25, 2006

nrqed

After having spent some time trying to learn differential geometry and differential topology (my background is in physics phenomenology) I can't help making the following observation.

I think it is harder to learn the maths starting from a background in physics than learning the math from scratch (i.e. being formed as a mathematician). And the reason is that in *addition* to learning the math concepts, someone with my background feels the need to make the connection with everything he/she has learned before. That's a normal thing to do. If the maths are so powerful and more general, everything that was known before should be "expressible" in the language of this new and more powerful formalism.

And this is when one hits almost a brick wall. Because a common reaction from the more mathematically inclined and knowledgeable people is to reject off-hand everything the physicist has learned (and has used to make correct calculations!!) as being rubbish and almost infantile.
But that just creates frustration. Because the physicist has done thousands of calculations with the less sophisticated concepts, it's not possible to scratch everything as being wrong and start with a totally independent formalism and never make the connection. That's the main problem: there seems to be almost some contempt from many (surely not all) people more well versed in pure maths toward simple physics. And yet, it feels to me that mathematicians should be very interested in bridging the gap between the pure and more abstract aspects of maths and physics calculations.

I don't mind at all realizing that I get something correct by luck because I am doing something that works only as a special case, for example. That's the kind of thing that I *actually* want to see happening when learning more advanced maths, so that I can see that I was limited to special cases and how the maths allows me to go further.
But if I am told flatly that everything I have used before is plain wrong, this is hard to understand and creates a huge barrier in understanding a new mathematical formalism which seems then completely divorced from any actual practical calculations.

The example that comes to mind first is the physicist's view of infinitesimals.

I am running out of time on a public terminal but will write more about what I mean in a later post, if this one does not get pulled.

I better run for cover

2. Jul 25, 2006

ObsessiveMathsFreak

I have studied the sum and entirety of differential forms, and have thus far found little of use in them. The generalised Stokes' theorem was nice, but only just about worth the effort.

My opinion, for what it's worth, is that differential forms are simply not a mature mathematical topic. Granted, the subject is rigorous, complete and solid, but it's not mature. It's like a discovery made by a research scientist that sits, majestic but alone, waiting for another physicist or engineer to turn it into something useful. Differential forms, as a tool, are not ready for general use in their current form.

There's not a lot that can save the topic from obscurity, given its current formulation. Divorced from physics, the study of forms becomes an exercise in fairly pointless abstraction. The whole development of forms was likely meant to formalise concepts that were not entirely clear when using vector calculus alone.

Let me explain. The units of the electric field E are volts per metre, V/m. The units of the electric flux D are coulombs per metre squared, C/m^2. E is measured along lengths, lines, paths, etc. D is measured across areas, surfaces, sheets, etc. Using vector calculus with the definition $$\mathbf{D}=\epsilon \mathbf{E}$$, it's not clear why one should be integrated along lines and the other over surfaces (unless you're a sharp physicist). However, defining E as a one-form and D as a two-form makes this explicit. A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.

Does this reasoning appear anywhere in any differential forms textbook? No. It is not even mentioned that certain vector fields might be restricted to such evaluations. Once the physics is removed, there is little motivation for forms beyond Stokes' theorem, which could probably be proved by other methods anyway. There is, in the main, a dearth of examples, calculations, reasoning and applications, beyond the rather dire presentations of the Faraday, Maxwell and four-current. All that effort to reduce Maxwell's equations from four to two is, frankly, embarrassing.

In short, the subject is not mature. Certainly not as mature as tensor analysis, and in no possible way as mature as vector calculus. Its lack of use supports this conclusion. Engineers, physicists, and indeed mathematicians, cannot be expected to use a method that is not yet ready to be used. There is no real justification for learning, or applying, this method when the problem can be solved more expediently and more clearly using tensor or vector calculus.

The primary problem is the notation. It just doesn't work. Trying to pass off canonical forms as a replacement for variables of integration simply is not tenable, and proponents do not help their argument by making fast-and-loose conversions between the two, totally unsupported by any formalism. The classic hole the notation digs for itself is the following:
$$\iint f(x,y) dxdy = \iint f(x,y)dydx$$
$$\iint f(x,y) dx\wedge dy = - \iint f(x,y)dy\wedge dx$$
And the whole supposed isomorphism breaks down. This is not good mathematics.
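Both displayed facts are easy to check mechanically; here is a small sympy sketch (the integrand $f = xy^2$ and the rectangle are arbitrary choices of mine) showing the order-independence of the plain iterated integral alongside the sign that orientation introduces:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**2  # arbitrary test integrand

# Plain iterated (Riemann) integrals do not care about the order (Fubini):
Ixy = sp.integrate(f, (x, 0, 1), (y, 0, 2))
Iyx = sp.integrate(f, (y, 0, 2), (x, 0, 1))
assert Ixy == Iyx

# The wedge's sign is the orientation of the area element: swapping the two
# edge vectors of a parallelogram flips the determinant, which is the
# dx^dy = -dy^dx rule in coordinates.
u, v = sp.Matrix([1, 0]), sp.Matrix([0, 1])
assert sp.Matrix.hstack(u, v).det() == -sp.Matrix.hstack(v, u).det()
```

In other words, the unsigned iterated integral and the oriented integral are two different objects written with the same symbols, which is exactly where the notational confusion lives.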

I don't think differential forms are really going to go places. I see their fate as being that of quaternions. Quaternions were originally proposed as the foremost method of representation in physics, but were eventually superseded by the more applicable vector calculus. They are still used here and there, but nowhere near as much as vector calculus. Forms are likely to go the same way quickly upon the advent of a more applicable method.

3. Jul 26, 2006

ObsessiveMathsFreak

The topics you mention are relatively esoteric, and highly mathematical. The purpose of my post was to emphasise that differential forms have not found their way into the applied mainstream. Electromagnetics, fluid dynamics, etc, are all still dominated by vector calculus. As nrqed mentioned, the expression of physical problems through differential forms is simply not done to any great degree.

As a mathematical tool, forms are not as usable as other methods. There are many pitfalls and potential sources of confusion embedded in the notation and framework. Again, the reluctance of the applied communities to use the method is a testament to its immaturity. We may have different definitions of maturity here, but my own is that the method must be ready for practical use.

I think the trouble stems from the treatment of forms as an integrand and a variable of integration when it is quite clear that they are not. There seems to be a lot of confusion about this point among the community, which again can be traced back to notation. The notation is confused and relies upon the user selecting, sometimes by chance, the correct relationship between canonical forms dx and variables of integration dx. This is a real mess, and isn't ready for mainstream application.

4. Jul 26, 2006

HallsofIvy

Staff Emeritus
Can someone explain to me the MATHEMATICAL content of this? If not, I will delete the thread.

5. Jul 26, 2006

nrqed

Well, it was partly to open up the discussion between the language of physicists and mathematicians, but I was not really expecting a much different reaction. Does anyone know a board/forum on the web where mathematicians are open-minded about relating advanced concepts of maths to the language used by physicists? I would appreciate the information.

Well, I was going to ask how to connect with physics.

For example, people say that a one-form is something you integrate over a line, and that a two-form is something one integrates over a surface. But things are not so simple!!

In E&M, for example, one encounters the line integral of the E field ($\int {\vec E} \cdot d{\vec l}$) in Faraday's law, but one also encounters the surface integral $\int {\vec E} \cdot d{\vec A}$ in Gauss' law. And the same situation appears with the B field.

Now, I realize that using the Hodge dual one can pass between forms of different degrees, etc. But usually math books will say that the E field is really a one-form and that the B field is a two-form, without explaining why.

This is one type of problem that I was alluding to.

Another one is the use of infinitesimals. It seems to be the consensus that the concept of infinitesimals is a completely useless one and that everything should be thought of as differential forms. (I am still wondering about a comment in the online book by Bachman where he says that not all integrals are over differential forms, btw.)

Consider the expression $df = \partial_x f \, dx + \partial_y f\, dy + \partial_z f \, dz$.
The view is usually that this makes sense only as a relation between differential forms. Of course, the way a physicist thinks of this is simply as expanding $f(x+dx, y+dy, z+dz) - f(x,y,z)$ to first order in the "small quantities" dx, dy and dz. I still don't understand what is wrong with this point of view.

At first it might seem that the goal of differential geometry is to eliminate completely the concept of "infinitesimal", but of course infinitesimals reappear when defining integrals anyway, as Obsessive pointed out.
Not only that, but it seems to me that the concept of infinitesimals is still all over the place, as part of the derivatives. For example, what does one mean by $\partial_x f$ if not the limit
$$\lim_{\Delta x \rightarrow 0} { f(x + \Delta x) - f(x) \over \Delta x}\,?$$
It is understood that $\Delta x$ is taken small enough that this expression converges to some value.

So why can't one think of $f(x+dx, y+dy, z+dz) - f(x,y,z)$ in the following way: compute $f(x+\Delta x, y+ \Delta y, z+\Delta z) - f(x,y,z)$ and take the deltas smaller and smaller until the dependence on them is linear. *That* is my definition of infinitesimals. I know the "small $\Delta x$" limit in the partial derivatives is well accepted, but it is rejected as being totally wrong for something like df.
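To make that "shrink the deltas until the dependence is linear" idea concrete, here is a small numerical sketch (the function $f = x^2 y$ and the point (1, 2) are arbitrary choices of mine):

```python
def f(x, y):
    return x**2 * y  # arbitrary test function

x0, y0 = 1.0, 2.0
dfdx, dfdy = 2*x0*y0, x0**2           # partial derivatives at (x0, y0)

errors = {}
for h in (1e-1, 1e-3, 1e-5):
    exact = f(x0 + h, y0 + h) - f(x0, y0)
    linear = dfdx*h + dfdy*h          # the "df" linear part
    errors[h] = exact - linear        # leftover shrinks like h**2
    print(h, errors[h])
```

The leftover dies off quadratically in h, which is precisely the sense in which the dependence "becomes linear" as the deltas shrink.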

Anyway, that's the kind of questions I wanted to discuss but I realize that it is not welcome here. Physicists can't understand maths, right?!?!
What I was trying to point out in my first post was that the difficulty is NOT mainly in understanding the maths. I can sit down with a math book and just follow the definitions and learn it as a completely new field. The difficulty, for a physicist, comes when trying to connect with one's previous knowledge.

But, as expected, this is deemed irrelevant and not of much worth here.

regards

Patrick

Last edited: Jul 26, 2006
6. Jul 26, 2006

Mickey

I think the barrier between physicists and mathematicians is more of a language barrier than anything else.

One might think, hey, don't they both speak the language of mathematics, the language of nature? (Some of you may already know what I think of that).

Mathematics is a consistent formal system, so it must be different from the language used to communicate it, because that language is inconsistent.

Notation plays a large role in communicating mathematics. The rules of mathematical notation are inconsistent, not just between groups of people, but within groups of people between different topics in mathematics (even if they may be consistent within topics). For example, tensor analysis uses superscript to distinguish different coordinates, but algebra ordinarily uses the subscript to distinguish different coordinates and superscript to denote exponents. The notation of tensor analysis may be consistent within tensor analysis, but not with the notational conventions of other mathematical topics.

Within the topic of tensors, mathematicians and physicists adopt differing conventions as well. Einstein, who we could say was initially much more physicist than mathematician, adopted the summation convention, or the omission of summation signs in favor of an assumption regarding the positions of a letter in both superscript and subscript. This convention allows the physicist to refer specifically to a coordinate system, whereas the mathematician's notation is independent of a coordinate system. Penrose believes this supposed conflict between mathematicians and physicists is resolved by the convention known as abstract-index notation (and that the conflicts of abstract-index notation are resolved by diagrammatic notation). He talks about all of this in Chapter 12 of "The Road to Reality."

I remember a scene from "The Mechanical Universe" videos where Goodstein said that, while struggling with GR, Einstein said that he had a newfound appreciation for mathematicians and what they do. Einstein had to account for all the rules and nagging little exceptions to the rules in order to make everything consistent. Goodstein used the opportunity to say that, although physicists help us understand the universe, mathematicians are the "guardians of purity of thought."

So, when you feel you've hit a brick wall, think of it as learning the language of the guardians.

Last edited: Jul 26, 2006
7. Jul 26, 2006

ObsessiveMathsFreak

But, strictly speaking, one should integrate the electric flux D over surfaces. Forms make this explicit by enabling you to define E and D in such a way that each can only be integrated over the correct type of manifold, i.e. curve, surface or volume. D is a two-form, and is in fact the Hodge dual of E if you wanted to be more "concise" about things.

Mathematically there is no explanation whatsoever. E and B are simply vector fields in vector calculus. The reason comes only from the physics. Physically speaking, the reason is that E is the electric field and B is in fact the magnetic flux density. Its units can be measured in webers per metre squared, Wb/m^2, so it must be evaluated as a flow through areas; strictly speaking, it's a two-form. Its "dual" is the magnetic field H, which is a one-form like the electric field.

This might be considered a matter of extreme pedantry, particularly when the fields and fluxes typically differ only by the constants $$\epsilon$$ and $$\mu$$. But sometimes you need to be pedantic. In my case, this is useful as I am working with materials in which the permeability and permittivity are not constant. Your mileage may vary.

I understand what you mean by infinitesimals to be variables of integration dx, dy, dz etc. You seem to have been introduced to variables of integration from the point of view of Riemann sums, i.e. $$\int f(x) dx = \lim_{\Delta x \rightarrow 0} \sum f(x_i) \Delta x$$. Strictly speaking, dx is not an infinitesimally small $$\Delta x$$, but is rather an operator applied to a function to obtain an "anti-derivative", i.e. to integrate something. Similarly, strictly speaking, dy/dx is not an infinitesimally small ratio, but is the operator d/dx applied to the function y(x), i.e., $$\frac{d}{dx}\left(y(x)\right)$$.

However, your view is not entirely wrong: when it comes down to the final solution of many physical problems, numerical estimates of integration and differentiation are used, and dx and dy do become approximated by $$\Delta x$$ and $$\Delta y$$.
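As a minimal numerical sketch of that approximation (the integrand $x^2$ on [0, 1] is an arbitrary choice of mine):

```python
# Riemann-sum approximation of the integral of x^2 over [0, 1] (exact value 1/3)
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                        # a finite Delta x standing in for "dx"
    return sum(f(a + i*dx) * dx for i in range(n))

approx = {n: riemann_sum(lambda x: x*x, 0.0, 1.0, n) for n in (10, 1000, 100000)}
print(approx)  # values approach 1/3 as n grows
```

The finite $$\Delta x$$ does all the computational work; the limit is what the symbol dx bookkeeps.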

As to the point of view that every integration should be thought of as a differential form, or taken over differential forms: this is clearly nonsense. Differential forms are ultimately reduced to integral equations once they are applied to specific manifolds, i.e. curves or surfaces, etc., depending on the form. They are no more a replacement for integration than integration is a replacement for addition.

Remember, df is an operator on vectors, and has nothing to do with variables of integration or infinitesimals except that it is written the same way, and that the two are often interchanged in a rather flippant manner to convert a "differential form integral" into an integral proper; but as I've said above, this conversion is fraught with peril. The form dx is not a variable of integration, or an infinitesimal. It's an operator applied to vectors. You have to tack on the "right" variable of integration later.

Variables of integration "dx" are operators applied to integrands, and in fact the integrands in this case are differential forms. The full equation is in fact:

$$\int f(x) dx(\vec{V}(x)) dx$$
Here the first dx is a form, and the second is a variable of integration. This is slightly clearer in the following.
$$\int f(t) dx(\vec{V}(t)) dt$$
Here the variable of integration "x" has been replaced with a "t".

Forms are operators on vectors. Variables of integration are operators on integrands. The two are not the same, and the only reason people are led to believe so is due to poor notation.

Last edited: Jul 26, 2006
8. Jul 26, 2006

nrqed

Very interesting.

So from this point of view, one should not think of E and D as being simply proportional to each other; there is truly a deep difference. To do E&M on a curved manifold, for example, the simple proportionality relation that physicists are used to would break down then? Or could one see this even on a flat manifold, by going to some arbitrary curvilinear coordinate system? (I know this would be answered by looking at how the Hodge dual depends on a change of coordinate system.) *This* would be the kind of insight that would make the differential form approach to E&M much more interesting!

Ok. Interesting. What is the way to formalize this? Is dx (say) the operator, or should one think of $\int dx$ as the operator?

That makes sense to me, except I am wondering how, in this approach, one goes about finding any derivative. For example, how does one prove that $$\frac{d}{dx} \left(x^2 \right) = 2 x$$?
If one defines d/dx as an operator, how does one find how it acts on anything? And if the only way to find an explicit result is to go through the limit definition, then isn't this tantamount to say that the definition of the operator *is* the limit?
Ok. That's good to hear. Because books sometimes say (not formally) that differential forms are the things we integrate over!
Ok. It's nice to hear this said explicitly!
That's clear (and I wish books would say it this way!!!!). The question is then how is the vector chosen? I mean, the way it is usually presented is as if $dx({\vec V})$ is always equal to one (or am I missing something?).

Regards

Patrick

Last edited: Jul 26, 2006
9. Jul 26, 2006

ObsessiveMathsFreak

On a curved manifold embedded in Euclidean space, the proportionality relation is still fine. I'm not sure what happens in curved spacetime.

However, in certain materials D is not linearly proportional to E, and may not in fact have the same direction. And of course, if the electric permittivity is not constant, for example if the range of your problem encompasses different materials, then the proportionality constant would not be strictly correct either.

In any case, the flux must only be evaluated through surfaces, and the field only along curves. You can get away with this using vector calculus if you are very careful, or if it's not vital to the problem, but differential forms make this more explicit.

D is also known as the polarization density, and B as the magnetic flux density, if that's any help. These are densities per unit area, and so must be "summed" or integrated over areas to get the overall flux through that area. If you go back and examine the SI units of each of the quantities E, D, H, B, $$\rho$$, J, etc., you will see which are zero-, one-, two- and three-forms, simply by noting which are expressed in metres, metres squared, metres cubed and of course metres^0 (no metres in the units).

$$\int dx$$ is the operator. The variable and the sign must be taken together. On their own, each is relatively meaningless. It's just the way things are done. The integral sign usually denotes the limits, making the whole thing a definite integral.

Yes, the definition of the d/dx operator is the limit.
$$\frac{d}{dx}(f(x)) = \lim_{\Delta x \rightarrow 0}\frac{ f(x+ \Delta x) - f(x)}{\Delta x}$$
But please remember that the dx in d/dx is not at all the same thing as the dx in $$\int dx$$. Of course, when people work with differential equations, such as dy/dx = g(x) becoming $$\int dy = \int g(x) dx$$, the dx is often treated like a variable and appears to be the "same thing", but in reality the two perform totally different operations.

This distinction is often hidden or unstated, but for example, you would never do the following: ln(dy/dx) = ln(dy) - ln(dx). I think you would agree instinctively that this is somehow wrong. Another example might be that $$\frac{d^2 y}{dx^2} = g(x)$$ and $$(\frac{dy}{dx})^2 = g(x)$$ are two very different equations.

The vector can be any function of t that you wish. Usually, however, $$\vec{V}(t)$$ is the tangent vector of the parametrisation, so that $$dx(\vec{V}(t)) = \frac{dx}{dt}$$; in other words, dx(V(t)) is the Jacobian. And later on dx^dy(V1,V2) will be the 2D Jacobian, dx^dy^dz(V1,V2,V3) the 3D Jacobian, etc.

And of course, usually, $$dx(\vec{V}(x)) = \frac{dx}{dx} = 1$$
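Written out as a sketch, this is just the familiar change-of-variables rule. Parametrising the interval by $x = x(t)$,

$$\int_{x(a)}^{x(b)} f(x)\, dx = \int_a^b f(x(t))\, \frac{dx}{dt}\, dt,$$

so feeding the one-form dx the tangent vector of the parametrisation produces exactly the Jacobian factor dx/dt, and in two dimensions $dx \wedge dy$ fed a pair of tangent vectors produces the 2x2 Jacobian determinant of the substitution.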

Last edited: Jul 26, 2006
10. Jul 27, 2006

Hurkyl

Staff Emeritus
Maybe this is sort of the problem. Infinitesimals simply aren't there in standard analysis -- not even in integrals or derivatives. I think, maybe, you are doing yourself a bit of harm thinking "Oh, it's just using infinitesimals after all."

The point of the formalism is to provide rigorously defined tools that can be used to rigorously achieve the same informal purposes we use infinitesimals for. Because they are intended for the same purposes, they will of course have similarities... but presumably, if you can modify your thinking to pass from the informal infinitesimal approach to more rigorous equivalents, you will be better off.

For example, whenever you think about "infinitesimals", try to mentally substitute the notion of "tangent vectors". So when you would normally think about an "infinitesimal neighborhood around P"... try thinking instead about the "tangent space at P".

Then, once you've done that, you no longer have to think about a cotangent vector as something that tells you how "big" an infinitesimal displacement is... you can now think of it as a linear functional on the tangent space.

In fact, I'm rather fond of using the notation P+e to denote the tangent vector e based at the point P. With this notation, we can actually write things like:

f(P+e) = f(P) + f'(P) e

and be perfectly rigorous. This is even better than infinitesimals -- that is an actual equality! If we were using infinitesimals, it is only approximate, and we have to wave our hands and argue that the error is insignificantly small.

Through axioms! You define d/dx to be an operator that:
(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + df/dx g
(4) satisfies dx/dx = 1

and I think that's all you need.
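As a sketch of how the axioms alone pin down explicit derivatives, (3) and (4) already give the derivative of $x^2$:

$$\frac{d}{dx}\left(x^2\right) = \frac{d}{dx}(x \cdot x) = x\,\frac{dx}{dx} + \frac{dx}{dx}\,x = 2x,$$

and applying (3) to $1 \cdot 1 = 1$ gives $\frac{d}{dx}(1) = 2\,\frac{d}{dx}(1)$, hence $\frac{d}{dx}(1) = 0$, so constants have zero derivative as they should. Induction then handles every power of x, with no limit in sight.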

11. Jul 27, 2006

nrqed

Yes, I am starting to realize this. I realize at some level that even before thinking in terms of differential forms, in plain old calculus I have to stop thinking in terms of infinitesimals. Your comments and Obsessive's comments are making me realize this and this is helpful.

I also realize that if I was only doing pure maths, that would be very easy for me to do. I would just think in terms of operators and their properties and so on. But the difficulty is in now trying to connect this to years of physics training. I am not closed minded to seeing things in a new light and I have a strong desire to move beyond the simple minded picture of maths I have from years of physics training. But the difficulty is in reexpressing everything I know and have worked with over the years in terms of this new language.

For example, just to mention an elementary example, almost at the high school level: given the expression for the E field produced by a point charge, what is the E field at a distance "d" from an infinitely long line of charge with linear charge density $\lambda$?
The physicist's approach is to separate the line into tiny sections of "infinitesimal" length dl, write the expression for the E field produced by this small section, making the approximation that all the charge in this section, $\lambda dl$, can be assumed to be located at the centre (say), and sum the contributions from all the sections. What would I mean by "infinitesimal" in that context? Well, I imagine making the dl smaller and smaller until the sum converges to some value. In some sense, I realize that I always mean a "practical infinitesimal", so maybe that's why infinitesimals don't bother me.
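Written as the limit of that sum, the procedure above is just an integral; a sympy sketch (with the overall constant $\lambda/4\pi\epsilon_0$ factored out, d the perpendicular distance, and l the position along the line, which is my own labelling):

```python
import sympy as sp

l, d = sp.symbols('l d', positive=True)

# Perpendicular component of the field from the element at position l,
# with the overall constant lambda/(4*pi*eps0) factored out:
integrand = d / (d**2 + l**2)**sp.Rational(3, 2)

total = sp.integrate(integrand, (l, -sp.oo, sp.oo))
print(total)  # 2/d, i.e. E = lambda / (2*pi*eps0*d)
```

The symbolic evaluation never mentions "small pieces" at all, yet reproduces the familiar $\lambda / 2\pi\epsilon_0 d$ result.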

But I am open to enlarging my views on this, if my views are incorrect at some level. But then my first question is obviously: what is the correct (i.e. mathematically sound) way to do the above calculation? How would a mathematician go about finding the E field of the infinite line of charge starting from the expression for a point charge? I know that the expression would end up being the same, but what would be the interpretation of a mathematican?

This is very interesting and I do like this way of thinking about things. And I would have no problem if I was focusing on maths only. But then I run into conceptual problems when I try to connect to my physics background, do you see what I mean?
Ok. I like this.

But then how would you show that dsin(x)/dx = cos(x)? It seems that the above axioms can only be applied to obtain explicit results for powers of x! Of course, maybe the answer is that one must apply the axioms to the Taylor expansion of sin(x). But how does one define the Taylor expansion of sin(x)?? Usually, it's through derivatives, but here this leads to a vicious cycle.

Patrick

12. Jul 27, 2006

Hurkyl

Staff Emeritus
If you look carefully, you just said "I take the limit of Riemann sums", and we know that the limit of Riemann sums is an integral!

Another way to think about it is this.

You know the electrostatic field due to a point charge. You know if you add charges, you simply add the fields. The limit of this "operation" to an arbitrary charge distribution is simply a convolution -- i.e. an integral.

It all depends on how you define sin(x). Actually, when building everything up from scratch, I usually see people define sin(x) to be equal to the power series.
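Explicitly, with that definition (a sketch),

$$\sin x := \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,$$

and differentiating term by term (legitimate because the series converges uniformly on compact sets) gives

$$\frac{d}{dx}\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = \cos x,$$

with no vicious circle, since the series itself is the definition rather than something derived from the derivatives.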

13. Jul 27, 2006

nrqed

I agree completely. But then infinitesimals are unavoidable (in the sense I described), no? I mean, the only way to do the calculation is to do it the way I described.

My problem is that if I go back to an integral like $\int x \, dx$, or *any* integral, I still think of it in exactly the same way: as breaking up into small pieces and taking the limit until the sum converges.

But then I am told "no, no, you should not think of the dx there as being something small that is summed over; it is an operator (in standard analysis) or a differential form (in diff geometry)".
So what is wrong in thinking of all integrals as being broken into a large number of small pieces and summing over? That has worked for all situations I have encountered so far, including doing calculations in thermodynamics, E&M, relativity, etc etc. And that works as well for cases for which the integrand is not exact so that the path matters. I just think of the differential (dx, dV, dq or whatever) as being a very small element of length, volume, charge, whatever. Small enough that the sum converges. And then summing over.

Then the question that someone like me obviously encounters when learning about differential forms is "why"? I mean, is it just a neat trick to unify vector calculus identities? Maybe, and that's fine. But the feeling I get is that even when I reach the point of actually carrying out the integration, it is wrong to revert back to thinking of the dx (say) as a small (infinitesimal) element. But that's the only way I know of actually carrying out an integral! Especially if the integrand is not exact!

Ok. Fair enough (so the definition of sin(x) as the opposite side over the hypotenuse in a right-angle triangle becomes secondary in that point of view? Just curious). What about the derivative of ln(x)? How would one show that the derivative is 1/x?

Regards

Patrick

14. Jul 27, 2006

Hurkyl

Staff Emeritus
The thing to remember is that all of these things you do to characterize familiar operations like sines, logarithms, and derivatives work in both directions. For example, from the 4 axioms I provided, you can derive differential approximation (and Taylor series!), and then conclude that derivatives can be computed with limits.

If you're curious, if you defined the trig functions as power series, then you would probably wind up defining angle measure via the inverse trig functions, from which their geometric interpretation follows trivially.

You could even define two sine functions -- one geometrically, and one analytically -- and then eventually prove they are equal.

When's the last time you actually calculated an integral that way? I usually calculate it symbolically, and if that doesn't work I'll try to approximate the integrand with something I can calculate symbolically, and make sure the error is tolerable. And, of course, if I use a computer program it will decompose it into small but still finite regions.

(emphasis mine)

Because you lock yourself into that way of thinking. It keeps you from looking at a problem in a way that might be conceptually simpler. And it doesn't work for problems that don't have a density interpretation.

One good example is the exterior derivative. It's an obvious thing to do from a purely algebraic perspective. It has a wonderful geometric interpretation via Stokes' theorem. But I'd be at a total loss if you asked me to describe it pointwise.
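To spell that interpretation out in the plane (a sketch): for a one-form $\omega = P\,dx + Q\,dy$, the exterior derivative is

$$d\omega = \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx \wedge dy,$$

and Stokes' theorem $\oint_{\partial S} \omega = \iint_S d\omega$ is then precisely Green's theorem: the definition is pure algebra, the meaning is global and geometric, and there is no obvious "tiny pieces" picture of $d$ itself.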

15. Jul 27, 2006

nrqed

Ok. But then all the proofs physicists go through to obtain derivatives of functions using $$\lim_{\Delta x \rightarrow 0} { f(x + \Delta x) - f(x) \over \Delta x}$$
become completely unnecessary?? A mathematician would look at those proofs and consider them completely unnecessary? Or plain wrong?
And if these proofs are unnecessary, do they work by "chance"? Or are they considered as complete and "convincing" to mathematicians as they are to physicists?

This is again the problem I always find myself facing. Mathematicians have a language which is different but at some level *must* be related to the physicist's approach. But it is different enough that it feels like there is the physicist's approach over here and the mathematician's approach over there, and it's really hard to get anyone even interested in bridging the gap. That's what I am hoping to find help with here.

Patrick

16. Jul 27, 2006

nrqed

That sounds interesting and I would love to see this. It's not obvious to me (I still don't quite see how to obtain that dln(x)/dx = 1/x starting from the 4 axioms). Again, I am not trying to be difficult; I am just saying that seeing a few of the usual results derived explicitly from the 4 axioms (such as the derivative of ln(x), a differential approximation of some function, one Taylor series) would clarify things greatly for me. I guess I learn a lot by seeing explicit examples.

Maybe, but one can also obtain the divergence theorem, Stokes' theorem, etc., completely by simply breaking volumes or surfaces into tiny elements, writing derivatives as limits where higher powers of the "infinitesimals" are neglected, summing, etc. All those theorems then come out without any problem (that's the way they are derived in physicists' E&M classes). Now, maybe there is something deeply wrong with this approach and differential forms are the only really correct way to do it, but that's not completely clear to me.

Consider the integration of something which is not an exact form, now. Let's say the integral of y dx over some given path. I have no problem defining this by breaking the path into "infinitesimals" and adding the contributions over the path. This is in no way more difficult conceptually than any other integral. But how does one think about doing the integral using the language of differential forms if the integrand cannot be written as d(something)?? How does one get the answer?

Thanks!!

Patrick

17. Jul 28, 2006

Hurkyl

Staff Emeritus
I did miss something. Unfortunately, I'm more used to algebraic treatments.

Now that I've thought it over more, I realize what I've missed is that I should postulate the mean value theorem as an axiom. So...

' (the prime) is an operator on a certain class of continuous functions satisfying:

(1) f' is continuous
(2) If a < b, there exists a c in (a, b) such that:
f(b) - f(a) = f'(c) (b - a)

(The synthetic treatments I've seen for integration use the mean value theorem for integrals as an axiom, that's why I think I need it here)
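As a concrete sanity check of axiom (2), here is a quick numerical sketch that finds the guaranteed point c for one arbitrarily chosen function, f(x) = x³ on [0, 1], where 3c² = 1 forces c = 1/√3:

```python
# Find a c in (a, b) with f(b) - f(a) = f'(c) * (b - a), by bisection.
# This relies on f' being continuous and monotone on [a, b], which
# holds for the example below (f'(x) = 3x^2 on [0, 1]).
def find_mvt_point(f, fprime, a, b, tol=1e-12):
    target = (f(b) - f(a)) / (b - a)   # the required mean slope
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fprime(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = find_mvt_point(lambda x: x**3, lambda x: 3 * x**2, 0.0, 1.0)
print(c)  # ≈ 0.577350 = 1/sqrt(3)
```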

I suppose with these axioms it's somewhat more clear how to deduce the traditional definition of derivative.

Blah, now I'm going to be spending the next few days trying to figure out how to do it without postulating the MVT. I know the rules I mentioned before give you the derivatives of anything you can define algebraically... but I haven't yet figured out what continuity condition I need to extend it to arbitrary differentiable functions.

That only works when you're working with things that can be broken into tiny elements. (e.g. you'll run into trouble with distributions, manifolds without metrics, and more abstract spaces of interest)

But that's not the point I was trying to make. We generally aren't interested in breaking things into tiny pieces so that we can sum them, and the like. That's just one means towards computing the thing in which we're really interested. And there are other means. For example, Eudoxus's method of exhaustion.

Fixating on integrals as sums of tiny pieces distracts you from focusing on what is really interesting, like what the integral actually computes!

IMHO, it's much more important to focus on what something does, than what it is. (Especially since there are so many different, yet equivalent, ways to define what it "is")

Last edited: Jul 28, 2006
18. Jul 28, 2006

Hurkyl

Staff Emeritus
Well, here's an example.

In the punctured plane (that is, there's a hole at the origin), there is a differential form w that measures angular distance about the origin. This is not an exact form.

So how would I think about integrating this form along a curve? Simple: I compute the angular displacement between the starting and ending points, and adjust it as necessary by counting how many times the curve loops around the origin. That's much simpler than trying to imagine breaking our curve up into little tiny pieces, and then adding up $(-y \, \Delta x + x \, \Delta y) / (x^2 + y^2)$ over all of them.

But, unless I was tasked with actually computing something, I wouldn't even put that much effort into thinking about the integral. All I care about is that "integrating this form gives me angular distance about the origin" and I wouldn't think about it any further.
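The "angular distance" reading is easy to check against the brute-force sum. A small numerical sketch (curve chosen arbitrarily: the unit circle traversed twice, so the prediction is 4π):

```python
import math

# Riemann-sum the form w = (-y dx + x dy)/(x^2 + y^2) along path(t),
# t in [0,1], evaluating w at the midpoint of each tiny chord.
def integrate_w(path, N):
    total = 0.0
    for i in range(N):
        x0, y0 = path(i / N)
        x1, y1 = path((i + 1) / N)
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        dx, dy = x1 - x0, y1 - y0
        total += (-ym * dx + xm * dy) / (xm * xm + ym * ym)
    return total

# Unit circle traversed twice: winding number 2, angular distance 4*pi.
two_loops = lambda t: (math.cos(4 * math.pi * t), math.sin(4 * math.pi * t))
print(integrate_w(two_loops, 100000))  # ≈ 12.56637 = 4*pi
```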

Last edited: Jul 28, 2006
19. Jul 28, 2006

nrqed

Ok. That's an interesting example. But it has a simple interpretation because this happens to be $d\theta$ (locally). I can see that removing the origin makes it not exact.

But let's be more general. Let's say that instead we consider integrating
$(-y^2 \, dx + x \, dy) / (x^2 + y^2)$ along, say, a straight line from one point to another. How would you set about doing this integral without "breaking" up the trajectory into "small" pieces?
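For what it's worth, the standard computational move here is to pull the form back along a parameterization, which turns it into an ordinary one-variable integral. A sketch for one arbitrarily chosen segment, from (1,0) to (0,1): with (x, y) = (1-t, t) the form becomes (t² - t + 1)/(2t² - 2t + 1) dt, whose value works out to 1/2 + π/4.

```python
import math

# Pullback of (-y^2 dx + x dy)/(x^2 + y^2) along (x, y) = (1-t, t):
# dx = -dt, dy = dt, so the integrand in t is (t^2 - t + 1)/(2t^2 - 2t + 1).
def integrand(t):
    return (t * t - t + 1) / (2 * t * t - 2 * t + 1)

# Midpoint Riemann sum of the pulled-back one-variable integral.
N = 100000
approx = sum(integrand((i + 0.5) / N) for i in range(N)) / N

print(approx)              # ≈ 1.285398
print(0.5 + math.pi / 4)   # ≈ 1.285398, the closed-form value
```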

Thank you for the feedback, btw! I really appreciate it.

Patrick

Last edited: Jul 29, 2006
20. Jul 29, 2006

nrqed

I hope this isn't driving you crazy.

I guess I learn more from specific examples, so just seeing how to get the derivative of sin(x) and ln(x) would clarify things greatly (for sin(x), are you still saying that the infinite expansion must be postulated?).

I can appreciate this but for now I just wanted to understand integration over differential forms. And since one can always "feed" vectors to differential forms to get numbers, I did not see any problem with this approach (of breaking up into tiny pieces).

Fair enough. But what if you have to integrate something as simple as "y dx" over a specified path? How does one proceed without breaking it into tiny pieces?

After our exchanges, I dug out a book I have: "Advanced Calculus: A Differential Forms Approach" by Harold Edwards.

Maybe his presentation is not standard, but the way he integrates over forms is exactly the way I would:

"In general, an integral is formed from an integrand which is a 1-form, 2-form or 3-form, and a domain of integration which is, respectively, an oriented curve, a surface or a solid. The integral is defined as the limit of approximating sums, and an approximating sum is formed by taking a finely divided polygonal approximation to the domain of integration, "evaluating" the integrand on each small oriented polygon by choosing a point P in the vicinity of the polygon, by evaluating the functions A, B, etc. at P to obtain a constant form, and by evaluating the constant form on the polygon in the usual way."
(p. 26, 1994 edition)

Here, when he talks about "evaluating", he means feeding line elements, triangles, or cubes to the differential forms. And A, B, etc. are the functions multiplying the basis forms, as in A dx ^ dy + B dx ^ dz...
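Edwards's recipe is easy to carry out literally for a 2-form. A sketch (the form A dx ^ dy with A = x*y over the unit square is chosen arbitrarily; its exact integral is 1/4): on each tiny oriented square, feed the two edge vectors to the constant form and sum.

```python
# A constant 2-form dx ^ dy evaluated on an ordered pair of vectors:
# (dx ^ dy)(u, v) = u_x * v_y - u_y * v_x  (a signed area).
def wedge_dx_dy(u, v):
    return u[0] * v[1] - u[1] * v[0]

# Sum A(P) * (dx ^ dy)(u, v) over a grid of N*N tiny squares, with
# P a point near each cell and (u, v) the cell's edge vectors.
N = 500
h = 1.0 / N
total = 0.0
for i in range(N):
    for j in range(N):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        total += (x * y) * wedge_dx_dy((h, 0.0), (0.0, h))

print(total)  # ≈ 0.25, the integral of x*y over the unit square
```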

Again, I don't know if that's common thinking among people more mathematically sophisticated than me.

Now, I agree with what you said in a previous post that this is not the way one usually goes about carrying out integrals! One uses the fundamental theorem of calculus (FTC). I agree, but I see the FTC as a shortcut to get the answer when it is possible that way (it's not always possible to find a closed-form expression for the antiderivative), whereas the limit of sums remains the fundamental definition.

Then I read this in the book:
"At this point two questions arise: How can this definition of "integral" be made precise? How can integrals be evaluated in specific cases? It is difficult to decide which of these questions should be considered first. On the one hand, it is hard to comprehend a complicated abstraction such as "integral" without concrete numerical examples; but, on the other hand, it is hard to understand the numerical evaluation of an integral without having a precise definition of what the integral is. Yet, to consider both questions at the same time would confuse the distinction between the *definition* of integrals (as limits of sums) and the *method* of *evaluating* integrals (using the FTC). This confusion is one of the greatest obstacles to understanding calculus and should be avoided at all costs."

(all emphasis is his).

Then, after discussing integrals as sums in the infinite limit, he gets to the FTC which he states as having two parts:

Part I: Let F(t) be a function for which the derivative F'(t) exists and is a continuous function for t in the interval [a,b]. Then
$$\int_a^b F'(t) \, dt = F(b) - F(a)$$.

Part II: Let f(t) be a continuous function on [a,b]. Then there exists a differentiable function F(t) on [a,b] such that f(t) = F'(t).

Part I says that in order to evaluate an integral it *suffices to write the integrand as a derivative*.

Part II says that theoretically this procedure always works; that is, theoretically any continuous integrand can be written as a derivative... Anyone who has been confronted with an integrand such as
$f(t) = { 1 \over \sqrt{ 1 - k^2 \sin^2 t}}$ with or without a table of integrals knows how deceptive this statement is. In point of fact, II says little more than *the definite integral of a continuous function over an interval converges*.

Emphasis his....
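The contrast Edwards draws is easy to see numerically. A sketch (intervals and k chosen arbitrarily): for F'(t) = cos(t) the antiderivative is elementary, so the approximating sum reproduces F(b) - F(a); for the elliptic integrand 1/sqrt(1 - k² sin² t) no elementary antiderivative exists, and the approximating sum itself is the practical way to a number.

```python
import math

# Midpoint Riemann sum of f over [a, b] with N pieces.
def riemann(f, a, b, N):
    h = (b - a) / N
    return sum(f(a + (i + 0.5) * h) for i in range(N)) * h

# Part I in action: sum of cos over [0, pi/2] vs sin(pi/2) - sin(0) = 1.
print(riemann(math.cos, 0.0, math.pi / 2, 100000))  # ≈ 1.0

# Edwards's deceptive case: the complete elliptic integral K(k).
# The sum converges fine even though no elementary F(t) exists.
k = 0.5
f = lambda t: 1.0 / math.sqrt(1.0 - k * k * math.sin(t) ** 2)
print(riemann(f, 0.0, math.pi / 2, 100000))  # the value of K(0.5)
```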

A final quote:

Statement II is confusing to many students because of a misunderstanding about the word "function". When one thinks of a function, one unconsciously imagines a simple rule such as F(t) = sin(sqrt(t)) which can be evaluated by simple computation, by consultation of a table or, at worst, by a manageable machine computation. The function defined by $F = \int f(t) dt$ need not be a standard function at all, and a priori there is no reason to believe that it can be evaluated by any means other than by forming approximating sums and estimating the error as in the preceding chapter.

I know you know all that, but this fits perfectly with my conception of doing integrals (using differential forms or not).

I am wondering if you have criticisms of what he is saying.

Regards

Patrick

Last edited: Jul 29, 2006