Musings on the physicists/mathematicians barrier

  • Thread starter: nrqed
  • Tags: Barrier
Summary
The discussion highlights the challenges faced by physicists transitioning to advanced mathematics, particularly in differential geometry and topology. It emphasizes the difficulty of reconciling prior physics knowledge with new mathematical concepts, often leading to frustration when mathematicians dismiss established physics principles. The conversation critiques the perceived immaturity of differential forms as a mathematical tool, arguing that they lack practical applications and clarity compared to vector calculus. Additionally, it points out the confusion stemming from inconsistent mathematical notation, which complicates communication between physicists and mathematicians. Ultimately, the thread underscores the need for a more collaborative approach to bridge the gap between these two disciplines.
  • #31
some comments: differential forms are a part of tensor calculus. to be precise, differential forms are what are called "alternating" tensors. this is made extremely clear in spivak's little book Calculus on Manifolds, which is recommended to everyone.

as to the common usefulness of differential forms:

Two (three?) words: "de Rham cohomology" may suffice. this is explained in guillemin and pollack, or at a more advanced level in bott-tu.
 
Last edited:
  • #32
i agree completely that it is frustrating, maybe hopeless, to try to learn mathematics that was created to express physical concepts, with no link to the physics that gave it life.

Most of us mathematicians do not write such books, out of ignorance i guess.

but there is hardly any subject more firmly settled in the mathematical and physical landscape than differential forms. Some of the most basic phenomena of mirror symmetry are expressed in relations between Hodge numbers, i.e. dimensions of cohomology spaces whose elements are represented by harmonic differential forms.

as to distinguished users among physicists, think Ed Witten, or look at the book by John Archibald Wheeler, and others; and the great Raoul Bott, who wrote the book on differential forms with Loring Tu, was an engineer who did applied mathematics as well as topology.


since the use of differential forms is not restricted to physics it may be unfair to expect math books to explain the link, as that would seem the domain of physics books, or books on mathematical physics.

i have also been frustrated in trying to learn how manifolds and forms are used in physics, only to be lectured solely about mathematics rather than about how the math expresses the physics. But these were physicists doing the lecturing.

they seemed to take the physics for granted and assumed that what was interesting was learning the mathematical formalism. i wanted to know how it expressed physical phenomena and what those phenomena were.

i sat through a week of a summer course in quantum cohomology and mirror symmetry in this state once.

congratulations for trying to create a dialogue.
 
Last edited:
  • #33
as to the confusion (which may have been explained here already) in such notations as the double integral of f(x,y) dx dy, and whether it does or does not equal the "same" double integral of f(x,y) dy dx, you must always be aware of the definitions.

i.e. different people use this same notation for different things. for some it is a limit of sums of products of values of f times areas of rectangles. then it does not matter which way you write it, dxdy or dydx.

but for other people, using differential forms, it is a limit of sums of products of values of f times oriented areas of rectangles, measured by the differential form dxdy or dydx. one gives minus the other for oriented area.

this is actually an advantage, as you will see if you look at the formula for change of variables in double integrals in most books. i.e. those people who say that dxdy and dydx are the same will tell you that when changing variables, you must use orientation-preserving changes of variables only, i.e. changes (u(x,y),v(x,y)) such that the jacobian determinant is positive.

this is unnecessary when using the forms version, as the orientation is built into the sign change from dxdy to dydx. i.e. you get a correct change of variables formula in all cases when using the forms version, but not when using the old fashioned version we learned in school.

so you might think of forms that way: they are the same as the old way, but they also include an enhancement to take care of all changes of variables, including those changing orientation. so they are more mature than the simpler, less sophisticated version.
 
  • #34
I hadn't heard of one-forms until I took GR, and hadn't heard of two-forms (I am assuming n-forms are also defined) until this forum. I think that physicists tend to focus on calculational ability rather than mathematical formalism.

Treating \frac{dy}{dx} as simple division will give correct results as long as the derivatives are total (and one has to be careful, can't do it with \frac{d^2 y}{dx^2}, that's why the 2 is between the d and the y, not after the y). If you have an expression like
\frac{dy}{dx} = x^2, you can multiply both sides by dx and integrate, because that operation is equivalent to applying \int dx to both sides and applying FTC. One can (in fact must) use the latter approach for higher order derivatives.
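The "multiply by dx and sum" reading can be checked numerically. A minimal sketch, not from the original post (the interval [0,1] and step count are assumptions):

```python
# Solve dy/dx = x^2 with y(0) = 0 on [0, 1] by accumulating dy = x^2 dx,
# i.e. a midpoint Riemann sum, and compare with the antiderivative x^3 / 3.
n = 100_000
dx = 1.0 / n
y = 0.0
for i in range(n):
    x = (i + 0.5) * dx   # midpoint sample of the i-th slice
    y += x * x * dx      # the "multiply both sides by dx" step, summed
assert abs(y - 1.0 / 3.0) < 1e-6
```

The accumulated y agrees with the FTC answer to within the discretization error.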

In regards to the \vec{D}= \epsilon \vec{E} issue, learning a bit of higher order physics can help. If \epsilon is a function of position (interface between glass and air, for instance), it should be written \epsilon(\vec{r}), but it's still a 0-form. If, on the other hand, a medium is non-isotropic (crystals), it becomes a rank 2 tensor. This would make it a 2-form, or "dual" to a 2-form (right?).

I have a question about forms. They're linear maps, no? I was told that a one-form is a linear map from vectors to scalars. Would that make a 2-form a map from vectors (or one-forms) to vectors (or one-forms)? If that were the case I don't see quite why D would be a 2-form, and E a 1-form.

As for the mathematicians v. physicists issue in general, I think it all depends on where you start. Physicists try to model physical reality, and use mathematics to do that. Being rigorous isn't necessary all the time, and often obscures understanding. Starting from first physical principles and often empirically derived laws, physicists try to make predictions. Mathematicians don't have empirically derived laws, only axioms. A physicist can always do an experiment to test his result. If they agree, he must've made an even number of mistakes in the derivation, and the result is still good. A mathematician often can't test things by experiment.
 
Last edited:
  • #35
2-forms are alternating bilinear maps on pairs of vectors. see david bachman's book, elsewhere here, on the geometry of differential forms.

such alternating conventions make sure one gets zero for the area of a "rectangle" of height zero, i.e. one spanned by two dependent vectors.
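These properties are easy to check concretely. A minimal sketch of dx∧dy on R^2 as the signed-area (determinant) map (the sample vectors are arbitrary choices):

```python
# The 2-form dx^dy evaluated on a pair of vectors u, v in R^2 is the
# determinant, i.e. the oriented area of the parallelogram they span.
def dx_wedge_dy(u, v):
    return u[0] * v[1] - u[1] * v[0]

u, v = (2.0, 0.0), (1.0, 3.0)
area = dx_wedge_dy(u, v)                              # oriented area: 6.0
assert dx_wedge_dy(v, u) == -area                     # alternating: dy^dx = -dx^dy
assert dx_wedge_dy(u, (2 * u[0], 2 * u[1])) == 0.0    # dependent vectors: degenerate "rectangle", area 0
```

The antisymmetry is exactly the dxdy = -dydx sign discussed earlier in the thread.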
 
  • #36
nrqed said:
This is an eye opener for me!

My training as a physicist has given me the feeling that riemannian sums are the fundamental definition!

I couldn't find my old notes, but I believe this was the way I was introduced to the concept of definite integration. I wish I could better draw pictures, but http://img337.imageshack.us/img337/4536/fundamentalhi1.png .

Let the operation \int f(x) dx be that which finds the family of antiderivatives to f(x), i.e. \int f(x) dx = F(x) + C \Rightarrow \frac{dF(x)}{dx} = f(x), F(x) is the "principal" antiderivative and C is an arbitrary constant.

OK. We want to find the area under the curve of f(x) between the points a and b, with a<b. Denote the function that gives the area between the point a and an arbitrary point x as A_a(x). Note x>a. We seek A_a(b) as our final answer. Note that as the area under the function at any one point is zero, we automatically have A_a(a) = 0.

OK, now to examine the function A_a(x) at an arbitrary point. In particular we want to examine its derivative. Consider the value of the area A_a(x). How will the area change as we change x? Let \Delta x be our change in x. Then the area between a and x + \Delta x is given by A_a(x+\Delta x). The difference between these is the shaded area on the graph, \Delta A. Specifically, A_a(x+\Delta x) - A_a(x) = \Delta A.

Now, look at the area \Delta A. As \Delta x \rightarrow 0, we can approximate this area using the area of the trapezium formed by (x,0),(x+\Delta x,0),(x+\Delta x,f(x+\Delta x)),(x,f(x)). By the area of a trapezium formula, we obtain \Delta A \cong \frac{f(x+\Delta x) + f(x)}{2} ((x+\Delta x) - x) \cong \frac{f(x+\Delta x) + f(x)}{2} \Delta x.

So equating our representations for \Delta A, we have;
A_a(x+\Delta x) - A_a(x) \cong \frac{f(x+\Delta x) + f(x)}{2} \Delta x
Dividing by \Delta x
\frac{A_a(x+\Delta x) - A_a(x)}{\Delta x} \cong \frac{f(x+\Delta x) + f(x)}{2}

Now take the limit as \Delta x \rightarrow 0 to equate both sides.
\lim_{\Delta x \rightarrow 0}\frac{A_a(x+\Delta x) - A_a(x)}{\Delta x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) + f(x)}{2}

We can see that the left hand side is the definition of \frac{d A_a(x)}{dx}

It can be seen that the limit of the average becomes;
\lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) + f(x)}{2} = f(x)

Therefore we have that:
\frac{d A_a(x)}{dx} = f(x)

And that means that A_a(x) must be an antiderivative of f(x), \int f(x) dx. i.e.

A_a(x) = F(x) + C

But which C? Well from above, we know that, A_a(a) = 0. So that means;

A_a(a) = F(a) + C
0 = F(a) + C
\Rightarrow C = - F(a)

So we have that the area under the curve f(x) between x and a is given by;
A_a(x) = F(x) - F(a)
Where F(x) is the "principal" antiderivative of f(x). In fact, F(x) can be any antiderivative as the constant differences will cancel. Thus we have that;
A_a(b) = F(b) - F(a)

We traditionally denote A_a(b) as \int_a^b f(x) dx to emphasise that
F(b) - F(a) = \int f(x) dx \vert_b - \int f(x) dx \vert_a. Where \vert_{d} stands for "evaluation at x=d".

Anyway, that was how I learned that the area under a curve between a and b is \int_a^b f(x) dx. I only saw the riemannian sum method later, and was initially quite dubious of it. Hopefully this long-winded post will be of some use to anyone who gets through it all.
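The derivation above can be checked numerically. A small sketch (f = cos on [0, 1] is an assumed example) that builds A_a(b) out of the trapezium steps and compares it with F(b) - F(a):

```python
import math

f, F = math.cos, math.sin            # f and a known antiderivative F
a, b, n = 0.0, 1.0, 100_000
dx = (b - a) / n
A = 0.0
for i in range(n):
    x = a + i * dx
    A += (f(x + dx) + f(x)) / 2.0 * dx   # the Delta A trapezium step
assert abs(A - (F(b) - F(a))) < 1e-9     # A_a(b) = F(b) - F(a)
```

The accumulated trapezium areas agree with the antiderivative difference to within the O(dx^2) error of the approximation.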
 
Last edited by a moderator:
  • #37
The above is a pretty nice synopsis of how integration was thought of by many people pre-Riemann, although it should be noted that integration has always been associated with limits of sums (hence, the elongated "S" symbol, standing for "Sum," that Leibniz -- and everyone since -- used).

Riemann (and Cauchy) were worried about several aspects of this way of thinking about integration:

1. How does one actually define the area underneath a given curve? For lines and circles, the area comes right from Euclidean geometry, but how can one rigorously define area for other curves? If there is no such definition, then one can't even define the function A_a.

This is where the limit of sums of the areas of rectangles comes from. It was in the lore since Newton (heck, even Archimedes used a 3-d version of this idea to find the volume formulae for some spatial objects), but Riemann is the one who formulated the definition of the Riemann sums and the limits of their areas rigorously.

2. How can one actually tell when a function has an antiderivative? For polynomials and other such nice functions it's obvious. But for most functions -- particularly, noncontinuous and/or nondifferentiable ones -- it's a bit of a tricky question.

This is where the Riemann sums come in handy. Using the Riemann sum definition of area and then proving FTC, one can show that any function that is Riemann integrable does in fact have an antiderivative.

3. Most importantly, Riemann was interested in expanding the definition of integration so that one could rigorously define integration over a larger class of functions than was possible under the then-current state of calculus.
 
  • #38
i concur. what you have proved is roughly this: if there is an area function for f>0 such that the area between c and d, divided by d-c, always lies between the max and min values of f, then the derivative of that area function is f.

but you must define the area function and show it has that property.

of course that property itself forces the definition. i.e. if the area is always squeezed between the areas of upper and lower rectangles, which is what the property says, then the only possible definition is the riemann definition.
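The squeeze is easy to see in a small sketch (f(x) = x^2 on [0,1] is an assumed example; since f is increasing there, left endpoints give the lower sum and right endpoints the upper sum):

```python
def riemann_sum(f, a, b, n, use_right):
    # Sum of f(sample point) * width over n equal subintervals.
    h = (b - a) / n
    return sum(f(a + (i + 1) * h if use_right else a + i * h) * h
               for i in range(n))

f = lambda x: x * x
n = 1000
lower = riemann_sum(f, 0.0, 1.0, n, use_right=False)
upper = riemann_sum(f, 0.0, 1.0, n, use_right=True)
assert lower <= 1.0 / 3.0 <= upper    # the area is squeezed between the sums
assert upper - lower < 1e-2           # and the squeeze tightens as n grows
```

Any area function satisfying the squeeze property must agree with the common limit of these sums.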
 
  • #39
what in the world is going on with my browser here today? what i am seeing is nothing like what you are seeing.
 
  • #40
ObsessiveMathsFreak said:
I couldn't find my old notes, but I believe this was the way I was introduced to the concept of definite integration. I wish I could better draw pictures, but http://img337.imageshack.us/img337/4536/fundamentalhi1.png .

Let the operation \int f(x) dx be that which finds the family of antiderivatives to f(x), i.e. \int f(x) dx = F(x) + C \Rightarrow \frac{dF(x)}{dx} = f(x), F(x) is the "principal" antiderivative and C is an arbitrary constant.

OK. We want to find the area under the curve of f(x) between the points a and b, with a<b. Denote the function that gives the area between the points a and an arbitrary point x, as A_a(x).

Hi again. Thanks for your input. I did read through it all and it is very beneficial to me to have this kind of discussion with mathematicians (as opposed to staying within the circle of non-mathematical physicists).

I guess the question is: what is the starting point one chooses for the definition of the integral. As you know, I am used to seeing it as being defined as a riemannian sum as a starting point (and then proving the interpretation of an area under the curve or proving the fundamental theorem of calculus starting from that).

You have a different starting point, but I am a bit confused in this post because first you define the integration as being an operator giving the antiderivative and then you seem to *define* it as the operator that gives the area under the curve. I know that one can show one from the other but I am not clear about what you see as being the true starting point.

I thought that the operation "integration gives the antiderivative" was your starting point.

My problem with this is that, it seems to me, it is less general than the definition as a riemannian sum. I mean, many integrals can be expressed as infinite sums that can be written down starting from the riemannian sum approach, but for which there is no simple closed-form expression for the antiderivative. So if one definition (the riemannian sum) works all the time and the other does not, I would think that the first would be used as the fundamental definition.

Of course, as others have pointed out, in *practice* one does not use the summation definition to calculate most integrals. I agree with this, but the fact that one usually uses antiderivative to evaluate integrals does not mean that it is necessarily a more fundamental definition.

The way I think about this is a bit similar to the rule concerning the differentiation of, say, x^n. I think of the definition of a derivative as being the usual limit as delta x ->0 of (f(x+ delta x) - f(x))/(delta x).

Now, of course, if I differentiate 40x^7 + 6 x^18 - x^31, I do NOT apply the limit definition; I use the usual trick for powers of x.
So when it comes to doing explicit calculations, the limit definition is almost never used. But still, it is the fundamental definition. The fact that the derivative of x^n is n x^(n-1) is just a consequence.
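That relationship is easy to check numerically. A small sketch (the exponent, point, and step size are arbitrary choices):

```python
# Difference quotient for f(x) = x^7 at x = 1.5, compared with the
# power-rule value 7 * x^6 that follows from the limit definition.
f = lambda x: x ** 7
x, h = 1.5, 1e-6
limit_approx = (f(x + h) - f(x)) / h   # the limit definition, at small h
power_rule = 7 * x ** 6
assert abs(limit_approx - power_rule) < 1e-2
```

Shrinking h drives the difference quotient toward the power-rule value.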

Similarly, the fact that the integral can be shown to correspond to the antiderivative is something that I see *following* (in a simple way) from the definition in terms of a riemann sum. So in practice, I of course find the antiderivative when I evaluate simple (=doable in terms of elementary functions) integrals, but in my mind I keep thinking that it's something that can be proven starting from the riemannian sum definition and that it is a useful "shortcut" (like bringing down the exponent and decreasing it by 1 in the case of the derivative of x^n...). But I realize from this thread that mathematicians may be thinking very differently from the way I do.


Now, what seems to me is that mathematicians prefer to *define* the integration as giving the antiderivative and then to see the riemannian sum as something secondary (and maybe not even necessary).

Hurkyl has even started to show me how *derivatives* could be defined in terms of axioms (such as the chain rule and linearity, etc.) without introducing the definition as a limit.

My mental block with all this is twofold.

First, there are many things about differentiation and integration that are fairly easy to understand using the riemannian sum approach or the limit approach (for derivatives) that are not that obvious without them (for example, it's not clear to me how to get from purely "integration = finding the antiderivative" to the area-under-the-curve view, and many other things). Now, I am not saying that it's not possible to get all the results I know about by proceeding that way, but it's not clear to me, and it seems that maybe more and more axioms need to be added to cover everything?! (like in proving that the derivative of ln(x) is 1/x...)



The second problem is that, considering for example integration, I simply do not see at all (even in principle) how to use the more formal approach of integration=finding the antiderivative for even the simplest type of physical applications. For example, as I have mentioned, finding the E field produced by an infinite line of charge, starting from the knowledge of the E field produced by a single point charge.
If someone could show me how to do this without *starting* from a riemannian sum, I would be grateful. However, it seems to me that it is impossible to do without starting from a riemannian sum.
I think that anybody having done even introductory-level calculus physics would agree that the riemannian sum (and the idea of very very small "pieces", which I have called infinitesimals and for which I have been ridiculed :smile:) is the only way to think in the context of any physical application.


I would be curious about how a mathematician would go about *setting up the integral* representing, say, the total mass of a sphere with some mass density \rho(r), say. How do mathematicians show how to do this calculation without starting from a Riemann sum and thinking in terms of "infinitesimal" volume element, small enough so that one can approximate the volume density in that element as constant (which is what I call an infinitesimal volume element ) and then summing over all the volume elements...i.e. doing a riemannian sum?? How do mathematicians do the calculation otherwise??
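For what it's worth, the shell-by-shell Riemann sum described here is straightforward to write down. A minimal sketch (the density \rho(r) = r and radius R = 2 are assumed examples, for which the exact mass is \int_0^R \rho(r) 4\pi r^2 dr = \pi R^4):

```python
import math

R, n = 2.0, 100_000
dr = R / n
rho = lambda r: r                    # assumed radial density
mass = 0.0
for i in range(n):
    r = (i + 0.5) * dr               # density treated as constant on this thin shell
    mass += rho(r) * 4.0 * math.pi * r * r * dr   # shell volume ~ 4 pi r^2 dr
exact = math.pi * R ** 4
assert abs(mass - exact) / exact < 1e-6
```

Each term is exactly the "infinitesimal volume element times density" of the physicist's setup; the limit of the sum is the integral.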


So in the end the question I have are:

A) Is it possible to simply define differentiation as the usual limit and integrals as Riemann sums? Is there any problem with that?

B) Or is it just a matter of taste to define instead integration as an operation that gives the antiderivative? But then how does one define the antiderivative in the case of integrations which do not lead to expressions that can be written in closed form (without, of course, getting into circular reasoning)? Can someone show me a general procedure that would define the antiderivative without involving riemannian sums in such a case?

And can one also work out everything about derivatives without using the limit definition?

C) In actual (physical) applications, such as finding E field of continuous charge distributions, etc, is there any alternative to the riemannian sum approach??



Thanks again for the very stimulating exchanges...


Patrick
 
Last edited by a moderator:
  • #41
nrqed said:
Now, what seems to me is that mathematicians prefer to *define* the integration as giving the antiderivative and then to see the riemannian sum as something secondary (and maybe not even necessary).

Actually, this is exactly the opposite of the way most mathematicians see integration.

Although integration in high school and entry-level college is often introduced this way, it is not very rigorous (as I pointed out above). The easiest rigorous method is via Riemann sums, which virtually all math majors learn about rigorously in their first real analysis course. Later on, one can also explore Lebesgue integration, but that's another story.

The fact that integration acts as anti-differentiation is a consequence of the definition of definite integrals as limits of Riemann sums. This is exactly analogous to the situation with derivatives: one defines them using limits, then proves theorems about them such as the power rule, then often restricts oneself to the *proven* rules (rather than the direct definition) when computing derivatives in practice.

So, I would say that, in the case of integrals, mathematicians and physicists are not particularly different from each other in outlook.
 
  • #42
the point of my post 38 was that if you define integrals as antiderivatives then you must give some conditions under which antiderivatives exist. this is usually via riemann sums.
 
  • #43
Here's a geometric take on differentiation -- all a derivative is is the slope of a tangent line. So if you can define tangent lines, you can get derivatives.


One way to define a tangent line is through secant lines. Secant lines are easy -- given two distinct points P and Q on a curve, there's a unique line through them. That line is the secant line to your curve through P and Q.

If we take the limit as P and Q both approach some point R, then the secant line through P and Q might converge to some line. That line is nothing more than the tangent line at R.
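A quick numeric sketch of the secant construction (the curve y = x^2 and the point R are assumed examples):

```python
# Slopes of secant lines through P = (x, x^2) and Q = (x + h, (x + h)^2);
# algebraically each slope is 2x + h, which tends to the tangent slope 2x
# as Q approaches P.
curve = lambda t: t * t
x = 1.0
secant_slopes = [(curve(x + h) - curve(x)) / h for h in (0.1, 0.01, 0.001)]
tangent_slope = 2 * x
assert abs(secant_slopes[-1] - tangent_slope) < 2e-3   # converging as h shrinks
```

The secant slopes 2.1, 2.01, 2.001 visibly approach the tangent slope 2.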



There's another intuitive idea -- that of a "multiple point". A tangent line to a curve is nothing more than a line that intersects your curve multiple times at a single point. Unfortunately, I don't see at the moment a direct way to rigorously define a multiple point. Though in the purely algebraic context, a multiple point is simply a multiple root of the equation "line = curve".
 
Last edited:
  • #44
Doodle Bob said:
Actually, this is exactly the opposite of the way most mathematicians see integration.

Although integration in high school and entry-level college is often introduced this way, it is not very rigorous (as I pointed out above). The easiest rigorous method is via Riemann sums, which virtually all math majors learn about rigorously in their first real analysis course. Later on, one can also explore Lebesgue integration, but that's another story.

The fact that integration acts as anti-differentiation is a consequence of the definition of definite integrals as limits of Riemann sums. This is exactly analogous to the situation with derivatives: one defines them using limits, then proves theorems about them such as the power rule, then often restricts oneself to the *proven* rules (rather than the direct definition) when computing derivatives in practice.

So, I would say that, in the case of integrals, mathematicians and physicists are not particularly different from each other in outlook.


Ok. Good. That's pretty clear. And that corresponds *exactly* to the view I have always had of integration (defined as a Riemann sum, which can then be used to relate to antidifferentiation, which is then used as a tool to carry out integrals explicitly in most cases).

You have expressed my "philosophy" very clearly. I had been led to think, by reading several posts, that mathematicians considered the definition as Riemann sums secondary and even maybe superfluous, which confused me greatly! But I probably had misinterpreted, simply. Thanks for setting the record straight.

Mathwonk said:
the point of my post 38 was that if you define integrals as antiderivatives then you must give some conditions under which antiderivatives exist. this is usually via riemann sums.

Ok, that makes sense. So one ends being led back to Riemann sums anyway in order to formalize viewing integration as a way to obtain an antiderivative. That's good to hear.

I am used to thinking of the integration process as being defined in terms of Riemann sums and *then* to "uncover" that the result is, lo and behold, associated with finding antiderivatives (so the "duality" integration-differentiation comes out as a neat *consequence* of the definition of the integration process).
I had started to feel from this thread (and others) that maybe mathematicians view integration as being more fundamentally *defined* as an "antidifferentiation" process (which, in other words, would turn the Fundamental Theorem of Calculus into an identity), with the "Riemann summation" point of view being only a consequence, not the fundamental starting point.

Thanks to both of you for your comments!

Patrick
 
  • #45
I know I haven't been clear on this, so let me try it again.


If someone said to me: "Develop everything rigorously from scratch", the first thing I would think of for definite integrals would be a limit of Riemann sums. (Unless I went down the Lebesgue route, or decided to try and be more creative.)


But if someone said to me: "apply definite integration to solve problems", Riemann sums would not commonly be something I think of.

The latter is the point I'm trying to make.
 
Last edited:
  • #46
here is a selection from the introduction to a book on tensors for science students written by professors of mechanical engineering and math.
i found this from the thread on free math books. the book seems very clear and connects the new point of view with the old.

[Bowen and Wang]

"In preparing this two volume work our intention is to present to Engineering and Science
students a modern introduction to vectors and tensors. Traditional courses on applied mathematics
have emphasized problem solving techniques rather than the systematic development of concepts.
As a result, it is possible for such courses to become terminal mathematics courses rather than
courses which equip the student to develop his or her understanding further.

As Engineering students our courses on vectors and tensors were taught in the traditional
way. We learned to identify vectors and tensors by formal transformation rules rather than by their
common mathematical structure. The subject seemed to consist of nothing but a collection of
mathematical manipulations of long equations decorated by a multitude of subscripts and
superscripts. Prior to our applying vector and tensor analysis to our research area of modern
continuum mechanics, we almost had to relearn the subject. Therefore, one of our objectives in
writing this book is to make available a modern introductory textbook suitable for the first in-depth
exposure to vectors and tensors. Because of our interest in applications, it is our hope that this
book will aid students in their efforts to use vectors and tensors in applied areas. "
 
  • #47
in practice, e.g. in diff eq, one usually encounters functions whose antiderivatives are completely unknown. thus one needs a procedure which will not only show they exist, but also give a way to construct or approximate the antiderivative [e.g. of cos(x^2)]. one is again led back to riemann sums.
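cos(x^2) is a good concrete test case: a Riemann sum constructs its antiderivative pointwise even though no elementary formula exists. A minimal sketch (midpoint sums; the evaluation point x = 1 is an arbitrary choice):

```python
import math

def antideriv_cos_sq(x, n):
    """Midpoint Riemann sum approximating the antiderivative of cos(t^2) on [0, x]."""
    h = x / n
    return sum(math.cos(((i + 0.5) * h) ** 2) * h for i in range(n))

# Refining the partition shows the sums converge to a definite value,
# which is exactly the "constructive" role of Riemann sums described above.
coarse = antideriv_cos_sq(1.0, 1_000)
fine = antideriv_cos_sq(1.0, 100_000)
assert abs(coarse - fine) < 1e-6
```

Evaluating at many x values would tabulate the antiderivative without ever writing down a closed form.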
 
  • #48
With traditional approaches, you can read the whole text and not gain a functional knowledge. For example, I found that knowing that something is a tensor if its components transform in a certain way still left me wondering
'what exactly is a tensor?'

Luckily, a few better texts and reading the posts on these boards (esp. mathwonk's!) have made me confident in my knowledge of what exactly a tensor is (and the difference between the components of a tensor, tensor fields, etc.), even if my knowledge of tensor calculus is still incomplete. Once you've got a good understanding of what a tensor is, it becomes ten times easier to advance your knowledge of the subject.
 
  • #49
Another slightly tangential thing I'd say is how many times do you see physics texts say that 'X applies locally' and how many times do physics texts say in more than a handwaving way what it means that 'X applies locally'?
 
  • #50
Physics is shortsightedly application driven & math is abstract past meaninglessness.

So mix them? No. It depends on the person. Judging by the number of approaches, I don't think it's possible to be all things to all people.
 
  • #51
Thrice said:
Physics is shortsightedly application driven & math is abstract past meaninglessness.
I'd like to see you justify both of those claims.
 
  • #52
Son Goku said:
I'd like to see you justify both of those claims.
Well it was a caricature. I'm just saying I believe the topics allow for many differences & there's no right approach that everyone should converge to. Even in math you'll find discrete vs analysis people or in physics there's theoretical & experimental types.
 
  • #53
son goku, i also like elementary hands-on calculations to begin to understand what a concept means. that's how the subjects began and how their discoverers often found them. but after a while one wants to pass to using their properties both to understand them and to calculate with them.

fundamental groups for instance have a basic property, their homotopy invariance. this shows immediately that a mobius strip and a circle have the same fundamental group, so there is no reason to calculate it again for a mobius strip. as for a circle, the best calculation is to notice that the exponential map is a contractible covering space. hence the fundamental group of the circle is essentially the group of lifts of arcs based at one point of the circle. such lifts are classified by their endpoints, which must be integers. hence the fundamental group is the integers.

similarly the fundamental group of a product is the product of the fundamental groups. so since the torus is a product of two circles, its fundamental group is a product of two copies of the integers.

or one could use the contractible covering map showing the torus is the quotient space of the plane modulo the integer lattice points in the plane, hence that lattice is the fundamental group.
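For reference, the two computations sketched above amount to the standard results (my compact summary, not mathwonk's wording):

```latex
\pi_1(S^1) \cong \mathbb{Z}, \qquad
\pi_1(T^2) \cong \pi_1(S^1 \times S^1) \cong \pi_1(S^1) \times \pi_1(S^1) \cong \mathbb{Z} \times \mathbb{Z}.
```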

etc etc
 
  • #54
mathwonk said:
son goku, i also like elementary hands-on calculations to begin to understand what a concept means. that's how the subjects began and how their discoverers often found them. but after a while one wants to pass to using their properties both to understand them and to calculate with them.
Interesting; it's probably due to my limited experience, but most of the mathematicians at my university generally learn things from the definitions first, an ability I always found very impressive.

Although as you said, either way of doing it (learning by calculating first and then moving to definition or vice-versa) is just a way of moving on into the interesting stuff.

As mathematician what would you say, in general, separates the way mathematics is presented in theoretical physics to the way it is presented in maths?
 
  • #55
Son Goku said:
Interesting; it's probably due to my limited experience, but most of the mathematicians at my university generally learn things from the definitions first, an ability I always found very impressive.
No one learns anything from a definition.

A mathematical definition is a thing austere and insurmountable. Its form comes only into focus from shelves above it, reached by winding and circuitous paths that loop around its sheer and unforgiving slopes. None can scale its glassy surface; no crack or foothold exists upon it. It is a cliff unmeant for climbing.

Do not accept ropes of rote let down by those on the definition's tip! To understand mathematics, one must muddy one's boots on the longer, less grandiose routes. For if you rely on dangling ropes to ascend this noble peak, then the time will come when your path leads you to a facade as yet unmastered, and no ropes will come. There you will stand awaiting one, surrounded by muddy but fruitful treks to the summit.
 
  • #56
Differential forms not mature?

Hi, OMF,

ObsessiveMathsFreak said:
My opinion, for what it's worth, is that differential forms are simply not a mature mathematical topic. Now, it's rigorous, complete and solid, but it's not mature. It's like a discovery made by a research scientist that sits, majestic but alone, waiting for another physicist or engineer to turn it into something useful. Differential forms, as a tool, are not ready for general use in their current form.

Wow! That's quite an impassioned indictment. Did you not read Harley Flanders, Differential Forms, with Applications to the Physical Sciences?

I am quite confident that you are quite wrong about forms. Not only is the theory of differential forms highly developed as a mathematical theory, it is highly applicable and greatly increases conceptual and computational efficiency in many practical engineering and physics tasks. The elementary aspects of forms and their applications have been taught to undergraduate applied math students at leading universities with great success for many years. (At my undergraduate school, the terminal course for applied math majors was based entirely on differential forms; all engineering students were also required to take this course, as I recall.) I am a big fan of differential forms and feel they are easy to use to great effect in mathematical physics; see for example http://www.math.ucr.edu/home/baez/PUB/joy for my modest attempt to describe a few of the applications I myself use most often.
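One standard illustration of that conceptual economy (my summary, not a claim made in the thread): on R^3 the single operator d subsumes grad, curl and div, and the identity d∘d = 0 packages two classical vector identities at once.

```latex
% Under the usual identifications of 1-forms and 2-forms on R^3
% with vector fields, the exterior derivative d acts as:
d(\text{0-form}) \leftrightarrow \nabla f, \qquad
d(\text{1-form}) \leftrightarrow \nabla \times \mathbf{v}, \qquad
d(\text{2-form}) \leftrightarrow \nabla \cdot \mathbf{v},
\\[1ex]
% and the single identity d o d = 0 encodes both classical facts:
d \circ d = 0
\quad\Longleftrightarrow\quad
\nabla \times (\nabla f) = \mathbf{0}
\;\text{ and }\;
\nabla \cdot (\nabla \times \mathbf{v}) = 0.
```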

ObsessiveMathsFreak said:
The whole development of forms was likely meant to formalise concepts that were not entirely clear when using vector calculus alone.

Not really. According to Elie Cartan himself (who introduced the concept of a differential form and was their greatest champion in the first half of the 20th century), the main impetus included considerations like these:

1. the need for a suitable formalism to express his generalized Stokes theorem,

2. the natural desire to express a differential equation (or a system of them) in a way which would be naturally diffeomorphism invariant (this is precisely the property which makes them so useful in electromagnetism).
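For reference, the generalized Stokes theorem in point 1 reads, for an (n-1)-form ω on a compact oriented n-manifold M with boundary:

```latex
\int_{M} d\omega \;=\; \int_{\partial M} \omega
% a single identity specializing to the fundamental theorem of
% calculus, Green's theorem, the Kelvin-Stokes theorem, and the
% divergence theorem, depending on the dimension of M.
```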

ObsessiveMathsFreak said:
A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.

Does this reasoning appear anywhere in any differential form textbook? No.

This claim seems very contrary to my own reading experience.
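For what it's worth, the computation in question (evaluating a one-form along a curve) is short enough to show in full. A minimal sympy sketch, with an example of my own choosing rather than one from the thread: pulling the one-form x dy back along the unit circle, which returns the enclosed area by Green's theorem.

```python
# Evaluate the one-form w = x dy along the unit circle by pulling it
# back through the parametrization t -> (cos t, sin t).
import sympy as sp

t = sp.symbols('t')
x, y = sp.cos(t), sp.sin(t)           # parametrize the curve
pullback = x * sp.diff(y, t)          # x dy pulls back to x(t) * y'(t) dt
integral = sp.integrate(pullback, (t, 0, 2 * sp.pi))
print(integral)  # pi: the area enclosed, as Green's theorem predicts
```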

ObsessiveMathsFreak said:
Nowhere is it even mentioned that certain vector fields might be restricted to such evaluations. Once the physics is removed, there is little motivation for forms beyond Stokes' theorem,

Not true at all. I hardly know where to begin, but perhaps it suffices to mention just one counterexample: the well-known recipe of Wahlquist and Estabrook for attacking nonlinear systems of PDEs is based upon reformulating said system in terms of forms and then applying ideas from differential rings analogous to Gaussian reduction in linear algebra. I can hardly imagine anything more practical than a general approach which has been widely applied with great success upon specific PDEs.

http://www.google.com/advanced_search?q=Wahlquist+Estabrook&hl=en

ObsessiveMathsFreak said:
I don't think differential forms are really going to go places. I see their fate as being that of quaternions. Quaternions were originally proposed as the foremost method of representation in physics, but were eventually superseded by the more applicable vector calculus. They are still used here and there, but nowhere near as much as vector calculus. Forms are likely to quickly go the same way upon the advent of a more applicable method.

I am sorry that you have apparently had such a miserable experience trying to learn how to compute with differential forms! I hope you will try again with a fresh outlook, say with a book like the one I cited above.

Chris Hillman
 
Last edited by a moderator:
  • #57
I've just come back to the forum after almost a year away and found this thread stimulating. The following quotes show why even a mechanical engineer is interested in differential forms:

'The important concept of the Lie derivative occurs throughout elasticity theory in computations such as stress rates. Nowadays such things are well-known to many workers in elasticity but it was not so long ago that the Lie derivative was first recognized to be relevant to elasticity (two early references are Kondo [1955] and Guo Zhong-Heng [1963]). Marsden and Hughes, 1983, Mathematical Foundations of Elasticity.'

'Define the strain tensor to be ½ of the Lie derivative of the metric with respect to the deformation'. Mike Stone, 2003, Illinois. http://w3.physics.uiuc.edu/~m-stone5/mmb/notes/bmaster.pdf

'…objective stress rates can be derived in terms of the Lie derivative of the Cauchy stress…' Bonet and Wood, 1997, Nonlinear continuum mechanics for finite element analysis.

'The concept of the Lie time derivatives occurs throughout constitutive theories in computing stress rates.' Holzapfel, 2000, Nonlinear solid mechanics.

'Cartan’s calculus of p-forms is slowly supplanting traditional vector calculus, much as Willard Gibbs’ vector calculus supplanted the tedious component-by-component formulae you find in Maxwell’s Treatise on Electricity and Magnetism' – Mike Stone again.

'The objective of this paper is to present…the benefits of using differential geometry (DG) instead of the classical vector analysis (VA) for the finite element (FE) modelling of a continuous medium (CM).' Henrotte and Hameyer, Leuven.

'The fundamental significance of the vector derivative is revealed by Stokes’ theorem. Incidentally, I think the only virtue of attaching Stokes’ name to the theorem is brevity and custom. His only role in originating the theorem was setting it as a problem in a Cambridge exam after learning about it in a letter from Kelvin. He may, however, have been the first person to demonstrate that he did not fully understand the theorem in a published article, where he made the blunder of assuming that the double cross product v × (∇ × v) vanishes for any vector-valued function v = v(x).' Hestenes, 1993, Differential Forms in Geometric Calculus. http://modelingnts.la.asu.edu/pdf/DIF_FORM.pdf
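The blunder Hestenes describes, assuming the double cross product v × (∇ × v) vanishes identically, is easy to check symbolically. A quick sympy sketch with a sample field of my own (not one from Hestenes):

```python
# Check that v x (curl v) does not vanish for a general field v(x).
import sympy as sp

x, y, z = sp.symbols('x y z')
v = sp.Matrix([y, z, x])                       # a sample vector field
curl = sp.Matrix([
    sp.diff(v[2], y) - sp.diff(v[1], z),
    sp.diff(v[0], z) - sp.diff(v[2], x),
    sp.diff(v[1], x) - sp.diff(v[0], y),
])                                             # here curl v = (-1, -1, -1)
triple = v.cross(curl)                         # (x - z, y - x, z - y)
print(triple.T)                                # visibly nonzero
```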

Several people on this thread have mentioned Flanders’ Differential Forms with Applications to the Physical Sciences (Dover 1989 ISBN 0486661695) and Flanders himself notes that:

'There is generally a time lag of some fifty years between mathematical theories and their applications…(exterior calculus) has greatly contributed to the rebirth of differential geometry…(and) physicists are beginning to realize its usefulness; perhaps it will soon make its way into engineering.'

However, the formation of engineers is different from that of mathematicians and perhaps even physicists and their aim is usually to get a numerical answer to a _design_ problem as quickly as possible. For example, 'stress' first appears on p.27 of Ashby and Jones’ Engineering Materials, in the context of simple uniaxial structures, but p.617 of Frankel’s Geometry of Physics, in the context of a general continuum. Engineering examples, taken from fluid mechanics and stress analysis rather than relativity or quantum mechanics, usually start with 'Calculate…' rather than 'Prove…'. So many otherwise-excellent books, including Flanders, aren’t suitable for most engineering students. However, what I'm learning here is of great help in trying to put together lecture notes for engineers. So I'd like to add my thanks to those here who've contributed to my limited understanding in this area.

Ron Thomson,
Glasgow.
 
Last edited by a moderator:
  • #58
Hi, Ron,

rdt2 said:
Several people on this thread have mentioned Flanders’ Differential Forms with Applications to the Physical Sciences (Dover 1989 ISBN 0486661695) and Flanders himself notes that:

'There is generally a time lag of some fifty years between mathematical theories and their applications…(exterior calculus) has greatly contributed to the rebirth of differential geometry…(and) physicists are beginning to realize its usefulness; perhaps it will soon make its way into engineering.'

Which he wrote in the 1960s, right? Referring to Cartan's work during the 1920s and 1930s? Indeed, by the 1980s, leading engineering schools such as Cornell were restructuring their undergraduate curricula to expose their students to differential forms.

rdt2 said:
However, the formation of engineers is different from that of mathematicians and perhaps even physicists and their aim is usually to get a numerical answer to a _design_ problem as quickly as possible. For example, 'stress' first appears on p.27 of Ashby and Jones’ Engineering Materials, in the context of simple uniaxial structures, but p.617 of Frankel’s Geometry of Physics, in the context of a general continuum. Engineering examples, taken from fluid mechanics and stress analysis rather than relativity or quantum mechanics, usually start with 'Calculate…' rather than 'Prove…'. So many otherwise-excellent books, including Flanders, aren’t suitable for most engineering students. However, what I'm learning here is of great help in trying to put together lecture notes for engineers. So I'd like to add my thanks to those here who've contributed to my limited understanding in this area.

Interesting. I entirely agree with you about the need to emphasize computational techniques, adding the need to offer plenty of simple but nontrivial examples. I mentioned Flanders because of the books I've seen (yeah, mostly in math libraries, not engineering libraries!), it comes closest to this spirit. In his introduction, he actually makes the same complaint: most students want to see some interesting applications presented in detail more than they want a lengthy exposition of "dry" theory.

In 1999, about the time I wrote the "Joy of Forms" stuff I linked to above, I actually was briefly involved in trying to teach differential geometry in general and forms in particular to graduate engineering students, so "Joy" is no doubt based in part upon that experience. This project resulted in disaster, in great part (I think) because I was directed to plunge in without having prepared a curriculum in advance and without knowing anything about the background of my students (this is certainly not a procedure which I advocated at the time, nor one which I would ever advise anyone else to adopt under any circumstances!).

Despite this failure, I remain entirely convinced that the world would be a much better place if engineering schools were more successful at teaching their students more sophisticated mathematics, *as tools for practical daily use in their engineering work*. Certainly exterior calculus and Groebner basis methods would top the list, but I'd also add combinatorics/graph theory, perturbation theory, and symmetry analysis of PDEs/ODEs. So I hope you persevere with your lecture notes.

Chris Hillman
 
  • #59
Chris Hillman said:
Wow! That's quite an impassioned indictment. Did you not read Harley Flanders, Differential Forms, with Applications to the Physical Sciences?
I've read a lot of books on differential forms. Not that one, but still many others. Many of them purport to have applications to the physical sciences, but usually just throw down the differential-forms version of Maxwell's equations by diktat, with little or nothing in the way of semantics. Worked examples are few, probably because the worked-out question would be longer than the route taken by regular vector calculus.

Chris Hillman said:
Not really, according to Elie Cartan himself (who introduced the concept of a differential form and was their greatest champion in the first half of the 20th century), the main impetus included considerations like these:

1. the need for a suitable formalism to express his generalized Stokes theorem,

2. the nature desire to express a differential equation (or system of same) in a way which would be naturally diffeomorphism invariant (this is precisely the property which makes them so useful in electromagnetism).

I'm skeptical. I feel the main impetus for differential forms was to formalise something that was never really valid in the first place, namely concepts like df, or equations like
df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy
instead of the actual equation
\frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}
This was always a precarious point of view, and in my own view the theory of forms does not legitimise the concept. Even Spivak acknowledges that there is some debate, in Calculus on Manifolds at the end of Chapter 2:
Calculus on Manifolds said:
It is a touchy question whether or not these modern definitions represent a real improvement over classical formalism; this the reader must decide for himself.
I have decided for myself. I don't approve of differential forms. At least, not as a replacement for or improvement on vector calculus. That's just my own opinion, but I would ask others to consider this point of view before imposing forms arbitrarily on undergraduate courses.
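For comparison, the standard reading reconciles the two displayed formulas rather than choosing between them: df is taken to be the linear functional on tangent vectors whose value on the velocity of a curve reproduces the classical chain rule.

```latex
% Feeding df the velocity vector of a curve gamma(t) = (x(t), y(t))
% recovers the classical total-derivative formula:
df\bigl(\dot\gamma(t)\bigr)
= \frac{\partial f}{\partial x}\,\frac{dx}{dt}
+ \frac{\partial f}{\partial y}\,\frac{dy}{dt}
= \frac{d}{dt}\,f\bigl(\gamma(t)\bigr)
```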


Chris Hillman said:
ObsessiveMathsFreak said:
A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.

Does this reasoning appear anywhere in any differential form textbook? No.
This claim seems very contrary to my own reading experience.
That is what is technically referred to as a contextomy. I will simply refer back to the entirety of the original post.

Chris Hillman said:
I hardly know where to begin, but perhaps it suffices to mention just one counterexample: the well-known recipe of Wahlquist and Estabrook for attacking nonlinear systems of PDEs is based upon reformulating said system in terms of forms and then applying ideas from differential rings analogous to Gaussian reduction in linear algebra. I can hardly imagine anything more practical than a general approach which has been widely applied with great success upon specific PDEs.
All very well, but this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe forms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of electromagnetism, and especially fluid dynamics. They may, like other advanced mathematical topics, be of use in describing new theories or methods, but this thread is about their promotion for more basic studies, as per nrqed's initial post.

If I remember correctly, nrqed's initial post was in the context of several other threads on the topic of differential forms and possibly topology, where the supposed benefits of forms were being lauded to nrqed, who, quite rightly, simply didn't see the benefit in the frankly massive amount of formalism required to study these topics. He's absolutely right. Topology in particular is now a disaster area for the newcomer. 100+ years of investigations, disproofs, counterexamples, theorems and revisions have led to the axioms and definitions of topology being completely unparsable.

A great many topology books offer nothing but syntax with no semantics at all. Differential forms texts fare little better. To a good physicist, semantics is everything, and hence the subject will appear to the great majority of them to be devoid of use. That's actually a problem with a lot of mathematics, and modern mathematics in particular. Syntax is presented, but semantics is frequently absent.
 
  • #60
ObsessiveMathsFreak said:
I've read a lot of books on differential forms. Not that one, but still many others. Many of them purport to have applications to the physical sciences, but usually just throw down the differential-forms version of Maxwell's equations by diktat, with little or nothing in the way of semantics. Worked examples are few, probably because the worked-out question would be longer than the route taken by regular vector calculus.

My point exactly. A couple of authors try to give examples from mechanics, but they always appear very contrived, suggesting that forms may be fundamentally unsuitable in some areas. If you want to read Marsden and Hughes' 'Mathematical Foundations of Elasticity' or suchlike, then knowledge of forms is required. The question is, how many engineers and physicists want to read Marsden and Hughes?

I have decided for myself. I don't approve of differential forms. At least, not as a replacement or improvement for vector calculus. That's just my own opinion, but I would ask others to consider this point of view before imposing forms arbitrarily on undergraduate courses.

I'm less certain. I want to expose students to forms as a complement rather than a replacement for vector calculus. They'll judge in later life whether they're useful or whether, like most of their lecture notes, they can be consigned to the little round filing cabinet.

All very well, but this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe forms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of electromagnetism, and especially fluid dynamics.

Oddly enough, fluid dynamics was one of the areas where I thought that differential forms might have most application. I'm less sure about stress analysis, where the tensors are all symmetric.

Ron.
 
