
#1
Jul25-06, 05:12 PM

Sci Advisor
HW Helper
P: 2,886

After having spent some time trying to learn differential geometry and differential topology (my background is in physics phenomenology), I can't help making the following observation.

I think it is harder to learn the maths starting from a background in physics than learning the maths from scratch (i.e. being trained as a mathematician). And the reason is that in *addition* to learning the math concepts, someone with my background feels the need to make the connection with everything he/she has learned before. That's a normal thing to do. If the maths are so powerful and more general, everything that was known before should be "expressible" in the language of this new and more powerful formalism.

And this is when one hits almost a brick wall. Because a common reaction from the more mathematically inclined and knowledgeable people is to reject offhand everything the physicist has learned (and has used to make correct calculations!!) as being rubbish and almost infantile. But that just creates frustration. Because the physicist has done thousands of calculations with the less sophisticated concepts, it's not possible to scrap everything as being wrong and start with a totally independent formalism and never make the connection. That's the main problem: there seems to be almost some contempt from many (surely not all) people more well versed in pure maths toward simple physics. And yet, it feels to me that mathematicians should be very interested in bridging the gap between the pure and more abstract aspects of maths and physics calculations.

I don't mind at all realizing that I get something correct by luck because I am doing something that works only as a special case, for example. That's the kind of thing that I *actually* want to see happening when learning more advanced maths, so that I can see that I was limited to special cases and how the maths allows me to go further. But if I am told flatly that everything I have used before is plain wrong, this is hard to understand, and it creates a huge barrier to understanding a new mathematical formalism, which then seems completely divorced from any actual practical calculations.

The example that comes to mind first is the physicist's view of infinitesimals. I am running out of time on a public terminal but will write more about what I mean in a later post, if this one does not get pulled. I better run for cover.



#2
Jul25-06, 07:21 PM

P: 406

I have studied the sum and entirety of differential forms, and have thus far found little of use in them. The generalised Stokes' theorem was nice, but only just about worth the effort.

My opinion, for what it's worth, is that differential forms are simply not a mature mathematical topic. Now, the subject is rigorous, complete and solid, but it's not mature. It's like a discovery made by a research scientist that sits, majestic but alone, waiting for another physicist or engineer to turn it into something useful. Differential forms, as a tool, are not ready for general use in their current form. There's not a lot that can save the topic from obscurity, given its current formulation. Divorced from physics, the study of forms becomes an exercise in fairly pointless abstraction.

The whole development of forms was likely meant to formalise concepts that were not entirely clear when using vector calculus alone. Let me explain. The units of the electric field E are volts per metre, V/m. The units of the electric flux, D, are coulombs per metre squared, C/m^2. E is measured along lengths, lines, paths, etc. D is measured across areas, surfaces, sheets, etc. Using vector calculus with the definition [tex]\mathbf{D}=\epsilon \mathbf{E}[/tex], it's not clear why one should be integrated along lines and the other over surfaces (unless you're a sharp physicist). However, defining E as a one-form and D as a two-form makes this explicit: a one-form must be evaluated along lines, and a two-form must be evaluated over surfaces. Does this reasoning appear anywhere in any differential forms textbook? No. It is not even mentioned that certain vector fields might be restricted to such evaluations. Once the physics is removed, there is little motivation for forms beyond Stokes' theorem, which could probably be proved by other methods anyway. There is, in the main, a dearth of examples, calculations, reasoning and applications, beyond the rather dire presentations of the Faraday, Maxwell and four-current forms. All that effort to reduce Maxwell's equations from five to three is, frankly, embarrassing. In short, the subject is not mature.

Certainly not as mature as tensor analysis, and in no possible way as mature as vector calculus. Its lack of use supports this conclusion. Engineers, physicists, and indeed mathematicians, cannot be expected to use a method that is not yet ready to be used. There is no real justification for learning, or applying, this method when the problem can be solved more expediently, and more clearly, using tensor or vector calculus. The primary problem is the notation. It just doesn't work. Trying to pass off canonical forms as a replacement for variables of integration simply is not tenable, and proponents do not help their argument by making fast and loose conversions between the two, totally unsupported by any formalism. The classic hole the notation digs for itself is the following:

[tex]\iint f(x,y)\, dx\, dy = \iint f(x,y)\, dy\, dx[/tex]

[tex]\iint f(x,y)\, dx\wedge dy = -\iint f(x,y)\, dy\wedge dx[/tex]

And the whole supposed isomorphism breaks down. This is not good mathematics.

I don't think differential forms are really going to go places. I see their fate as being that of quaternions. Quaternions were originally proposed as the foremost method of representation in physics, but were eventually superseded by the more applicable vector calculus. They are still used here and there, but nowhere near as much as vector calculus. Forms are likely to quickly go the same way upon the advent of a more applicable method.
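The sign flip in the wedge-product integral can be checked numerically: under a change of parametrization, the integral of a 2-form is weighted by the Jacobian determinant, so reversing the orientation reverses the sign. A minimal Python sketch (the helper name `integrate_dxdy` is mine, not from any library):

```python
def integrate_dxdy(x, y, n=50):
    """Integrate the 2-form dx ^ dy over the unit square of (u, v) parameters,
    using the pullback dx ^ dy = (x_u y_v - x_v y_u) du dv (midpoint rule)."""
    h = 1.0 / n
    eps = 1e-6
    total = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * h, (j + 0.5) * h
            # central-difference partial derivatives of the parametrization
            x_u = (x(u + eps, v) - x(u - eps, v)) / (2 * eps)
            x_v = (x(u, v + eps) - x(u, v - eps)) / (2 * eps)
            y_u = (y(u + eps, v) - y(u - eps, v)) / (2 * eps)
            y_v = (y(u, v + eps) - y(u, v - eps)) / (2 * eps)
            total += (x_u * y_v - x_v * y_u) * h * h
    return total

# identity parametrization x=u, y=v: +1, the positively oriented area
area = integrate_dxdy(lambda u, v: u, lambda u, v: v)
# swapped parametrization x=v, y=u: -1, same square, opposite orientation
flip = integrate_dxdy(lambda u, v: v, lambda u, v: u)
```

The ordinary iterated integral gives +1 either way; the forms formalism is tracking orientation, which the plain dxdy notation silently discards.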



#3
Jul26-06, 11:20 AM

P: 406

The topics you mention are relatively esoteric, and highly mathematical. The purpose of my post was to emphasise that differential forms have not found their way into the applied mainstream. Electromagnetics, fluid dynamics, etc., are all still dominated by vector calculus. As nrqed mentioned, the expression of physical problems through differential forms is simply not done to any great degree.

As a mathematical tool, forms are not as usable as other methods. There are many pitfalls and potential sources of confusion embedded in the notation and framework. Again, the reluctance of the applied communities to use the method is a testament to its immaturity. We may have different definitions of maturity here, but my own is that the method must be ready for practical use. I think the trouble stems from the treatment of forms as an integrand and a variable of integration when it is quite clear that they are not. There seems to be a lot of confusion about this point among the community, which again can be traced back to notation. The notation is confused and relies upon the user selecting, sometimes by chance, the correct relationship between canonical forms dx and variables of integration dx. This is a real mess, and it isn't ready for mainstream application.



#4
Jul26-06, 12:38 PM

Math
Emeritus
Sci Advisor
Thanks
PF Gold
P: 38,898

Musings on the physicists/mathematicians barrier
Can someone explain to me the MATHEMATICAL content of this? If not, I will delete the thread.




#5
Jul26-06, 03:10 PM

Sci Advisor
HW Helper
P: 2,886

Well, I was going to ask how to connect with physics. For example, people say that a one-form is something you integrate over a line, and that a two-form is something that one integrates over a surface. But things are not so simple!! In E&M, for example, one encounters the integral of the E field over a line ([itex] \int {\vec E} \cdot d{\vec l} [/itex]) in Faraday's law, but one also encounters the surface integral [itex] \int {\vec E} \cdot d{\vec A} [/itex] in Gauss' law. And the same situation appears with the B field. Now, I realize that using the Hodge dual one can go between forms of different degrees, etc. But usually math books will say that the E field is really a one-form and that the B field is a two-form, without explaining why. This is one type of problem that I was alluding to.

Another one is the use of infinitesimals. It seems to be the consensus that the concept of infinitesimals is a completely useless one and that everything should be thought of as differential forms. (I am still wondering about a comment in the online book by Bachman where he says that not all integrals are over differential forms, btw.) Consider the expression [itex] df = \partial_x f \, dx + \partial_y f\, dy + \partial_z f \, dz[/itex]. The view is usually that this makes sense only as a relation between differential forms. Of course, the way a physicist thinks of this is simply as expanding [itex] f(x+dx, y+dy, z+dz) - f(x,y,z) [/itex] to first order in the "small quantities" dx, dy and dz. I still don't understand what is wrong with this point of view. At first it might seem that differential geometry has as a goal to eliminate completely the concept of "infinitesimal", but of course they reappear when defining integrals anyway, as Obsessive pointed out. Not only that, but it seems to me that the concept of infinitesimals is still all over the place, as part of the derivatives. For example, what does one mean by [itex] \partial_x f[/itex], if not the limit

[tex] \lim_{\Delta x \rightarrow 0} \frac{ f(x + \Delta x) - f(x)}{\Delta x}[/tex] ?

It is understood that delta x is taken small enough that this expression converges to some value. So why can't one think of [itex] f(x+dx, y+dy, z+dz) - f(x,y,z) [/itex] in the following way: compute [itex] f(x+\Delta x, y+ \Delta y, z+\Delta z) - f(x,y,z) [/itex] and take the deltas smaller and smaller until the dependence on them is linear. *That* is my definition of infinitesimals. But I know that the "small Delta x" limit in the partial derivatives is well accepted, while it is rejected as being totally wrong for something like df.

Anyway, that's the kind of question I wanted to discuss, but I realize that it is not welcome here. Physicists can't understand maths, right?!?! What I was trying to point out in my first post was that the difficulty is NOT mainly in understanding the maths. I can sit down with a math book, just follow the definitions, and learn it as a completely new field. The difficulty, for a physicist, comes when trying to connect with one's previous knowledge. But, as expected, this is deemed irrelevant and not of much worth here. So go ahead, erase the thread.

regards,
Patrick
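The "take the deltas smaller and smaller until the dependence is linear" picture can be tested directly: the leftover error of the first-order expansion shrinks like the *square* of the step, so the ratio error/step goes to zero. A small Python sketch (the function name `linearization_error` is mine):

```python
import math

def linearization_error(f, fprime, x, dx):
    """Error of the first-order ('infinitesimal') expansion
    f(x + dx) ~ f(x) + f'(x) dx."""
    return abs(f(x + dx) - f(x) - fprime(x) * dx)

# As dx shrinks, the error vanishes faster than dx itself (it is O(dx^2)),
# which is exactly the 'keep only terms linear in dx' rule.
ratios = [linearization_error(math.sin, math.cos, 1.0, dx) / dx
          for dx in (1e-1, 1e-2, 1e-3)]
```

Each entry of `ratios` is roughly a tenth of the previous one, which is the numerical face of "the dependence becomes linear".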



#6
Jul26-06, 04:56 PM

P: 212

I think the barrier between physicists and mathematicians is more of a language barrier than anything else.
One might think: hey, don't they both speak the language of mathematics, the language of nature? (Some of you may already know what I think of that.) Mathematics is a consistent formal system, so it must be different from the language used to communicate it, because that language is inconsistent. Notation plays a large role in communicating mathematics. The rules of mathematical notation are inconsistent, not just between groups of people, but within groups of people between different topics in mathematics (even if they may be consistent within topics). For example, tensor analysis uses superscripts to distinguish different coordinates, but algebra ordinarily uses subscripts to distinguish different coordinates and superscripts to denote exponents. The notation of tensor analysis may be consistent within tensor analysis, but not with the notational conventions of other mathematical topics.

Within the topic of tensors, mathematicians and physicists adopt differing conventions as well. Einstein, who we could say was initially much more physicist than mathematician, adopted the summation convention, the omission of summation signs in favor of an assumption regarding the positions of a letter in both superscript and subscript. This convention allows the physicist to refer specifically to a coordinate system, whereas the mathematician's notation is independent of a coordinate system. Penrose believes this supposed conflict between mathematicians and physicists is resolved by the convention known as abstract-index notation (and that the conflicts of abstract-index notation are resolved by diagrammatic notation). He talks about all of this in Chapter 12 of "The Road to Reality."

I remember a scene from "The Mechanical Universe" videos where Goodstein said that, while struggling with GR, Einstein said that he had a newfound appreciation for mathematicians and what they do.
Einstein had to account for all the rules and nagging little exceptions to the rules in order to make everything consistent. Goodstein used the opportunity to say that, although physicists help us understand the universe, mathematicians are the "guardians of purity of thought." So, when you feel you've hit a brick wall, think of it as learning the language of the guardians. 



#7
Jul26-06, 05:08 PM

P: 406

This might be considered a matter of extreme pedantry, particularly when the fields and fluxes typically differ only by the constants [tex]\epsilon[/tex] and [tex]\mu[/tex]. But sometimes you need to be pedantic. In my case, this is useful as I am working with materials in which the permeability and permittivity are not constant. Your mileage may vary. However, your view is not entirely wrong: when it comes down to the final solution of many physical problems, numerical estimates of integration and differentiation are used, and dx and dy do become approximated by [tex] \Delta x[/tex] and [tex]\Delta y[/tex].

As to the point of view that every integration should be thought of as a differential form, or taken over differential forms: this is clearly nonsense. Differential forms are ultimately reduced to integral equations once they are applied to specific manifolds, i.e. curves or surfaces, etc., depending on the form. They are no more a replacement for integration than integration is a replacement for addition. Variables of integration "dx" are operators applied to integrands, and in fact the integrands in this case are differential forms. The full equation is in fact

[tex]\int f(x)\, dx(\vec{V}(x))\, dx[/tex]

Here the first dx is a form, and the second is a variable of integration. This is slightly clearer in the following:

[tex]\int f(t)\, dx(\vec{V}(t))\, dt[/tex]

Here the variable of integration "x" has been replaced with a "t". Forms are operators on vectors. Variables of integration are operators on integrands. The two are not the same, and the only reason people are led to believe so is poor notation.
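The "form acting on vectors, then an ordinary integration" reading above can be sketched in a few lines of Python: evaluate the one-form P dx + Q dy on the curve's velocity vector at each parameter value, then do a plain dt-sum (the names `integrate_oneform` and `path` are mine):

```python
def integrate_oneform(P, Q, path, n=10000):
    """Integral over a curve of the one-form w = P dx + Q dy: at each t, feed
    the velocity vector (x'(t), y'(t)) to the form, then do an ordinary dt-sum."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x0, y0 = path(t - h / 2)
        x1, y1 = path(t + h / 2)
        xm, ym = path(t)
        dxdt, dydt = (x1 - x0) / h, (y1 - y0) / h   # velocity components
        total += (P(xm, ym) * dxdt + Q(xm, ym) * dydt) * h
    return total

# w = y dx + x dy = d(xy): along t -> (t, t^2) from (0,0) to (1,1)
# the integral is (xy) at the end minus (xy) at the start, i.e. 1
val = integrate_oneform(lambda x, y: y, lambda x, y: x, lambda t: (t, t * t))
```

Here the first argument pair plays the role of the form, and the final dt-sum is the genuine variable of integration, keeping the two "dx"s separate.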



#8
Jul26-06, 05:59 PM

Sci Advisor
HW Helper
P: 2,886

So, from this point of view, one should not think of E and D as being simply proportional to each other; there is truly a deep difference. To do E&M on a curved manifold, for example, the simple proportionality relation that physicists are used to would break down then? Or could one see this even on a flat manifold, by going to some arbitrary curvilinear coordinate system? (I know this would be answered by looking at how the Hodge dual depends on a change of coordinate system.) *This* would be the kind of insight that would make the differential form approach to E&M much more interesting!

If one defines d/dx as an operator, how does one find how it acts on anything? And if the only way to find an explicit result is to go through the limit definition, then isn't this tantamount to saying that the definition of the operator *is* the limit?

Thank you very much for your comments. They are very appreciated.

Regards,
Patrick



#9
Jul26-06, 06:57 PM

P: 406

However, in certain materials D is not linearly proportional to E, and may not in fact have the same direction. And of course, if the electric permittivity is not constant, for example if the range of your problem encompasses different materials, then the proportionality constant would not be strictly correct either. In any case, the flux must only be evaluated through surfaces, and the field only along curves. You can get away with this using vector calculus if you are very careful, or if it's not vital to the problem, but differential forms make this more explicit. D is also known as the polarization density and B as the magnetic flux density, if that's any help. These are densities per unit area, and so must be "summed", or integrated, over areas to get the overall flux through that area. If you go back and examine the SI units of each of the quantities E, D, H, B, [tex]\rho[/tex], J, etc., you will see which are zero-, one-, two- and three-forms, simply by noting which are expressed in metres, metres squared, metres cubed, and of course metres^0 (no metres in the units).

[tex]\frac{d}{dx}f(x) = \lim_{\Delta x \rightarrow 0}\frac{ f(x+ \Delta x) - f(x)}{\Delta x}[/tex]

But please remember that the dx in d/dx is not at all the same thing as the dx in [tex]\int dx[/tex]. Of course, when people work with differential equations such as dy/dx = g(x) becoming [tex]\int dy = \int g(x) dx[/tex], often the dx is treated like a variable and appears to be the "same thing", but in reality the two perform totally different operations. This distinction is often hidden or unstated, but, for example, you would never do the following: ln(dy/dx) = ln(dy) - ln(dx). I think you would agree instinctively that this is somehow wrong. Another example might be that [tex]\frac{d^2 y}{dx^2} = g(x) [/tex] and [tex]\left(\frac{dy}{dx}\right)^2 = g(x)[/tex] are two very different equations. And of course, usually, [tex]V(x) = dx/dx = 1[/tex]



#10
Jul27-06, 12:28 AM

Emeritus
Sci Advisor
PF Gold
P: 16,101

The point of the formalism is to provide rigorously defined tools that can be used to rigorously achieve the same informal purposes we use infinitesimals for. Because they are intended for the same purposes, they will of course have similarities... but presumably, if you can modify your thinking to pass from the informal infinitesimal approach to more rigorous equivalents, you will be better off.

For example, whenever you think about "infinitesimals", try to mentally substitute the notion of "tangent vectors". So when you would normally think about an "infinitesimal neighborhood around P"... try thinking instead about the "tangent space at P". Then, once you've done that, you no longer have to think about a cotangent vector as something that tells you how "big" an infinitesimal displacement is... you can now think of it as a linear functional on the tangent space.

In fact, I'm rather fond of using the notation P+e to denote the tangent vector e based at the point P. With this notation, we can actually write things like

f(P+e) = f(P) + f'(P) e

and be perfectly rigorous. This is even better than infinitesimals: that is an actual equality! If we were using infinitesimals, it is only approximate, and we have to wave our hands and argue that the error is insignificantly small.

As for d/dx, it can be characterized as an operator that:

(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + (df/dx) g
(4) satisfies dx/dx = 1

and I think that's all you need.
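The P+e notation has a concrete computational counterpart in dual numbers, where a symbol e with e^2 = 0 is adjoined to the reals; this is also the idea behind forward-mode automatic differentiation. A sketch in Python (the class and all names are mine):

```python
class Dual:
    """Numbers a + b*e with e*e = 0, so that f(P + e) = f(P) + f'(P) e exactly."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value part, tangent part
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b e)(c + d e) = ac + (ad + bc) e, since e*e = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
    return x * x * x + 2 * x

r = f(Dual(3.0, 1.0))   # evaluate at P = 3 along the tangent vector e
# r.a == f(3) == 33.0 and r.b == f'(3) == 29.0, with no error term
```

f(P + e) really does come back as f(P) + f'(P)e on the nose, matching the "actual equality" point above.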



#11
Jul27-06, 12:19 PM

Sci Advisor
HW Helper
P: 2,886

I also realize that if I were only doing pure maths, that would be very easy for me to do. I would just think in terms of operators and their properties and so on. But the difficulty is in now trying to connect this to years of physics training. I am not closed-minded about seeing things in a new light, and I have a strong desire to move beyond the simple-minded picture of maths I have from years of physics training. But the difficulty is in re-expressing everything I know and have worked with over the years in terms of this new language.

For example, just to mention an elementary example, almost at the high school level: given the expression for the E field produced by a point charge, what is the E field at a distance "d" from an infinitely long line of charge with linear charge density [itex] \lambda [/itex]? The physicist's approach is to separate the line into tiny sections of "infinitesimal" length dl, write the expression for the E field produced by each small section, making the approximation that all the charge in this section, [itex] \lambda\, dl [/itex], can be assumed to be located at the center (say), and sum the contributions from all the sections. What would I mean by "infinitesimal" in that context? Well, I imagine making the dl smaller and smaller until the sum converges to some value. In some sense, I realize that I always mean a "practical infinitesimal", so maybe that's why infinitesimals don't bother me. But I am open to enlarging my views on this, if my views are incorrect at some level. But then my first question is obviously: what is the correct (i.e. mathematically sound) way to do the above calculation? How would a mathematician go about finding the E field of the infinite line of charge starting from the expression for a point charge? I know that the expression would end up being the same, but what would be the interpretation of a mathematician?

But then how would you show that dsin(x)/dx = cos(x)? It seems that the above axioms can only be applied to obtain explicit results for powers of x! Of course, maybe the answer is that one must apply the axioms to the Taylor expansion of sin(x). But how does one define the Taylor expansion of sin(x)?? Usually, it's through derivatives, but here this leads to a vicious circle.

Thank you for your comments, it's very much appreciated.

Patrick
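For what it's worth, the physicist's recipe above is easy to carry out numerically, and its convergence can be watched directly. A Python sketch (the names `E_line` and `E_infinite` are mine; the standard SI value is assumed for the vacuum permittivity):

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def E_line(lam, d, L, n):
    """Perpendicular E field at distance d from the midpoint of a uniformly
    charged line of length L, cut into n segments treated as point charges."""
    dl = L / n
    E = 0.0
    for i in range(n):
        z = -L / 2 + (i + 0.5) * dl        # segment midpoint along the line
        r2 = d * d + z * z
        # Coulomb field of the charge lam*dl, keeping only the
        # perpendicular component (the parallel parts cancel by symmetry)
        E += lam * dl / (4 * math.pi * EPS0 * r2) * (d / math.sqrt(r2))
    return E

def E_infinite(lam, d):
    """Textbook result for an infinite line: lambda / (2 pi eps0 d)."""
    return lam / (2 * math.pi * EPS0 * d)

# E_line(1e-9, 0.1, L=100.0, n=50000) converges to E_infinite(1e-9, 0.1)
# as the segments shrink and the line lengthens
```

The sum settles down once dl is much smaller than d, which is exactly the "practical infinitesimal" in action.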



#12
Jul27-06, 07:26 PM

Emeritus
Sci Advisor
PF Gold
P: 16,101

Another way to think about it is this. You know the electrostatic field due to a point charge. You know if you add charges, you simply add the fields. The limit of this "operation" to an arbitrary charge distribution is simply a convolution  i.e. an integral. 



#13
Jul27-06, 08:19 PM

Sci Advisor
HW Helper
P: 2,886

My problem is that if I go back to an integral like [itex] \int x\, dx [/itex], or *any* integral, I still think of it in exactly the same way: as breaking it up into small pieces and taking the limit until the sum converges. But then I am told "no, no, you should not think of the dx there as being something small that is summed over; it is an operator (in standard analysis) or a differential form (in diff geometry)". So what is wrong with thinking of all integrals as being broken into a large number of small pieces and summed over? That has worked for all situations I have encountered so far, including doing calculations in thermodynamics, E&M, relativity, etc., etc. And that works as well for cases in which the integrand is not exact, so that the path matters. I just think of the differential (dx, dV, dq or whatever) as being a very small element of length, volume, charge, whatever. Small enough that the sum converges. And then I sum over.

Then the question that someone like me obviously encounters when learning about differential forms is "why?" I mean, is it just a neat trick to unify vector calculus identities? Maybe, and that's fine. But the feeling I get is that even when I reach the point of actually carrying out the integration, it is wrong to revert to thinking of the dx (say) as a small (infinitesimal) element. But that's the only way I know of actually carrying out an integral! Especially if the integrand is not exact!

Regards,
Patrick



#14
Jul27-06, 10:05 PM

Emeritus
Sci Advisor
PF Gold
P: 16,101

The thing to remember is that all of these things you do to characterize familiar operations like sines, logarithms, and derivatives work in both directions. For example, from the 4 axioms I provided, you can derive differential approximation (and Taylor series!), and then conclude that derivatives can be computed with limits.
If you're curious: if you defined the trig functions as power series, then you would probably wind up defining angle measure via the inverse trig functions, from which their geometric interpretation follows trivially. You could even define two sine functions, one geometrically and one analytically, and then eventually prove they are equal.

Because you lock yourself into that way of thinking. It keeps you from looking at a problem in a way that might be conceptually simpler. And it doesn't work for problems that don't have a density interpretation. One good example is the exterior derivative. It's an obvious thing to do from a purely algebraic perspective. It has a wonderful geometric interpretation a la Stokes' theorem. But I'd be at a total loss if you asked me to describe it pointwise.



#15
Jul27-06, 10:06 PM

Sci Advisor
HW Helper
P: 2,886

become completely unnecessary?? A mathematician would look at those proofs and consider them completely unnecessary? Or plain wrong? And if these proofs are unnecessary, do they work by "chance"? Or are they considered as complete and "convincing" to mathematicians as they are to physicists?

This is again the problem I always find myself facing. Mathematicians have a language which is different, but which at some level *must* be related to the physicist's approach. Yet it is different enough that it feels like there's the physicist's approach over here, and the mathematician's approach over there, and it's really hard to get anyone even interested in bridging the gap. That's what I am hoping to find help with here.

Patrick



#16
Jul27-06, 11:05 PM

Sci Advisor
HW Helper
P: 2,886

Consider the integration of something which is not an exact form, now. Let's say the integral of y dx over some given path. I have no problem defining this by breaking the path into "infinitesimals" and adding the contributions over the path. This is in no way more difficult conceptually than any other integral. But how does one think about doing the integral using the language of differential forms if the integrand cannot be written as d(something)?? How does one get the answer?

Thanks!!
Patrick
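The "break into small pieces and sum" recipe works perfectly well numerically for a non-exact form, and it also shows why the result is path-dependent. A Python sketch (the names are mine):

```python
def integrate_y_dx(path, n=10000):
    """Integral of y dx along path(t) = (x(t), y(t)), t in [0,1], as a
    midpoint sum of small pieces y * (delta x)."""
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        x0, _ = path(t - h / 2)
        x1, _ = path(t + h / 2)
        _, y = path(t)
        total += y * (x1 - x0)
    return total

straight = lambda t: (t, t)       # (0,0) -> (1,1) along y = x
parabola = lambda t: (t, t * t)   # (0,0) -> (1,1) along y = x^2
# integrate_y_dx(straight) -> 1/2 but integrate_y_dx(parabola) -> 1/3:
# same endpoints, different answers, so y dx cannot be d(anything)
```

The form is integrated over a parametrized curve piece by piece; the non-exactness shows up precisely as the dependence on which curve was chosen.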



#17
Jul28-06, 12:44 AM

Emeritus
Sci Advisor
PF Gold
P: 16,101

Now that I've thought it over more, I realize what I've missed is that I should postulate the mean value theorem as an axiom. So... ' is an operator on a certain class of continuous functions satisfying:

(1) f' is continuous
(2) If a < b, there exists a c in (a, b) such that: f(b) - f(a) = f'(c) (b - a)

(The synthetic treatments I've seen for integration use the mean value theorem for integrals as an axiom; that's why I think I need it here.) I suppose with these axioms it's somewhat more clear how to deduce the traditional definition of the derivative. Blah, now I'm going to be spending the next few days trying to figure out how to do it without postulating the MVT. I know the rules I mentioned before give you the derivatives of anything you can define algebraically... but I haven't yet figured out what continuity condition I need to extend it to arbitrary differentiable functions.

But that's not the point I was trying to make. We generally aren't interested in breaking things into tiny pieces so that we can sum them, and the like. That's just one means towards computing the thing in which we're really interested. And there are other means. For example, Eudoxus's method of exhaustion. Fixating on integrals as sums of tiny pieces distracts you from focusing on the things that are really interesting, like what the integral actually computes! IMHO, it's much more important to focus on what something does than on what it is. (Especially since there are so many different, yet equivalent, ways to define what it "is.")



#18
Jul28-06, 12:53 AM

Emeritus
Sci Advisor
PF Gold
P: 16,101

In the punctured plane (that is, there's a hole at the origin), there is a differential form w that measures angular distance about the origin. This is not an exact form. So how would I think about integrating this form along a curve? Simple: I compute the angular displacement between the starting and ending points, and adjust it as necessary by counting how many times the curve loops around the origin. That's much simpler than trying to imagine breaking our curve up into little tiny pieces, and then adding up [itex](-y \, \Delta x + x \, \Delta y) / (x^2 + y^2)[/itex] over all of them. But, unless I were tasked with actually computing something, I wouldn't even put that much effort into thinking about the integral. All I care about is that "integrating this form gives me angular distance about the origin", and I wouldn't think about it any further.
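That form can, of course, still be integrated the "tiny pieces" way, and doing so numerically reproduces the 2-pi-per-loop answer. A Python sketch (the name `angular_integral` is mine):

```python
import math

def angular_integral(path, n=20000):
    """Integral of (-y dx + x dy) / (x^2 + y^2) along path(t), t in [0,1]:
    the 'angle form' on the punctured plane, summed over small pieces."""
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        x0, y0 = path(t - h / 2)
        x1, y1 = path(t + h / 2)
        x, y = path(t)
        total += (-y * (x1 - x0) + x * (y1 - y0)) / (x * x + y * y)
    return total

# a unit circle traversed twice: 2*pi of angular distance per loop,
# so the integral comes out to about 4*pi, even though the form is
# closed but not exact on the punctured plane
two_loops = lambda t: (math.cos(4 * math.pi * t), math.sin(4 * math.pi * t))
```

The brute-force sum and the "count the loops" shortcut agree; the shortcut is just the conceptually simpler description of the same number.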

