Do physics books butcher the math?

Summary:
The discussion centers on the differences between the mathematical rigor expected in mathematics versus the practical applications in physics. Participants argue that while physicists often simplify complex mathematics for predictive accuracy, this can lead to a lack of rigorous understanding. The success of theories like quantum electrodynamics (QED) is highlighted as evidence that mathematical soundness is not always necessary for effective physical theories. However, there is a philosophical desire for mathematical rigor to ensure a complete understanding of theories. The conversation concludes with skepticism about the feasibility of achieving rigorous formulations for quantum field theory (QFT) due to the inherent complexities of high-energy phenomena.
  • #151
disregardthat said:
How on Earth would one ever arrive at the intricacies of the current mathematics used in physics without continually backing the process up rigorously? And that's not to mention how you would even get started without the growing mathematical generalizations which inspire and allow new ideas to form.

Exactly my point. I don't think there's much more to be said.
 
  • #152
I'm not saying that you don't need arguments to support your propositions; that is a caricature of what I am saying, as a careful reading of my statements shows. What I am stating is that such levels of rigor are irrelevant and unnecessary.

Finite element analysis was invented/developed by the following individuals (according to Wikipedia):
Hrennikoff: Civil engineer
Courant: Applied mathematician
Feng: Electrical engineer/mathematician
Rayleigh: Physicist
Ritz: Physicist
Galerkin: Engineer
Argyris: Civil Engineer
Clough: Structural engineer
Zienkiewicz: Civil engineer
Hinton: Civil engineer
Ciarlet: Pure mathematician

Hrennikoff and Courant built on the work of Rayleigh, Galerkin and Ritz from the turn of the century. It wasn't until roughly 50-60 years later (depending on where you date the method's beginning) that it was given a rigorous formulation by Strang and Fix.

Later today I will explore what exactly Courant and Ciarlet contributed to the process; did they use powerful theorems from the pure math department, or were they operating in the same way as the civil engineers and the physicists? If it is the former, and if the former was clearly necessary for progress in the field, then I contend that my mind will change. Since you are an expert on numerical methods for PDEs, rubi, do you have a quick answer to this question?
 
  • #153
Arsenic&Lace said:
Intriguing, my own opinion of the field wasn't based on experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt that it was purely academic and impractical.

Kitaev's topological quantum computation is probably impractical - but are the theorists there really not excited? Microsoft's quantum computing group has quite a few quantum topologists. Maybe it's IBM-Microsoft rivalry :) http://research.microsoft.com/en-US/labs/stationq/researchers.aspx
http://arxiv.org/abs/0707.1889
http://arxiv.org/abs/1003.2856
http://arxiv.org/abs/1307.4403

In the Microsoft group, Nayak's work is physicsy enough that I can understand the gist of it.
 
  • #154
Arsenic&Lace said:
I'm not saying that you don't need arguments to support your propositions; that is a caricature of what I am saying, as a careful reading of my statements shows. What I am stating is that such levels of rigor are irrelevant and unnecessary.
You seem to be unable to understand my reasoning, so I will repeat it one more time:

1. Mathematics is developed by first having a rough idea about what could end up being a theorem.
2. Only those ideas that can be proved to be working survive.
So if you want to have a point, you would have to prove to me that no proposed method for solving PDEs has ever been withdrawn.

It is totally irrelevant whether the person who came up with the idea had the full general rigorous theory in mind right from the start. Mathematical methods are developed and generalized over years. Even if you have a heuristic method for solving PDEs, it's necessary to know whether it really converges, how fast it converges (computing power is limited), and whether it is numerically stable (and so on). Show me one such proof that doesn't use rigorous mathematics. You won't find one. All these properties are absolutely essential for applications in engineering. You will be fired instantly if you run unreliable, slowly converging, numerically unstable algorithms on your company's supercomputing cluster, because you're wasting its resources and money.

So here's my concrete challenge: show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.
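Stability in particular is easy to demonstrate without any PDE machinery. Here's a minimal Python sketch (a standard catastrophic-cancellation example, nothing specific to FEM): two algebraically identical formulas for ##\sqrt{x+1}-\sqrt{x}##, one of which loses nearly all of its accuracy in floating point for large x.

```python
import math

def naive(x):
    # Direct subtraction: for large x the two square roots agree in
    # almost every digit, so the subtraction cancels nearly all of
    # the significand (catastrophic cancellation).
    return math.sqrt(x + 1) - math.sqrt(x)

def stable(x):
    # Algebraically identical rewrite that avoids the subtraction
    # entirely, so no cancellation occurs.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(naive(x), stable(x))  # the naive result carries a large relative error
```

No amount of benchmarking tells you which of these two to trust; only the analysis does.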


--
Edit: I want to point out that this is the best quote from the thread:
disregardthat said:
What you're suggesting is about as ridiculous as saying that no evidence is ever necessary in physics, because our current theories seem to work pretty well without it.
 
  • #155
Arsenic&Lace said:
Intriguing, my own opinion of the field wasn't based on experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt that it was purely academic and impractical. Has anyone attempted to recast it in a more physical light, rather than in terms of formal, obtuse topology? Or is this inefficient/impossible? It was quite a while ago, but Feynman's contributions to our understanding of superfluid helium came from taking a very mathematically convoluted theory from the condensed matter group and trying to make it as simple as possible, in so doing obtaining everything they had and more. But that might not be the case here.

I am not saying that this demonstrates that rigor is always useless, but I think this debate would end extremely quickly if somebody could find a specific example of where, had it not been for formal mathematical rigor, progress in science or engineering would grind to a halt or follow false paths. Grand claims have been made that theories in physics would be a mess without rigor, but no actual evidence has been presented that this is the case. Indeed, I can even provide evidence to the contrary, given that QFT is still not that mathematically rigorous of a theory (to my knowledge).

In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from their interest from the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed, and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but of a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self-consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure how you could fail to realize this. I am curious as to where you are in your physics journey. Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
 
  • #156
rubi said:
So here's my concrete challenge: show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.

I mean, I can implement the algorithm in my language of choice (Python or Chicken Scheme if I can get away with it, Fortran or C/C++ if I prefer to suffer/need the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e. if the Pointy-Haired Boss wants it to run in <20 minutes or something), I need to consider implementing a different algorithm or attempting to optimize my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.

If a mathematician hands me *closed form # of steps expression* that's all well and good, but probably useless, given that different architectures, hardware, and languages will muddle any attempt to extract useful information about how long it will take to obtain the precision I need.

If we're at the drawing board and he hands me *expression1* and *expression2* for two different algorithms, it would still almost certainly be easier to just implement algorithms 1 and 2 and then benchmark them, assuming the first one I tried wasn't quick enough.

In my experience these expressions don't exist. I implemented a Monte Carlo approach to computing perturbation expansions for three-body decays in QED (a triplet pair production reaction, to be precise) last year from scratch, and the literature was not very helpful. I've implemented many different algorithms for complex networks and for solving SDEs derived from solvent simulations around proteins, and apart from complexity classes in the CS papers, we're stuck with straight-up brute-force benchmarks.

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
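To make concrete what I mean by brute-force benchmarking, here's a minimal sketch using only the standard library (the two summation routines are just placeholder workloads, not any real algorithm from my work):

```python
import timeit

def algo_a(n=10_000):
    # Placeholder workload A: explicit accumulation loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def algo_b(n=10_000):
    # Placeholder workload B: same result via a builtin reduction,
    # typically faster in CPython.
    return sum(i * i for i in range(n))

# Wall-clock benchmark: run each candidate and keep whichever one
# fits the time budget.
t_a = timeit.timeit(algo_a, number=100)
t_b = timeit.timeit(algo_b, number=100)
print(f"algo_a: {t_a:.3f}s  algo_b: {t_b:.3f}s")
```

That's the whole workflow: measure real time on the real hardware and pick the winner.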
 
  • #157
ZombieFeynman said:
In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from their interest from the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed, and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but of a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self-consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure how you could fail to realize this. I am curious as to where you are in your physics journey. Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
For the first paragraph:
Topological matter is very cutting-edge stuff. It may be that, much as Feynman made considerable advances by looking for the simplest possible theory, advances in the present field can be made with a similar philosophy. I believe the mathematics should be as complex as it needs to be. If it needs to be as obtuse and difficult as algebraic topology, then so be it. But the jury is probably still out on this point.

For the second:
I am an undergraduate who has been performing (according to my advisor(s), anyway) PhD-level research since the summer of my freshman year.
 
  • #158
Arsenic&Lace said:
I mean, I can implement the algorithm in my language of choice (Python or Chicken Scheme if I can get away with it, Fortran or C/C++ if I prefer to suffer/need the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e. if the Pointy-Haired Boss wants it to run in <20 minutes or something), I need to consider implementing a different algorithm or attempting to optimize my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm on every concrete situation before they use it. That would be pointless. You want to know in advance which method is better suited to your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
Well, it answers the question in the sense that it tells me that you have no idea what you are talking about, if that is what you wanted to know.
 
  • #159
rubi said:
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm on every concrete situation before they use it. That would be pointless. You want to know in advance which method is better suited to your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
 
  • #160
Arsenic&Lace said:
Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODEs. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.
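A quick sketch of this, using forward Euler for ##\ddot{x} = -\omega^2 x##: the discrete "energy" ##v^2 + \omega^2 x^2## is multiplied by ##1+\omega^2\Delta t^2## every step, so at the same ##\Delta t## the high-frequency run blows up far faster.

```python
def euler_oscillator(omega, dt=0.01, steps=1000):
    # Forward Euler for x'' = -omega^2 x, starting at x = 1, v = 0.
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * omega**2 * x
    # Energy-like quantity, exactly conserved by the true solution.
    return v * v + (omega * x) ** 2

# Same step size, different frequencies: the energy grows by a factor
# of (1 + omega^2 dt^2) per step, so the growth rates differ wildly.
low = euler_oscillator(omega=1.0)    # grows ~10% over the whole run
high = euler_oscillator(omega=50.0)  # grows by dozens of orders of magnitude
```

And it is exactly the rigorous error analysis, not a benchmark, that tells you in advance which ##\Delta t## is safe for a given ##\omega##.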
 
  • #161
rubi said:
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODEs. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.

Right. Or other things to consider: who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed-point algorithms will not always converge, and even when they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the very solution you get can be trusted? There are many subtle caveats when you have ill-conditioned systems. How would you even know what ill-conditioned means without theory?
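The Newton-Raphson failure is easy to exhibit; here's the textbook example ##f(x) = x^3 - 2x + 2## with starting point ##x_0 = 0##, which cycles forever instead of approaching the real root near ##x \approx -1.77##:

```python
def newton_step(x):
    # One Newton-Raphson step for f(x) = x^3 - 2x + 2.
    f = x**3 - 2.0 * x + 2.0
    fp = 3.0 * x**2 - 2.0
    return x - f / fp

# Starting from x0 = 0 the iterates cycle 0 -> 1 -> 0 -> 1 -> ...
# forever, never approaching the actual real root.
xs = [0.0]
for _ in range(6):
    xs.append(newton_step(xs[-1]))
```

No benchmark will ever flag this; the iteration runs happily forever. Only the convergence theory (basins of attraction) tells you which starting points are safe.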
 
  • #162
micromass said:
Right. Or other things to consider: who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed-point algorithms will not always converge, and even when they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the very solution you get can be trusted? There are many subtle caveats when you have ill-conditioned systems. How would you even know what ill-conditioned means without theory?

There are many properties of matrices and linear operators that physicists use without a second thought precisely because conscientious mathematicians have meticulously proven many things about them. One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Frankly I think a mixture of naivete and stubbornness is what keeps Arsenic and Lace replying.
 
  • #163
ZombieFeynman said:
One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Awesome, I'll be sure to check out this book since it looks quite good.
 
  • #164
micromass said:
Awesome, I'll be sure to check out this book since it looks quite good.

I think it's the best example of a book which can be somewhat rigorous and yet still be firmly grounded in physics.
 
  • #165
ZombieFeynman said:
The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest).

What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.
 
  • #166
atyy said:
What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.

As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly in transition metal oxide systems.
 
  • #167
I wonder if a certain someone will change his mind after literally every example of pure mathematics being used in the sciences is misunderstood by him.
 
  • #168
ZombieFeynman said:
As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly in transition metal oxide systems.

I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?
 
  • #169
atyy said:
I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?

At the risk of giving away too many bits of personal information than I'd prefer to, those papers are in very close proximity to my interests.
 
  • #170
ZombieFeynman said:
At the risk of giving away too many bits of personal information than I'd prefer to, those papers are in very close proximity to my interests.

Ah ha ha, that's cool:)
 
  • #171
I see none of you actually work in computational fields. Earlier in the thread, rubi made statements about how industries somehow rely on brave mathematicians to analyze their algorithms ahead of time, giving them theoretical information about convergence so that they can pick the most accurate and timely tools, miraculously ahead of actually using them, without needing to rely on benchmarks. Without surveying the entirety of all industrial output that relies on computation, all I can say is that in my experience this is grossly inaccurate.

The only industry I am intimately familiar with in this regard is drug design, where there is a profusion of methods for estimating binding free energies. As I stated previously, the only way to know which is faster/more accurate, and under what circumstances, is brute-force benchmarking, because the problem is simply too complex. The same is true for finite element analysis, from what I gathered; there is a dominant software package called ANSYS, but it implements multiple convergence algorithms such as hp-adaptivity or XFEM, and other packages which implement the same algorithms actually don't perform as well. So in both an additional case and in the case where rubi declared I "didn't know what I was talking about," it is not possible, merely by studying the structure of the algorithms with powerful mathematics, to make useful predictions about performance. Of course, he is right to say that they don't have the time to benchmark every option on the table, but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.

ZombieFeynman said:
One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.
To learn that Hilbert spaces are complete (a completely irrelevant fact)? To prove Parseval's theorem (stated in 1799... put on rigorous foundations more than a century later!)? To learn how to force the delta function into the function-space framework using the convoluted machinery of distributions (in spite of the fact that we can use it perfectly fine for its intended purpose without ever worrying about this)?

Here's a challenge: find some actual evidence that (a) the completeness of Hilbert spaces posed a serious question to physicists at some point, (b) doubts about Parseval's theorem posed a serious question to physicists/engineers at some point, and (c) the "inconsistencies" of the delta function resulted in spurious results or prevented physicists from actually advancing physics.
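Case in point: if I want to trust the discrete Parseval identity, I can just check it numerically in a few lines instead of wading through a rigorous proof (a naive ##O(N^2)## DFT, standard library only):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform:
    # X_k = sum_n x_n * exp(-2*pi*i*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# Discrete Parseval identity: sum |x_n|^2 == (1/N) * sum |X_k|^2
x = [1.0, -2.0, 3.0, 0.5]
X = dft(x)
time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / len(x)
```

The two energies agree to machine precision, which is all the reassurance I've ever needed in practice.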
 
  • #172
Arsenic&Lace said:
I see none of you actually work in computational fields.

Neither do you; you're just an undergrad. Do you really claim to have such a comprehensive grasp of all computational fields that you can say what happens and what doesn't?

it is not possible, merely by studying the structure of the algorithms utilizing powerful mathematics, to make useful predictions about performance

Are you actually serious or just trolling at this point?

To learn that Hilbert spaces are complete (a completely irrelevant fact)?

I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.

Here's a challenge: find some actual evidence that (a) the completeness of Hilbert spaces posed a serious question to physicists at some point, (b) doubts about Parseval's theorem posed a serious question to physicists/engineers at some point, and (c) the "inconsistencies" of the delta function resulted in spurious results or prevented physicists from actually advancing physics.

Ah, the classic http://en.wikipedia.org/wiki/Straw_man
 
  • #173
micromass said:
Are you actually serious or just trolling at this point?
I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.
Ah, the classic http://en.wikipedia.org/wiki/Straw_man
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.

The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Nope, not a straw man either, or at least not an intentional one. In general, ZF is stating that the rigorous details found in Stone and Goldbart represent useful mathematical definitions or proven theorems for which the level of rigor presented in S&G is necessary; I contend that this is not the case. For instance, the grotesquely convoluted discussion of the Dirac delta function in the second chapter serves no useful purpose for... anything, really.
 
  • #174
Arsenic&Lace said:
Earlier in the thread, rubi made statements about how industries somehow rely on brave mathematicians to analyze their algorithms ahead of time to give them theoretical information about convergence so that they can pick the most accurate and timely tools, miraculously ahead of actually using them, without needing to rely on benchmarks.
Software like ANSYS just implements algorithms that have been worked out by mathematicians. Of course it relies on rigorous results proved by mathematicians; such companies even employ mathematicians. You have to be blind not to see this. Additionally, of course they need to benchmark their software; software development consists of more than just implementing algorithms. The greatest performance gain, however, comes from using efficient algorithms. If you use an algorithm of complexity ##O(n^2)## instead of ##O(\log(n))##, then you can optimize as much as you want; it will always be inferior.
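The complexity point can be made by counting operations rather than wall-clock time; a toy sketch comparing a linear scan with binary search on the same sorted data (the step counts come straight from the asymptotic analysis and hold on any hardware):

```python
def linear_search(data, target):
    # O(n): scan items one by one; returns (index, comparisons made).
    steps = 0
    for i, value in enumerate(data):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(data, target):
    # O(log n): halve the sorted search range on every iteration.
    lo, hi, steps = 0, len(data) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            return mid, steps
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
_, lin_steps = linear_search(data, 999_999)   # 1,000,000 comparisons
_, bin_steps = binary_search(data, 999_999)   # about 20 comparisons
```

A factor of 50,000 in operation count swamps any constant-factor optimization you could ever benchmark your way into.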

...and in the case rubi declared I "didn't know what I was talking about"...
You couldn't even come up with the obvious harmonic oscillator counterexample on your own. I still think you have absolutely no clue what you are talking about.

...but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.
I never said that they rely purely on mathematics. However, they rely heavily on it.

https://www.youtube.com/watch?v=bIfzyYT1Oho
 
  • #175
Everyone draw your breath in slowly. Ease in, count to 5. Good.

Now exhale slowly, until you feel all the air escape.
________________________________________________

Arsenic, abandon all of your minuscule points of argument. We're debating a broader topic than what you're meandering about. You have actual physicists arguing with you and denying what you're saying. You have actual mathematicians arguing with you and denying what you're saying.

All I ask of you now is to reiterate what exactly it is you're arguing against. Because I feel you know it's a lost cause, yet you find your only redemption in asking ever more obscure questions and making unreasonable demands of others, until you'll eventually be asking us to explain how the ontological topological heuristics of a non-orthogonal Cauchy sequence permeating 3-dimensionally upon a four-sided Möbius strip had any relevance or pertinence to the making of Newton's Laws of Motion.
 
  • #176
Arsenic&Lace said:
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.
You seem too fixated on how your group does things. In my group (computational) we do make use of rigorous results, such as the guarantee that metadynamics converges asymptotically, or the simple fact that certain algorithms scale like O(n^a). We don't just blindly use any numerical solver; we pick the ones that are known to work better.
 
  • #177
AnTiFreeze3 said:
Arsenic, abandon all of your miniscule points of argument. We're debating a broader topic than what you're meandering about. You have actual physicists arguing with, and denying what you're saying. You have actual mathematicians arguing with, and denying what you're saying.

I am also in disagreement with A&L, however, I don't think an argument from authority is a good way to proceed.
 
  • #178
ZombieFeynman said:
I am also in disagreement with A&L, however, I don't think an argument from authority is a good way to proceed.

Not all arguments from authority are fallacious arguments. When the authority is a relevant authority, it's a fairly good argument overall. And in this case, the authority in question is about as relevant as you can get.
 
  • #179
Arsenic&Lace said:
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.

The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Nope, not a straw man either, or at least not an intentional one. In general, ZF is stating that rigorous details found in Stone and Goldbart represent useful mathematical definitions or proven theorems for which the level of rigor presented in S&G is necessitated; I contend that this is not the case. For instance, the grotesquely convoluted discussion of the Dirac delta function in the second chapter serves no useful purpose for... anything, really.

Most of your posts seem to read "I haven't had to use this and don't think I will have to, therefore no one does!"

Char. Limit said:
Not all arguments from authority are fallacious arguments. When the authority is a relevant authority, it's a fairly good argument overall. And in this case, the authority in question is about as relevant as you can get.

I'm not saying it's fallacious, I simply think that it's not needed here.
 
  • #180
ZombieFeynman said:
I am also in disagreement with A&L, however, I don't think an argument from authority is a good way to proceed.

Fair enough. But I do think in some respects that an undergraduate ought to understand that those with more research experience, at the graduate level and beyond, likely know what they're talking about, and rather than ignoring what they say and pursuing vapid points, he ought to take it as evidence that he may be wrong.
 
