Do physics books butcher the math?

AI Thread Summary
The discussion centers on the difference between the rigor expected in mathematics and the practical, application-driven standards of physics. Participants argue that while physicists often simplify complex mathematics for predictive accuracy, this can come at the cost of rigorous understanding. The success of theories like quantum electrodynamics (QED) is cited as evidence that mathematical soundness is not always necessary for an effective physical theory. Still, there is a philosophical desire for rigor as a route to a complete understanding of a theory. The conversation closes with skepticism about the feasibility of a rigorous formulation of quantum field theory (QFT), given the inherent complexities of high-energy phenomena.
  • #151
disregardthat said:
How on Earth would one ever arrive at the intricacies of the current mathematics used in physics without continually backing the process up rigorously? And that's not to mention how you would even get started without the growing mathematical generalizations which inspire and allow new ideas to form.

Exactly my point. I don't think there's much more to be said.
 
  • #152
I'm not saying that you don't need arguments to support your propositions; that is a caricature of my position, one that is hard to sustain if you read my statements carefully. What I am saying is that the levels of rigor are irrelevant and unnecessary.

Finite element analysis was invented/developed by the following individuals (according to Wikipedia):
Hrennikoff: Civil engineer
Courant: Applied mathematician
Feng: Electrical engineer/mathematician
Rayleigh: Physicist
Ritz: Physicist
Galerkin: Engineer
Argyris: Civil Engineer
Clough: Structural engineer
Zienkiewicz: Civil engineer
Hinton: Civil engineer
Ciarlet: Pure mathematician

Hrennikoff and Courant built on the work of Rayleigh, Galerkin, and Ritz from the turn of the century. It wasn't until roughly 50-60 years later (depending on where you say the method began) that it was given a rigorous formulation by Strang and Fix.

Later today I will explore what exactly Courant and Ciarlet contributed to the process: did they use powerful theorems from the pure math department, or were they operating in the same way as the civil engineers and the physicists? If it is the former, and if the former was clearly necessary for progress in the field, then I grant that my mind will change. Since you are an expert on numerical methods for PDEs, rubi, do you have a quick answer to this question?
 
  • #153
Arsenic&Lace said:
Intriguing. My own opinion of the field wasn't based on experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt it was purely academic and impractical.

Kitaev's topological quantum computation is probably impractical - but are the theorists there really not excited? Microsoft's quantum computing group has quite a few quantum topologists. Maybe it's IBM-Microsoft rivalry :) http://research.microsoft.com/en-US/labs/stationq/researchers.aspx
http://arxiv.org/abs/0707.1889
http://arxiv.org/abs/1003.2856
http://arxiv.org/abs/1307.4403

In the Microsoft group, Nayak's work is physicsy enough that I can understand the gist of it.
 
  • #154
Arsenic&Lace said:
I'm not saying that you don't need arguments to support your propositions; that is a caricature of my position, one that is hard to sustain if you read my statements carefully. What I am saying is that the levels of rigor are irrelevant and unnecessary.
You seem to be unable to understand my reasoning, so I will repeat it one more time:

1. Mathematics is developed by first having a rough idea about what could end up being a theorem.
2. Only those ideas that can be proved to be working survive.
So if you want to have a point, you would have to prove to me that no proposed method for solving PDEs has ever been withdrawn.

It is totally irrelevant whether the person who came up with the idea had the full general rigorous theory in mind right from the start. Mathematical methods are developed and generalized over years. Even if you have a heuristic method for solving PDEs, it's necessary to know whether it really converges, how fast it converges (computing power is limited), and whether it is numerically stable (and so on). Show me one such proof that doesn't use rigorous mathematics. You won't find one. All these properties are absolutely essential for applications in engineering. You will be fired instantly if you run unreliable, slowly converging, numerically unstable algorithms on your company's supercomputing cluster, because you're wasting its resources and money.

So here's my concrete challenge: show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.
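One way to make this distinction concrete (a minimal sketch, not from the thread; the test problem y' = y and the two steppers are illustrative choices): rigorous analysis predicts the order of convergence a method should exhibit, and a numerical experiment can then be checked against it; order 1 for explicit Euler, order 4 for classical RK4.

```python
# Measure the observed order of convergence of two standard ODE steppers
# on y' = y over [0, 1], whose exact value at t = 1 is e. Halving the
# step size should shrink the error by 2^p, where p is the method's order.
import math

def euler_step(f, t, y, h):
    # explicit (forward) Euler: first-order accurate
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # classical Runge-Kutta: fourth-order accurate
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, y0, T, n):
    t, y, h = 0.0, y0, T / n
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y  # y' = y; exact solution e^t
observed_order = {}
for step in (euler_step, rk4_step):
    e1 = abs(integrate(step, f, 1.0, 1.0, 100) - math.e)
    e2 = abs(integrate(step, f, 1.0, 1.0, 200) - math.e)
    observed_order[step.__name__] = math.log2(e1 / e2)
print(observed_order)
```

The benchmark only confirms the exponents on this one problem; it is the theory that says they generalize beyond it.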


--
Edit: I want to point out that this is the best quote from the thread:
disregardthat said:
What you're suggesting is about as ridiculous as saying that no evidence is ever necessary in physics, because our current theories seem to work pretty well without it.
 
  • #155
Arsenic&Lace said:
Intriguing. My own opinion of the field wasn't based on experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt it was purely academic and impractical. Has anyone attempted to recast it in a more physical light, rather than in terms of formal, obtuse topology? Or is this inefficient/impossible? It was quite a while ago, but Feynman's contributions to our understanding of superfluid helium came from taking a very mathematically convoluted theory from the condensed matter group and trying to make it as simple as possible, in so doing obtaining everything they had and more. But that might not be the case here.

I am not saying that this demonstrates that rigor is always useless, but I think this debate would end extremely quickly if somebody could find a specific example where, had it not been for formal mathematical rigor, progress in science or engineering would have ground to a halt or followed false paths. Grand claims have been made that theories in physics would be a mess without rigor, but no actual evidence has been presented that this is the case. Indeed, I can even provide evidence to the contrary, given that QFT is still not a particularly mathematically rigorous theory (to my knowledge).

In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from the interest in them from the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed, and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but of a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self-consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure how you could fail to realize this. I am curious as to where you are in your physics journey. Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
 
  • #156
rubi said:
So here's my concrete challenge: show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.

I mean, I can implement the algorithm in my language of choice (Python or Chicken Scheme if I can get away with it, Fortran or C/C++ if I prefer to suffer or need the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e., if Pointy Haired Boss wants it to run in <20 minutes or something), I need to consider implementing a different algorithm or optimizing my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.

If a mathematician hands me *closed form # of steps expression* that's all well and good, but probably useless given that different architectures, hardware, and languages will muddle any attempts to extract useful information about how long it will take to obtain the precision I need.

If we're at the drawing board and he hands me *expression1* and *expression2* for two different algorithms, it would still almost certainly be easier to just implement algorithms 1 and 2 and then benchmark them, assuming the first one I tried wasn't quick enough.

In my experience these expressions don't exist. I implemented a Monte Carlo approach to computing perturbation expansions for three-body decays in QED (a triplet pair production reaction, to be precise) last year from scratch, and the literature was not very helpful. I've implemented many different algorithms for complex networks and for solving SDEs derived from solvent simulations around proteins, and apart from complexity classes in the CS papers, we're stuck with straight-up brute-force benchmarks.

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
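A minimal sketch of the brute-force benchmarking workflow described above, using Python's standard timeit module; algo_a and algo_b are hypothetical stand-ins for two candidate implementations of the same computation:

```python
# Benchmark two interchangeable implementations on representative input,
# check they agree, and keep the faster one -- the workflow A&L describes.
import timeit

def algo_a(n):
    # candidate 1: straightforward loop summing 0..n-1
    total = 0
    for i in range(n):
        total += i
    return total

def algo_b(n):
    # candidate 2: closed-form expression for the same sum
    return n * (n - 1) // 2

assert algo_a(10_000) == algo_b(10_000)  # agree before comparing speed
t_a = timeit.timeit(lambda: algo_a(10_000), number=200)
t_b = timeit.timeit(lambda: algo_b(10_000), number=200)
winner = "algo_b" if t_b < t_a else "algo_a"
print(winner)
```

Note the limitation rubi raises later: this only tells you which candidate wins on this input, on this machine.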
 
  • #157
ZombieFeynman said:
In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from the interest in them the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure of how you could fail to realize this. I am curious at to where you are at in your physics journey? Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
For the first paragraph:
Topological matter is very cutting edge stuff. It may be that, much in the way that Feynman made considerable advances by looking for the simplest possible theory, advances in the present field can be made with a similar philosophy. I believe the mathematics should be as complex as it needs to be. If it needs to be as obtuse and difficult as algebraic topology, then so be it. But the jury is probably still out on this point.

For the second:
I am an undergraduate who has been performing (according to my advisors, anyway) PhD-level research since the summer of my freshman year.
 
  • #158
Arsenic&Lace said:
I mean, I can implement the algorithm in my language of choice (Python or Chicken Scheme if I can get away with it, Fortran or C/C++ if I prefer to suffer or need the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e., if Pointy Haired Boss wants it to run in <20 minutes or something), I need to consider implementing a different algorithm or optimizing my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm for every concrete situation before they use it. That would be pointless. You want to know in advance, which method is better suited for your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think that the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
Well, it answers the question in the sense that it tells me that you have no idea what you are talking about, if that is what you wanted to know.
 
  • #159
rubi said:
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm for every concrete situation before they use it. That would be pointless. You want to know in advance, which method is better suited for your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think that the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
 
  • #160
Arsenic&Lace said:
Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODEs. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.
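A minimal sketch of this experiment (assuming explicit Euler with unit mass, x(0) = 1, v(0) = 0; the frequencies and step size are illustrative): for x'' = -w²x, each Euler step multiplies the energy-like quantity w²x² + v² by exactly 1 + (w ##\Delta t##)², so with the same ##\Delta t## the high-frequency oscillator diverges far faster.

```python
# Explicit Euler on x'' = -w^2 x. The quantity (w^2 x^2 + v^2) / w^2 is
# exactly 1 for the true dynamics; under Euler it grows by the factor
# 1 + (w * dt)^2 every step, so growth is much faster at high frequency.

def euler_energy_growth(w, dt, steps):
    x, v = 1.0, 0.0
    for _ in range(steps):
        # simultaneous update: both right-hand sides use the old x, v
        x, v = x + dt * v, v - dt * w ** 2 * x
    return (w ** 2 * x ** 2 + v ** 2) / w ** 2

low = euler_energy_growth(w=1.0, dt=0.01, steps=1000)
high = euler_energy_growth(w=10.0, dt=0.01, steps=1000)
print(low, high)  # low stays near 1; high blows up by orders of magnitude
```

The same code, the same ##\Delta t##, the same number of steps: only the rigorous error analysis tells you in advance which run to trust.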
 
  • #161
rubi said:
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODEs. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.

Right. Or other things to consider: who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed-point algorithms will not always converge, and when they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the solution you get can be trusted? There are many subtle caveats when you have ill-conditioned systems. How would you even know what ill-conditioned is without theory?
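A standard textbook illustration of the Newton-Raphson caveat (not an example from the thread; the polynomial and starting points are the classic choices): for f(x) = x³ - 2x + 2, the iteration started at x₀ = 0 cycles between 0 and 1 forever, while the start x₀ = -2 converges to the real root near -1.769. Which behavior you get is a question of theory, not luck.

```python
# Newton-Raphson on f(x) = x^3 - 2x + 2. From x0 = 0 the iterates cycle
# 0 -> 1 -> 0 -> ... and never converge; from x0 = -2 they converge
# quadratically to the unique real root.

def newton(f, df, x0, steps):
    xs = [x0]
    for _ in range(steps):
        x0 = x0 - f(x0) / df(x0)
        xs.append(x0)
    return xs

f = lambda x: x ** 3 - 2 * x + 2
df = lambda x: 3 * x ** 2 - 2

cycle = newton(f, df, 0.0, 6)       # oscillates between 0.0 and 1.0
root = newton(f, df, -2.0, 20)[-1]  # converges to about -1.7693
print(cycle, root)
```

The basin-of-attraction analysis that separates the two starting points is exactly the kind of rigorous result being discussed.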
 
  • #162
micromass said:
Right. Or other things to consider: who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed-point algorithms will not always converge, and when they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the solution you get can be trusted? There are many subtle caveats when you have ill-conditioned systems. How would you even know what ill-conditioned is without theory?

There are many properties of matrices and linear operators that one uses in physics without thinking, precisely because conscientious mathematicians have meticulously proven many things about them. One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Frankly I think a mixture of naivete and stubbornness is what keeps Arsenic and Lace replying.
 
  • #163
ZombieFeynman said:
One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Awesome, I'll be sure to check out this book since it looks quite good.
 
  • #164
micromass said:
Awesome, I'll be sure to check out this book since it looks quite good.

I think it's the best example of a book which can be somewhat rigorous and yet still be firmly grounded in physics.
 
  • #165
ZombieFeynman said:
The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest).

What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.
 
  • #166
atyy said:
What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.

As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly on transition metal oxide systems.
 
  • #167
I wonder if a certain someone will change his mind after literally every example of pure mathematics being used in the sciences is misunderstood by him.
 
  • #168
ZombieFeynman said:
As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly on transition metal oxide systems.

I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?
 
  • #169
atyy said:
I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?

At the risk of giving away more personal information than I'd prefer, those papers are in very close proximity to my interests.
 
  • #170
ZombieFeynman said:
At the risk of giving away more personal information than I'd prefer, those papers are in very close proximity to my interests.

Ah ha ha, that's cool:)
 
  • #171
I see none of you actually work in computational fields. Earlier in the thread, rubi made statements about how industries somehow rely on brave mathematicians to analyze their algorithms ahead of time, giving them theoretical information about convergence so that they can pick the most accurate and timely tools, miraculously ahead of actually using them, without needing to rely on benchmarks. Without surveying the entirety of all industrial output that relies on computations, all I can say is that in my experience this is grossly inaccurate.

The only industry I am intimately familiar with in this regard is drug design, where there is a profusion of methods for estimating binding free energies. As I stated previously, the only way to know which is faster or more accurate, and under what circumstances, is brute-force benchmarking, because the problem is simply too complex. The same is true for finite element analysis, from what I gathered; there is a dominant software package called ANSYS, but it implements multiple convergence algorithms such as hp or XFEM, and other packages which implement the same algorithms actually don't perform as well.

So in both an additional case and in the case for which rubi declared I "didn't know what I was talking about", it is not possible, merely by studying the structure of the algorithms utilizing powerful mathematics, to make useful predictions about performance. Of course, he is right to say that companies don't have the time to benchmark every option they have on the table, but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.

ZombieFeynman said:
One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.
To learn that Hilbert spaces are complete (a completely irrelevant fact)? To prove Parseval's theorem (invented in 1799... put on rigorous foundations more than a century later!)? To learn how to force the delta function to be consistent with the function-space framework using the convoluted machinery of distributions (in spite of the fact that we can use it perfectly fine for its intended purpose without ever worrying about this)?

Here's a challenge: Find some actual evidence that a). the completeness of Hilbert spaces posed a serious question to physicists at some point, b). doubts about Parseval's theorem posed a serious question to physicists/engineers at some point and c). that the "inconsistencies" of the delta function resulted in spurious results or prevented physicists from actually advancing physics.
 
  • #172
Arsenic&Lace said:
I see none of you actually work in computational fields.

Neither do you; you're just an undergrad. Do you really claim to have such a comprehensive grasp of all computational fields that you can say what happens and what doesn't?

it is not possible, merely by studying the structure of the algorithms utilizing powerful mathematics, to make useful predictions about performance

Are you actually serious or just trolling at this point?

To learn that Hilbert spaces are complete (a completely irrelevant fact)?

I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.

Here's a challenge: Find some actual evidence that a). the completeness of Hilbert spaces posed a serious question to physicists at some point, b). doubts about Parseval's theorem posed a serious question to physicists/engineers at some point and c). that the "inconsistencies" of the delta function resulted in spurious results or prevented physicists from actually advancing physics.

Ah, the classic http://en.wikipedia.org/wiki/Straw_man
 
  • #173
micromass said:
Are you actually serious or just trolling at this point?
I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.
Ah, the classic http://en.wikipedia.org/wiki/Straw_man
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.

The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Nope, not a straw man either, or at least not an intentional one. In general, ZF is stating that the rigorous details found in Stone and Goldbart represent useful mathematical definitions or proven theorems for which the level of rigor presented in S&G is necessary; I contend that this is not the case. For instance, the grotesquely convoluted discussion of the Dirac delta function in the second chapter serves no useful purpose for... anything, really.
 
  • #174
Arsenic&Lace said:
Earlier in the thread, rubi made statements about how industries somehow rely on brave mathematicians to analyze their algorithms ahead of time to give them theoretical information about convergence so that they can pick the most accurate and timely tools, miraculously ahead of actually using them, without needing to rely on benchmarks.
Software like ANSYS just implements algorithms that have been discussed by mathematicians. Of course, they rely on rigorous results proved by mathematicians. They even employ mathematicians. You have to be blind to not see this. Additionally, of course they need to benchmark their software. Software development consists of more than just implementing algorithms. The greatest performance gain is due to the use of efficient algorithms, however. If you use an algorithm of complexity ##O(n^2)## instead of ##O(\log(n))##, then you can optimize as much as you want, it will always be inferior.
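A toy, hedged version of this complexity point: the comparison below uses an O(n) linear membership scan against an O(log n) binary search (rather than the O(n²) case mentioned above) because both fit in a few lines of standard-library Python, but the moral is the same; no constant-factor tuning of the worse complexity class keeps up at large n.

```python
# Compare an O(n) membership scan (C-optimized inside CPython) against a
# plain O(log n) binary search on a large sorted list. Despite the scan's
# heavily optimized constants, the better complexity class wins easily.
import bisect
import timeit

data = list(range(1_000_000))
target = 999_999  # worst case for the linear scan

t_linear = timeit.timeit(lambda: target in data, number=5)
t_binary = timeit.timeit(lambda: bisect.bisect_left(data, target), number=5)
print(f"linear: {t_linear:.4f}s, binary: {t_binary:.6f}s")
```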

...and in the case rubi declared I "didn't know what I was talking about"...
You couldn't even come up with the obvious harmonic oscillator counterexample on your own. I still think you have absolutely no clue what you are talking about.

...but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.
I never said that they rely purely on mathematics. However, they rely heavily on it.

https://www.youtube.com/watch?v=bIfzyYT1Oho
 
  • #175
Everyone draw your breath in slowly. Ease in, count to 5. Good.

Now exhale slowly, until you feel all the air escape.
________________________________________________

Arsenic, abandon all of your minuscule points of argument. We're debating a broader topic than the one you're meandering about. You have actual physicists arguing with you and denying what you're saying. You have actual mathematicians arguing with you and denying what you're saying.

All I ask of you now is to reiterate what exactly it is you're arguing against. Because I feel you know it's a lost cause, yet find your only redemption in asking more and more obscure questions, making unreasonable demands of others, until you'll eventually be asking us to explain how the ontological topological heuristics of a non-orthogonal Cauchy sequence permeating 3-dimensionally upon a four-sided Möbius strip had any relevance or pertinence in the making of Newton's Laws of Motion.
 
  • #176
Arsenic&Lace said:
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.
You seem too fixated on how your group does things. In my group (computational), we do make use of rigorous results, such as the guarantee that metadynamics converges asymptotically, or the simple fact that certain algorithms scale like ##O(n^a)##. We don't just blindly use any numerical solver; we pick the ones that are known to work better.
 
  • #177
AnTiFreeze3 said:
Arsenic, abandon all of your minuscule points of argument. We're debating a broader topic than the one you're meandering about. You have actual physicists arguing with you and denying what you're saying. You have actual mathematicians arguing with you and denying what you're saying.

I am also in disagreement with A&L; however, I don't think an argument from authority is a good way to proceed.
 
  • #178
ZombieFeynman said:
I am also in disagreement with A&L; however, I don't think an argument from authority is a good way to proceed.

Not all arguments from authority are fallacious arguments. When the authority is a relevant authority, it's a fairly good argument overall. And in this case, the authority in question is about as relevant as you can get.
 
  • #179
Arsenic&Lace said:
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.

The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Nope, not a straw man either, or at least not an intentional one. In general, ZF is stating that the rigorous details found in Stone and Goldbart represent useful mathematical definitions or proven theorems for which the level of rigor presented in S&G is necessary; I contend that this is not the case. For instance, the grotesquely convoluted discussion of the Dirac delta function in the second chapter serves no useful purpose for... anything, really.

Most of your posts seem to read "I haven't had to use this and don't think I will have to, therefore no one does!"

Char. Limit said:
Not all arguments from authority are fallacious arguments. When the authority is a relevant authority, it's a fairly good argument overall. And in this case, the authority in question is about as relevant as you can get.

I'm not saying it's fallacious, I simply think that it's not needed here.
 
  • #180
ZombieFeynman said:
I am also in disagreement with A&L; however, I don't think an argument from authority is a good way to proceed.

Fair enough. But I do think, in some respects, that an undergraduate ought to understand that those with more research experience, at the graduate level and beyond, likely know what they're talking about, and rather than ignoring what they say and pursuing vapid points, he ought to take it as evidence that he may be wrong.
 
  • #181
Arsenic&Lace said:
c). that the "inconsistencies" of the delta function resulted in spurious results or prevented physicists from actually advancing physics.

http://arxiv.org/abs/quant-ph/0303094
 
  • #182
AnTiFreeze3 said:
Fair enough. But I do think, in some respects, that an undergraduate ought to understand that those with more research experience, at the graduate level and beyond, likely know what they're talking about, and rather than ignoring what they say and pursuing vapid points, he ought to take it as evidence that he may be wrong.

We in the sciences should be encouraged to question authority. However, it's not always the most productive route; unless one is a genius, it may lead to a lot of headaches and wasted time.
 
  • #183
Arsenic&Lace said:
The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

So something like ##\sum |\psi\rangle\langle\psi| = I## is seen as useless and utterly irrelevant nowadays?
 
  • #184
micromass said:
So something like ##\sum |\psi\rangle\langle\psi| = I## is seen as useless and utterly irrelevant nowadays?
I can answer that! NO!

I use resolutions of the identity with great regularity.
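For readers following along, the completeness relation being referred to, together with its most common use (expanding a state in a complete orthonormal basis), can be written as:

```latex
% Resolution of the identity over a complete orthonormal basis {|n>},
% and a typical use: inserting it to expand an arbitrary state |psi>.
\sum_n |n\rangle\langle n| = \mathbb{1},
\qquad
|\psi\rangle = \mathbb{1}\,|\psi\rangle
             = \sum_n |n\rangle\langle n|\psi\rangle .
```

The sum over a complete set of states is exactly what the completeness discussion earlier in the thread refers to.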
 
  • #185
ZombieFeynman said:
I can answer that! NO!

I use resolutions of the identity with great regularity.
But Arsenic&Lace doesn't use them, so they are useless.
 
  • #186
micromass said:
I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.

Arsenic&Lace said:
The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Isn't this ##\Sigma|n \rangle \langle n| = 1##?
 
  • #187
rubi said:
But Arsenic&Lace doesn't use them, so they are useless.

This is the best reply of this thread :-p

atyy said:
Isn't this ##\Sigma|n \rangle \langle n| = 1##?

Could very well be. I'm not really good at bra-ket notation. Thanks for the correction.
 
  • #188
micromass said:
This is the best reply of this thread :-p

Could very well be. I'm not really good at bra-ket notation. Thanks for the correction.

As long as $\ket{n}$ or $\ket{\psi}$ is summed over a complete set of states it doesn't matter!

I don't know how the forum latex thing works = (
 
  • #189
ZombieFeynman said:
We in the sciences should be encouraged to question authority. However, it's not always the most productive route; unless one is a genius, it may lead to a lot of headaches and wasted time.

Well I certainly have a headache from this thread :smile:
 
  • #190
rubi said:
Software like ANSYS just implements algorithms that have been discussed by mathematicians. Of course, they rely on rigorous results proved by mathematicians. They even employ mathematicians. You have to be blind to not see this. Additionally, of course they need to benchmark their software. Software development consists of more than just implementing algorithms. The greatest performance gain is due to the use of efficient algorithms, however. If you use an algorithm of complexity ##O(n^2)## instead of ##O(\log(n))##, then you can optimize as much as you want, it will always be inferior.


You couldn't even come up with the obvious harmonic oscillator counterexample on your own. I still think you have absolutely no clue what you are talking about.
Here's what I remember of the discussion, correct me if I'm wrong:
rubi: I challenge you to determine the speed at which an algorithm converges without pure mathematics.
Arsenic: This question is irrelevant, in the real world we only use benchmarks.
rubi: Corporations use theoretical methods to determine speed because it is too costly to benchmark.
Arsenic: My industrial/academic experience is that it is impossible to develop theoretical methods in most cases so benchmarking is used instead (I'd add after the fact that it really isn't that hard to benchmark multiple packages).
rubi: The corporation you cited uses theoretical methods to determine the speed of the algorithms.

This is now an empirical question.
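
As an aside on the complexity point quoted above: the theoretical and empirical approaches are not really rivals; the asymptotic analysis predicts what the benchmark will measure. A toy sketch (the list size and search task are my own illustrative choices, not anything from the thread):

```python
import bisect
import timeit

# Theory: linear scan is O(n); binary search on sorted data is O(log n).
def linear_contains(xs, x):
    for v in xs:
        if v == x:
            return True
    return False

data = list(range(100_000))  # sorted, so bisect applies

# Benchmark: time 100 worst-case lookups with each method.
t_lin = timeit.timeit(lambda: linear_contains(data, 99_999), number=100)
t_bin = timeit.timeit(lambda: bisect.bisect_left(data, 99_999), number=100)

print(t_bin < t_lin)  # True: the benchmark agrees with the O-analysis
```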

All I ask of you, now, is to reiterate what exactly it is you're arguing against. Because I feel you know it's a lost cause, yet find your only redemption in asking more and more obscure questions,
I'm arguing against many things, but I'll pick a couple and briefly list the conditions under which my views will change so that people can decide if they are completely unreasonable or not.

1. The levels of mathematical rigor employed by mathematicians serve no useful purpose for practitioners of applied disciplines.

My mind would change if someone could provide an empirical example of where rigorous proofs actually aided the development of applied disciplines.

2. That it is pointless to divorce mathematics from its applications.

It seems to me that extremely general reasoning about, say, PDEs has produced nothing of use. There are numerous grand theorems, but these are ignored outside the math department because people actually studying real PDEs realize that there is very little separating the symbolic expression of the problem from the underlying physics/real-world rules.

Of course, if one could show that the powerful theorems learned in a pure PDE course are actually helpful to applied mathematicians, I would change my mind.

micromass said:
So something like ##\sum |\psi><\psi| = I## is seen as useless and utterly irrelevant nowadays?
The notion of completeness carries much more baggage than this. One can understand the value of this expression simply by analogy to orthonormal vector spaces. I had in mind more mathematical notions such as the fact that every Cauchy sequence in a complete metric space converges to a value in that metric space.

ZombieFeynman said:
Most of your posts seem to read "I haven't had to use this and don't think I will have to, therefore no one does!"
You should consider reading them more carefully then.
 
  • #191
Arsenic&Lace said:
The notion of completeness carries much more baggage than this. One can understand the value of this expression simply by analogy to orthonormal vector spaces. I had in mind more mathematical notions such as the fact that every Cauchy sequence in a complete metric space converges to a value in that metric space.

Saying that a Hilbert space is complete is exactly the same as saying that ##\sum |\psi><\psi| = I##. So it doesn't carry any more baggage.
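
The Cauchy-sequence formulation is also easy to make concrete: inside the rationals you can build a Cauchy sequence whose limit escapes the space. A small sketch (the Newton iteration for sqrt(2) is my own illustrative choice):

```python
from fractions import Fraction

# Newton's iteration for sqrt(2) stays entirely inside Q: every
# iterate is an exact rational. The iterates form a Cauchy sequence,
# yet the limit sqrt(2) is irrational, so Q is not complete.
# R (and every Hilbert space) does not have this defect: completeness
# says every Cauchy sequence converges *within* the space.
x = Fraction(3, 2)
for _ in range(6):
    x = (x + 2 / x) / 2   # each step is exact rational arithmetic

print(abs(x * x - 2) < Fraction(1, 10**20))  # True: x^2 is this close to 2
```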
 
  • #192
Arsenic&Lace said:
The notion of completeness carries much more baggage than this. One can understand the value of this expression simply by analogy to orthonormal vector spaces. I had in mind more mathematical notions such as the fact that every Cauchy sequence in a complete metric space converges to a value in that metric space.

Do all notions from finite dimensional vector spaces carry over to the infinite dimensional case? (Hint: no) How do you know which ones do and don't without rigorous mathematics? Cantor showed the intrinsic non-intuitiveness of sets with infinite and (more so!) uncountable cardinalities. I'd be seriously careful here.

I challenge you to exhibit two canonically conjugate matrices A and B on a finite dimensional space (akin to momentum and position),

i.e., AB - BA is the identity, up to a nonzero constant.

Before you waste too much of your night on it, it's impossible. I double dog dare you to convince me of that without being...rigorous.
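
For reference, the impossibility is a two-line trace argument, which is exactly the kind of short rigorous proof being asked for here:

```latex
% Suppose $A, B$ are $n \times n$ matrices with $AB - BA = cI$, $c \neq 0$.
% Taking the trace of both sides:
\[
  \operatorname{tr}(AB - BA) = \operatorname{tr}(AB) - \operatorname{tr}(BA) = 0
  \quad \text{(cyclicity of the trace)},
  \qquad
  \operatorname{tr}(cI) = cn \neq 0,
\]
% a contradiction. Canonical commutation relations therefore force an
% infinite-dimensional space, where the argument fails because $AB$
% need not have a well-defined trace.
```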
 
Last edited:
  • #193
Arsenic&Lace said:
My mind would change if someone could provide an empirical example of where rigorous proofs actually aided the development of applied disciplines.

It's hard to change your mind if you ignore most of the examples we give. But again, consider wavelets.
 
  • #194
Arsenic&Lace said:
Here's what I remember of the discussion, correct me if I'm wrong:
...
This is now an empirical question.
I just argued that mathematical rigour is essential for the development of numerical PDE methods and this is undeniable. It is unthinkable that a software package like ANSYS would yield reliable results if it didn't depend heavily on rigorous results. Anyone who has the slightest idea of how these packages work, will agree with this. If you don't believe it (which would be totally ridiculous), go ahead and check out some of the open source FEM packages. There are plenty. I won't help you though, because it is a waste of my time.
 
  • #195
Arsenic&Lace said:
My mind would change if someone could provide an empirical example of where rigorous proofs actually aided the development of applied disciplines.

These examples have been provided before (and there are many, many more examples still to be named as well), but these are all pretty concrete and you can verify each of them by pretty much asking anyone working in these fields.
  1. If you work in finance or data analysis or do computer science focusing on machine learning (three very important industries these days), you are going to need techniques from stochastic calculus that were only possible because of rigorous foundations. Most of the important results were not "intuited" first, as they were in ordinary calculus, but arose only together with their proofs.
  2. If you work in economics or, again, finance, there is a good chance you will need fixed-point theorems whose development required rigorous proofs. Things like the Brouwer and Kakutani fixed-point theorems were utilized to establish various equilibrium phenomena (like the Nash equilibrium) that have since become staples in the industry.
  3. If you work in some of the cutting-edge data analyst groups you will need lots of tools from algebraic topology for topological data analysis. Here you actually need quite a bit of machinery like knowledge of various (co)homology theories, their connections with cobordism and Morse theory, spectral sequences, etc.
If you pay attention to all of the examples people have been giving, rather than homing in on one or two (like the applications of algebraic topology in condensed matter theory), you would see there is a big real-world market for results from pure mathematics.

Edit: Just to list a few more applications off the top of my head, you should check out: stochastic calculus in statistical physics; number theory and algebraic geometry in cryptography with stuff like elliptic curves; algebraic topology in biology for modeling protein structure and interactions; category theory in computer science.
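
The fixed-point theorems mentioned above (Brouwer, Kakutani) are non-constructive, but their simpler cousin, the Banach fixed-point theorem, shows concretely what such a rigorous result buys: a guarantee, proved in advance, that an iteration converges to a unique answer. A toy sketch (the cosine map is my own illustrative choice, not an example from the thread):

```python
import math

# cos maps [0, 1] into itself and |cos'(x)| = |sin x| <= sin(1) < 1
# there, so cos is a contraction on [0, 1]. The Banach fixed-point
# theorem then *guarantees* that iterating x -> cos(x) converges to
# the unique fixed point x* = cos(x*), before we compute a single step.
x = 0.5
for _ in range(100):
    x = math.cos(x)

print(abs(x - math.cos(x)) < 1e-12)  # True: converged to the fixed point
```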
 
Last edited:
  • #196
I would ask for this thread to be closed because at this point it is akin to a cowering cat cornered by a gang of dogs closing in for the kill but I feel like too many people are getting entertainment value out of it.
 
  • #197
WannabeNewton said:
I would ask for this thread to be closed because at this point it is akin to a cowering cat cornered by a gang of dogs closing in for the kill but I feel like too many people are getting entertainment value out of it.

I would ask for it to stay open. I think it is (somewhat) intellectually dishonest to declare victory and close up shop. As long as no forum rules are being broken, I don't see why it should not remain open.
 
  • #198
ZombieFeynman said:
I am also in disagreement with A&L, however, I don't think an argument from authority is a good way to proceed.
I read it more like, 'everyone disagrees with you, perhaps you should reconsider your view.'
 
  • #199
One more thing: Why does pure mathematics need applications? Would you say someone who studies art for 50 years isn't an expert on art because "there's no applications for their work"?
 
  • #200
WannabeNewton said:
I would ask for this thread to be closed because at this point it is akin to a cowering cat cornered by a gang of dogs closing in for the kill but I feel like too many people are getting entertainment value out of it.
Well I'm enjoying this thread so I hope it continues.

Firstly my apologies to jergen, micromass, and others for not examining each and every application in detail. I promise I'm not trying to cherry pick applications which are easiest for me to argue with. However, it is much easier for me to choose applications I'm familiar with, and if the applications I was familiar with did not conform to my point, I wouldn't believe it (although I may merely be misinterpreting them). More importantly, if you merely mention an application or post a textbook, it puts the ball in my court to construct the argument for you. I'm not saying you're lazy, but I am saying this thread would likely progress much more rapidly if you were to construct more thorough arguments around your evidence.

Secondly, I have been accused of waving my hands and not really providing concrete arguments. This is duly noted and I have consistently attempted to increase the rigor (...ha!) of my arguments with each post. However, I have not observed many concrete arguments from my (admittedly numerous) foes. Mostly I am told "if you only read this textbook" or "surely this must be the case", which may very well be true, but it is extremely challenging for me to read every paper, extract the argument you imply with said paper, and then respond to it.

Finally, I think we should concentrate on one of these topics at a time. Either it will constitute evidence that the mathematician's theories are very helpful and the thread will die a peaceful death, or it will not, and we will proceed onto the next application.

I would prefer we begin with algebraic topology as applied to protein structure since I presently work in a computational biophysics lab and have been pondering more theoretical approaches to the problem of protein conformational change for several years now. The problem space appears to lend itself very well to a geometric or topological approach, yet protein conformational change prediction and first principles prediction of protein folds are extremely challenging unsolved physics problems (some colleagues of mine are currently engaged in CASP, a refinement/prediction challenge, and advanced mathematical trickery that gave them an edge would certainly be interesting ;)). I have repeatedly explored more esoteric approaches and have been unimpressed.

The laboratory in which I work (surprise surprise) relies heavily on brute force, running molecular dynamics simulations on protein systems where the trajectories for every atom are simulated, although I work on algorithmic/more theoretical approaches. What is interesting to me is just how far out of our reach conformational change actually is; just obtaining a microsecond of simulation, significantly below the timescales for full conformational change, can take several months.

So what I would like to know is: what are these approaches, what pure mathematics do they rely upon, and how do they perform?
 
Last edited by a moderator: