Do physics books butcher the math?

In summary: I know you don't think much of mathematicians and mathematical theory. You are satisfied with knowing you can predict everything. However, you cannot deny that making a theory mathematically rigorous is something that humans should attempt to do. It is in our nature to understand the theory as well as we can, and a nonrigorous theory would not be as well understood as a rigorous one. The rigorization of a theory might not yield any applications, but I think it is wrong to do science only with the applications in mind. One should do it to try and understand nature better.
  • #141
atyy said:
I'm hardly an expert, since I am the true non-rigorous guy here, not you. But roughly, there are two different sorts of topology in condensed matter physics.

(1) There is the topology of the integer quantum hall effect, involving Chern numbers. Topological insulators are generalizations of this idea.
http://www.physics.upenn.edu/~kane/pedagogical/WindsorLec2.pdf
http://physics.princeton.edu/~haldane/talks/dirac.pdf
http://www.bioee.ee.columbia.edu/downloads/2013/nature12186.pdf

(2) Then there is the topology of the fractional quantum hall effect, one sign of which is that the ground state degeneracy depends on the topology on which the Hamiltonian is placed. A proposed use of this sort of topology is in Kitaev's topological quantum computation.
http://stationq.cnsi.ucsb.edu/~freedman/publications/96.pdf
http://www.simonsfoundation.org/quanta/20140515-forging-a-qubit-to-rule-them-all/

From the Haldane slides above:
"The moral of this long story: suggests three distinct ingredients for success.
• Profound, correct, but perhaps opaque formal topological results (Invariants, braid group, etc)
• Profound, simple and transparent “toy models” that can be explicitly treated (The honeycomb Chern Insulator, the Kitaev Majorana chain, etc)
• Understanding the real materials needed for “realistic” (but more complex) experimentally achievable systems that can bring “toy model results” to life in the hands of experimentalist colleagues."

Strictly speaking, this is not all of the topology in modern Condensed Matter Physics. The QH effect and its Chern number are rather different from Topological Insulators and the Z2 topological QSH effect in 2D. This is generalized to a whole family of 3D Topological Materials. See the excellent review by Qi and Zhang.

I truly don't mean to quibble but this is an extremely exciting area of physics to me!

The points made by Haldane above are brought together in a very harmonious way in the original BHZ paper I cited above.

EDIT: Perhaps at a very rough approximation, I agree with your division.
 
  • #142
ZombieFeynman said:
Strictly speaking, this is not all of the topology in modern Condensed Matter Physics. The QH effect and its Chern number are rather different from Topological Insulators and the Z2 topological QSH effect in 2D. This is generalized to a whole family of 3D Topological Materials. See the excellent review by Qi and Zhang.

I truly don't mean to quibble but this is an extremely exciting area of physics to me!

The points made by Haldane above are brought together in a very harmonious way in the original BHZ paper I cited above.

EDIT: Perhaps at a very rough approximation, I agree with your division.

Glad to hear you quibbling! I'd love to learn more from someone who's working on it:)

Yeah, the division is very rough, something that Hasan and Kane http://arxiv.org/abs/1002.3895 mentioned.
 
  • #143
ZombieFeynman said:
Strictly speaking, this is not all of the topology in modern Condensed Matter Physics. The QH effect and its Chern number are rather different from Topological Insulators and the Z2 topological QSH effect in 2D. This is generalized to a whole family of 3D Topological Materials. See the excellent review by Qi and Zhang.

I truly don't mean to quibble but this is an extremely exciting area of physics to me!

The points made by Haldane above are brought together in a very harmonious way in the original BHZ paper I cited above.

EDIT: Perhaps at a very rough approximation, I agree with your division.

Intriguing, my own opinion of the field wasn't based upon experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt that it was purely academic and impractical. Has anyone attempted to recast it in a more physical light, rather than in terms of formal, obtuse topology? Or is this inefficient/impossible? It was quite a while ago, but Feynman's contributions to our understanding of superfluid helium were due to taking a very mathematically convoluted theory from the condensed matter group and trying to make it as simple as possible, in so doing obtaining everything they had and more. But that might not be the case here.

It certainly seems to be the case in modern particle physics, if you look at supersymmetry or string theory.

People have conceived of uses of Cauchy sequences outside of academic math, the Banach fixed point theorem, for example. Don't get me wrong though, you're wrong for much more fundamental reasons.
How is the Banach fixed point theorem used outside of pure math?
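One standard answer, for what it's worth: the Banach fixed point theorem is what guarantees that plain fixed-point iteration converges to a unique answer, and how fast. A minimal pure-Python sketch, with the starting point and tolerance chosen purely for illustration:

```python
import math

# The map x -> cos(x) is a contraction on [0, 1]: |d/dx cos(x)| <= sin(1) < 1.
# The Banach fixed point theorem therefore guarantees a unique fixed point
# there, convergence of iteration from any starting point in the interval,
# and a geometric error bound in terms of the contraction constant.

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- f(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

root = fixed_point(math.cos, 0.5)
print(root)  # ~0.7390851332, the unique solution of cos(x) = x
```

The same pattern (and the same theorem) underlies Picard iteration for ODE existence and many practical root-finding and iterative-solver schemes.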

You're just wrong. If you honestly think that rigorous proofs aren't required to establish the truth of mathematical statements, it just shows that you're not intellectually mature enough to understand it yet. This isn't supposed to be an insult. Many students fail to understand this when they first learn about it, so you're in good company. Your problem is rather that you have a strong opinion on things that you don't understand, and instead of trying to understand them, you're just stubborn. We won't be able to convince you, since it takes years of study to develop the intellectual maturity that it takes to understand the need for rigour in mathematics. Even people who have been doing mathematics for a long time go back to their analysis books after years, because they suddenly feel that they have acquired enough mathematical maturity to read them again and learn new things that they hadn't realized when they first read them. So if you claim that you're entitled to judge the necessity of rigour in mathematics, that is highly questionable, to say the least.
One can make a very suggestive argument for the chain rule by ignoring the mathematician's warning that differentials are not real numbers. Mathematicians claim that this reasoning is wrong because it does not cover pathological cases and does not handle differentials properly. Yet Leibniz was reported to have employed the chain rule long before rigorous proofs could be formulated.
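For reference, the heuristic in question: if ##y## depends on ##u## and ##u## depends on ##x##, treat the differentials as ordinary numbers and cancel,

```latex
\frac{dy}{dx} \,=\, \frac{dy}{du} \cdot \frac{du}{dx}
```

The standard objection is that ##du## may vanish at points along the way, which makes the "cancellation" a division by zero, even though the chain rule itself still holds there; the rigorous proof has to route around exactly this case.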

I am not saying that this demonstrates that rigor is always useless, but I think this debate would end extremely quickly if somebody could find a specific example of where, had it not been for formal mathematical rigor, progress in science or engineering would grind to a halt or follow false paths. Grand claims have been made that theories in physics would be a mess without rigor, but no actual evidence has been presented that this is the case. Indeed, I can even provide evidence to the contrary, given that QFT is still not a mathematically rigorous theory (to my knowledge).

This thread is so active that I have missed numerous replies (I think micromass complained that I didn't notice his link on wavelets, which I had to dig to spot), so my apologies if I do not comprehensively reply to everything that is mentioned.
 
  • #144
Arsenic&Lace said:
One can make a very suggestive argument for the chain rule by ignoring the mathematician's warning that differentials are not real numbers. Mathematicians claim that this reasoning is wrong because it does not cover pathological cases and does not handle differentials properly. Yet Leibniz was reported to have employed the chain rule long before rigorous proofs could be formulated.
This is how math works. Of course, we have conjectures, based on heuristics, before we prove them rigorously. No theorem has ever been proven before it had been conjectured. There surely had been many more proposals for theorems in the history of calculus, but only those remained that could be proven. It's an evolutionary process. (By the way, the use of differentials isn't wrong in general. Today we understand precisely why they work; see non-standard analysis. We just teach it the ##\epsilon-\delta## way today, because it's easier.)

I am not saying that this demonstrates that rigor is always useless, but I think this debate would end extremely quickly if somebody could find a specific example of where, had it not been for formal mathematical rigor, progress in science or engineering would grind to a halt or follow false paths. Grand claims have been made that theories in physics would be a mess without rigor, but no actual evidence has been presented that this is the case. Indeed, I can even provide evidence to the contrary, given that QFT is still not a mathematically rigorous theory (to my knowledge).
There are literally millions of practical methods that could only be developed using rigorous mathematics. Just think about numerical methods for solving partial differential equations, for example. It is easy to come up with more examples, but I won't do it, because this is not the point I want to make. (Also, it is very arrogant to think that no such examples exist only because you aren't aware of them.) Your problem is that you don't acknowledge the fact that even though some formulas might seem to be true heuristically, it's necessary to be able to rely on those formulas. And a formula can only be relied on if we can be sure that it works and if we know under what circumstances it may fail. Such results can only be established using rigorous mathematics.
 
  • #145
rubi said:
This is how math works. Of course, we have conjectures, based on heuristics, before we prove them rigorously. No theorem has ever been proven before it had been conjectured. There surely had been many more proposals for theorems in the history of calculus, but only those remained that could be proven. It's an evolutionary process. (By the way, the use of differentials isn't wrong in general. Today we understand precisely why they work; see non-standard analysis. We just teach it the ##\epsilon-\delta## way today, because it's easier.)


There are literally millions of practical methods that could only be developed using rigorous mathematics. Just think about numerical methods for solving partial differential equations, for example. It is easy to come up with more examples, but I won't do it, because this is not the point I want to make. (Also, it is very arrogant to think that no such examples exist only because you aren't aware of them.) Your problem is that you don't acknowledge the fact that even though some formulas might seem to be true heuristically, it's necessary to be able to rely on those formulas. And a formula can only be relied on if we can be sure that it works and if we know under what circumstances it may fail. Such results can only be established using rigorous mathematics.
I am not assuming that they do not exist! I simply cannot find any. Please, give an example where powerful applied methods clearly rely on rigorous proofs that might be taught in a theory course.
 
  • #146
Let's maintain the productivity of this discussion in the following manner: Wikipedia has a list of numerical methods for PDE's here:
http://en.wikipedia.org/wiki/Numerical_partial_differential_equations

How about we look through them and exhibit where it would not be possible or would be dangerous to use them without extremely rigorous pure mathematics? If I learn nothing else I'll learn lots of numerical methods for solving PDE's :P
 
  • #147
Arsenic&Lace said:
Let's maintain the productivity of this discussion in the following manner: Wikipedia has a list of numerical methods for PDE's here:
http://en.wikipedia.org/wiki/Numerical_partial_differential_equations

How about we look through them and exhibit where it would not be possible or would be dangerous to use them without extremely rigorous pure mathematics? If I learn nothing else I'll learn lots of numerical methods for solving PDE's :P

Are you just trolling at this point? What you're suggesting is about as ridiculous as saying that no evidence is ever necessary in physics, because our current theories seem to work pretty well without it. How on Earth would one ever arrive at the intricacies of the current mathematics used in physics without continually backing the process up rigorously? And that's not to mention how you would even get started without the growing mathematical generalizations which inspire and allow new ideas to form.
 
  • #148
Arsenic&Lace said:
I am not assuming that they do not exist! I simply cannot find any. Please, give an example where powerful applied methods clearly rely on rigorous proofs that might be taught in a theory course.
I just gave you an example: Numerics of PDE's. Finite element analysis for example. Micromass has also given you an example, which you ignored. But as I said, that's not the point I want to make. The point I want to make is:
rubi said:
Your problem is that you don't acknowledge the fact that even though some formulas might seem to be true heuristically, it's necessary to be able to rely on those formulas. And a formula can only be relied on, if we can be sure that it works and if we know under what circumstances it may fail. Such results can only be established using rigorous mathematics.
 
  • #149
Arsenic&Lace said:
Let's maintain the productivity of this discussion in the following manner: Wikipedia has a list of numerical methods for PDE's here:
http://en.wikipedia.org/wiki/Numerical_partial_differential_equations

How about we look through them and exhibit where it would not be possible or would be dangerous to use them without extremely rigorous pure mathematics? If I learn nothing else I'll learn lots of numerical methods for solving PDE's :P
How do you think those methods have been developed in the first place? Has there been some genius who just wrote them down? They have been developed by mathematicians over many years, using rigorous mathematics. Distribution theory, ##L^p## spaces and so on.
 
  • #150
I would also point out that asking for examples of where a lack of mathematical rigor halted physics is starting from the wrong end. You should rather ask how rigorous mathematics helped physical breakthroughs come about. The classical examples are many, most notably the differential geometry which allowed general relativity to be expressed and understood, which I believe has already been mentioned.
 
  • #151
disregardthat said:
How on Earth would one ever arrive at the intricacies of the current mathematics used in physics without continually backing the process up rigorously? And that's not to mention how you would even get started without the growing mathematical generalizations which inspire and allow new ideas to form.

Exactly my point. I don't think there's much more to be said.
 
  • #152
I'm not saying that you don't need arguments to support your propositions; that is a caricature of my position, and one which is hard to sustain if you read my statements carefully. What I am saying is that the levels of rigor demanded are irrelevant and unnecessary.

Finite element analysis was invented/developed by the following individuals (according to wikipedia):
Hrennikoff: Civil engineer
Courant: Applied mathematician
Feng: Electrical engineer/mathematician
Rayleigh: Physicist
Ritz: Physicist
Galerkin: Engineer
Argyris: Civil Engineer
Clough: Structural engineer
Zienkiewicz: Civil engineer
Hinton: Civil engineer
Ciarlet: Pure mathematician

Hrennikoff and Courant built off of the work of Rayleigh, Galerkin and Ritz at the turn of the century. It wasn't until ~50-60 years later (depending on where you state the method began) that it was given a rigorous formulation by Strang and Fix.

Later today I will explore what exactly Courant and Ciarlet contributed to the process; did they use powerful theorems from the pure math department, or were they operating in the same way as the civil engineers and the physicists? If it was the former, and if the former was clearly necessary for progress in the field, then I concede that my mind will change. Since you are an expert on numerical methods in PDE's, rubi, do you have a quick answer to this question?
 
  • #153
Arsenic&Lace said:
Intriguing, my own opinion of the field wasn't based upon experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt that it was purely academic and impractical.

Kitaev's topological quantum computation is probably impractical - but are the theorists there really not excited? Microsoft's quantum computing group has quite a few quantum topologists. Maybe it's IBM-Microsoft rivalry :) http://research.microsoft.com/en-US/labs/stationq/researchers.aspx
http://arxiv.org/abs/0707.1889
http://arxiv.org/abs/1003.2856
http://arxiv.org/abs/1307.4403

In the Microsoft group, Nayak's work is physicsy enough that I can understand the gist of it.
 
  • #154
Arsenic&Lace said:
I'm not saying that you don't need arguments to support your propositions; that is a caricature of my position, and one which is hard to sustain if you read my statements carefully. What I am saying is that the levels of rigor demanded are irrelevant and unnecessary.
You seem to be unable to understand my reasoning, so I will repeat it one more time:

1. Mathematics is developed by first having a rough idea about what could end up being a theorem.
2. Only those ideas that can be proved to be working survive.
So if you want to have a point, you would have to prove to me that no proposed method for solving PDE's has ever been withdrawn.

It is totally irrelevant whether the guy who came up with the idea had the full general rigorous theory in mind right from the start. Mathematical methods are developed and generalized over years. Even if you have a heuristic method for solving PDE's, it's necessary to know whether it really converges, how fast it converges (computing power is limited), and whether it is numerically stable (and so on). Show me one such proof that doesn't use rigorous mathematics. You won't find one. All these properties are absolutely essential for applications in engineering. You will be fired instantaneously if you run non-reliable, slowly converging, numerically unstable algorithms on the supercomputing cluster of your company, because you're wasting their resources and money.

So here's my concrete challenge: Show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.
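To give a flavor of what such an analysis buys you, using a much simpler stand-in than finite elements (the example and parameters are my own): the rigorous error bound for the composite trapezoid rule, ##-(b-a)h^2 f''(\xi)/12##, predicts second-order convergence, and an empirical check reproduces exactly that rate.

```python
import math

# Composite trapezoid rule on [a, b] with n equal panels.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# The rigorous bound says the error shrinks like h^2, so halving the
# step size should cut the error by a factor of about 4.
exact = 1.0 - math.cos(1.0)  # integral of sin over [0, 1]
e1 = abs(trapezoid(math.sin, 0.0, 1.0, 100) - exact)
e2 = abs(trapezoid(math.sin, 0.0, 1.0, 200) - exact)
print(e1 / e2)  # ~4.0, matching the predicted second-order rate
```

The point is that the observed factor of 4 is not a coincidence you discover by benchmarking; it is a theorem you can rely on before running anything, and the same style of a priori estimate is exactly what finite element error analysis provides.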


--
Edit: I want to point out that this is the best quote from the thread:
disregardthat said:
What you're suggesting is about as ridiculous as saying that no evidence is ever necessary in physics, because our current theories seem to work pretty well without it.
 
  • #155
Arsenic&Lace said:
Intriguing, my own opinion of the field wasn't based upon experience; I had simply heard from a peer working in quantum computing at IBM that the theorists/experimentalists there generally felt that it was purely academic and impractical. Has anyone attempted to recast it in a more physical light, rather than in terms of formal, obtuse topology? Or is this inefficient/impossible? It was quite a while ago, but Feynman's contributions to our understanding of superfluid helium were due to taking a very mathematically convoluted theory from the condensed matter group and trying to make it as simple as possible, in so doing obtaining everything they had and more. But that might not be the case here.

I am not saying that this demonstrates that rigor is always useless, but I think this debate would end extremely quickly if somebody could find a specific example of where, had it not been for formal mathematical rigor, progress in science or engineering would grind to a halt or follow false paths. Grand claims have been made that theories in physics would be a mess without rigor, but no actual evidence has been presented that this is the case. Indeed, I can even provide evidence to the contrary, given that QFT is still not a mathematically rigorous theory (to my knowledge).

In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from the interest in them from the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed, and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but of a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self-consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure of how you could fail to realize this. I am curious as to where you are in your physics journey. Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
 
  • #156
So here's my concrete challenge: Show me how we can analyze the speed of convergence of finite element methods without using rigorous mathematics.

I mean, I can implement the algorithm in my language of choice (Python or Chicken Scheme if I can get away with it, Fortran or C/C++ if I preferred to suffer/needed the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e. if Pointy Haired Boss wants me to have it run in <20 minutes or something) I need to consider implementing a different algorithm or attempting to optimize my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.

If a mathematician hands me *closed form # of steps expression* that's all well and good, but probably useless given that different architectures, hardware, and languages will muddle any attempts to extract useful information about how long it will take to obtain the precision I need.

If we're at the drawing board and he hands me *expression1* and *expression2* for two different algorithms, it would still be almost certainly easier to just implement algorithm's 1 and 2 and then benchmark them, assuming the first one I tried wasn't quick enough.

In my experience these expressions don't exist. I implemented a Monte Carlo approach to computing perturbation expansions for three-body decays in QED (a triplet pair production reaction, to be precise) last year from scratch and the literature was not very helpful. I've implemented many different algorithms for complex networks/solving SDE's derived from solvent simulations around proteins and apart from complexity classes in the CS papers, we're stuck with straight up brute force benchmarks.

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
 
  • #157
ZombieFeynman said:
In regards to the first paragraph above:
Topological insulators are still a long way from practical significance. That does nothing to take away from the interest in them from the "fundamental research" point of view. There are topologically DISTINCT states of matter, recently predicted, experimentally confirmed, and only now (in the last decade) being explored. The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest). Although applications would be wonderful, the primary excitement for me is the exploration not only of a new state of matter but of a fundamentally different TYPE of state of matter. That you fail to appreciate this point is troubling.

To your second:
Having mathematical tools that we know are logically self-consistent is extremely useful. Not having to ensure that these tools are logically consistent on our own is extremely convenient. I am completely unsure of how you could fail to realize this. I am curious as to where you are in your physics journey. Are you actively involved in research? What kind? I am somewhat baffled by your responses here.
For the first paragraph:
Topological matter is very cutting edge stuff. It may be that, much in the way that Feynman made considerable advances by looking for the simplest possible theory, advances in the present field can be made with a similar philosophy. I believe the mathematics should be as complex as it needs to be. If it needs to be as obtuse and difficult as algebraic topology, then so be it. But the jury is probably still out on this point.

For the second:
I am an undergraduate who has been performing (according to my advisor(s), anyway) PhD level research since the summer of my freshman year.
 
  • #158
Arsenic&Lace said:
I mean, I can implement the algorithm in my language of choice (Python or chicken Scheme if I can get away with it, Fortran or C/C++ if I preferred to suffer/needed the performance) and benchmark how much time it takes to converge. If this exceeds my optimization constraints (i.e. if Pointy Haired Boss wants me to have it run in <20 minutes or something) I need to consider implementing a different algorithm or attempting to optimize my existing implementation.

In other words, I couldn't care less if it takes *insert expression here* steps/terms/increments to converge; I only care about the time it takes, a question which can be determined with brute force.
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm for every concrete situation before they use it. That would be pointless. You want to know in advance, which method is better suited for your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think that the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Does this answer your question or do I still not understand it? In short, the answer is that I only care about real time, not number of steps/increments/terms.
Well, it answers the question in the sense that it tells me that you have no idea what you are talking about, if that is what you wanted to know.
 
  • #159
rubi said:
No, that's wrong. If the algorithm converges fast in one situation, it might be totally inaccurate in another situation with the same number of iterations. No company has the time to test the algorithm for every concrete situation before they use it. That would be pointless. You want to know in advance, which method is better suited for your concrete problem and which method isn't. You don't want to run the algorithm 10 times until you think that the result is close enough to the exact solution. (Of course, you can't know that either without a proof.)

Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
 
  • #160
Arsenic&Lace said:
Provide a concrete example, otherwise I have no idea if you are merely speculating or not.
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODE's. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.
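A minimal sketch of that experiment (forward Euler, with step size and frequencies chosen just for illustration): a short calculation shows each Euler step multiplies the oscillator's energy by exactly ##1+\omega^2\Delta t^2##, so at a fixed ##\Delta t## the high-frequency run blows up far faster.

```python
# Forward Euler for the harmonic oscillator x'' = -w^2 x, written as the
# first-order system x' = v, v' = -w^2 x.  Each Euler step multiplies the
# energy E = (v^2 + w^2 x^2)/2 by exactly 1 + w^2 dt^2, so at a fixed dt
# the error grows much faster for higher frequencies.

def euler_energy_growth(w, dt=0.001, t_end=10.0):
    """Return final/initial energy ratio; the exact dynamics would give 1."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + w * w * x * x)
    for _ in range(int(round(t_end / dt))):
        x, v = x + dt * v, v - dt * w * w * x   # simultaneous update
    return 0.5 * (v * v + w * w * x * x) / e0

print(euler_energy_growth(w=1.0))    # ~1.01: mild energy drift
print(euler_energy_growth(w=20.0))   # ~55: same dt, badly diverged
```

Which is exactly the point: the same step size that looks fine in one benchmark is useless in another, and the rigorous per-step growth factor tells you so in advance.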
 
  • #161
rubi said:
We don't even need a PDE example here (I'm really too lazy to think of one, but I could probably pick any PDE I wanted with some free parameter that I'd vary). The problem occurs already for ODE's. Simulate a harmonic oscillator with a low frequency and one with a high frequency with the same ##\Delta t## using Euler's method. The higher the frequency, the faster the solution will diverge.

Right. Or other things to consider: Who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed point algorithms will not always converge, and if they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the very solution you get can be trusted? There are many subtle caveats in these cases where you have ill-conditioned systems. How would you know what ill-conditioned even is without theory?
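A tiny illustration of that caveat (a hypothetical 2x2 system, solved exactly by Cramer's rule): the matrix below is nearly singular, so a 0.005% change in the right-hand side flips the "solution" completely; condition-number theory is what tells you in advance when to distrust such results.

```python
# A nearly singular 2x2 system, solved exactly by Cramer's rule.
# Condition-number theory quantifies the danger: for this matrix a tiny
# relative change in b is amplified into an O(1) change in the solution.

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system [[a11, a12], [a21, a22]] x = (b1, b2)."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Rows (1, 1) and (1, 1.0001) are nearly parallel: an ill-conditioned system.
x1 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # exact solution is (1, 1)
x2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # b2 nudged by 0.005%
print(x1)  # ~(1.0, 1.0)
print(x2)  # ~(0.0, 2.0): the answer changed completely
```

In practice the "nudge" can simply be measurement noise or floating-point rounding, which is why a computed solution to an ill-conditioned system cannot be trusted without the theory.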
 
  • #162
micromass said:
Right. Or other things to consider: Who says the algorithm will converge at all? Who says the algorithm will converge to the right solution? For example, Newton-Raphson or fixed point algorithms will not always converge, and if they do, they might not give the right solution. Theory is needed to see which is the case.

Or if you want to solve systems of linear equations, who says the very solution you get can be trusted? There are many subtle caveats in these cases where you have ill-conditioned systems. How would you know what ill-conditioned even is without theory?

There are many properties of matrices and linear operators that physicists use without thinking, precisely because conscientious mathematicians have meticulously proven many things about them. One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Frankly I think a mixture of naivete and stubbornness is what keeps Arsenic and Lace replying.
 
  • #163
ZombieFeynman said:
One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.

Awesome, I'll be sure to check out this book since it looks quite good.
 
  • #164
micromass said:
Awesome, I'll be sure to check out this book since it looks quite good.

I think it's the best example of a book which can be somewhat rigorous and yet still be firmly grounded in physics.
 
  • #165
ZombieFeynman said:
The only TRULY SOLVED area is essentially the free electron case. The interplay between strong e-e interactions and spin-orbit coupling is a largely unexplored and extremely exciting (if difficult) area of research (and happens to be my current primary area of interest).

What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.
 
  • #166
atyy said:
What's the status of symmetry protected topological order? I'd heard it proposed as the concept for the interacting case.

As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly in transition metal oxide systems.
 
  • #167
I wonder if a certain someone will change his mind after literally every example of pure mathematics being used in the sciences is misunderstood by him.
 
  • #168
ZombieFeynman said:
As far as I'm aware, Xiao-Gang Wen has put out some very nice papers on SPT order in bosonic systems. I must admit, my own focus is quite narrowly in transition metal oxide systems.

I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?
 
  • #169
atyy said:
I googled "topological transition metal oxide" and got:
http://arxiv.org/abs/1212.4162
http://arxiv.org/abs/1109.1297

Is it stuff like that you're working on?

At the risk of giving away more personal information than I'd prefer, those papers are in very close proximity to my interests.
 
  • #170
ZombieFeynman said:
At the risk of giving away more personal information than I'd prefer, those papers are in very close proximity to my interests.

Ah ha ha, that's cool:)
 
  • #171
I see that none of you actually work in computational fields. Earlier in the thread, rubi claimed that industries somehow rely on brave mathematicians to analyze their algorithms ahead of time, giving them theoretical information about convergence so that they can pick the most accurate and timely tools before ever running them, without relying on benchmarks. Without surveying the entirety of all industrial output that relies on computation, all I can say is that in my experience this is grossly inaccurate. The only industry I am intimately familiar with in this regard is drug design, where there is a profusion of methods for estimating binding free energies. As I stated previously, the only way to know which is fastest or most accurate, and under what circumstances, is brute-force benchmarking, because the problem is simply too complex.

The same is true for finite element analysis, from what I have gathered: there is a dominant software package, ANSYS, which implements multiple convergence algorithms such as hp-refinement and XFEM, yet other packages implementing the same algorithms don't actually perform as well. So in both an additional case and in the case where rubi declared I "didn't know what I was talking about," it is not possible, merely by studying the structure of the algorithms with powerful mathematics, to make useful predictions about performance. Of course, he is right that they don't have time to benchmark every option on the table, but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.

One needs only to read through Stone and Goldbart's Mathematics for Physics to see many examples of the things we need to prove to make our operators well behaved.
To learn that Hilbert spaces are complete (a completely irrelevant fact)? To prove Parseval's theorem (stated in 1799 and put on rigorous foundations more than a century later!)? To learn how to force the delta function into the function-space framework using the convoluted machinery of distributions (despite the fact that we can use it perfectly well for its intended purpose without ever worrying about this)?

Here's a challenge: find some actual evidence that (a) the completeness of Hilbert spaces posed a serious question to physicists at some point, (b) doubts about Parseval's theorem posed a serious question to physicists or engineers at some point, and (c) the "inconsistencies" of the delta function produced spurious results or prevented physicists from actually advancing physics.
 
  • #172
Arsenic&Lace said:
I see none of you actually work in computational fields.

Neither do you; you're just an undergrad. Do you really claim to have such a comprehensive grasp of all computational fields that you can say what happens and what doesn't?

it is not possible, merely by studying the structure of the algorithms utilizing powerful mathematics, to make useful predictions about performance

Are you actually serious or just trolling at this point?

To learn that Hilbert spaces are complete(a completely irrelevant fact)?

I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.
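Concretely, here is where completeness enters (a standard textbook sketch, not tied to any particular book): expanding a state in an orthonormal basis produces a Cauchy sequence of partial sums, and completeness is exactly what guarantees that the limit exists as a vector in the space.

```latex
% For an orthonormal basis {e_n} and coefficients c_n, the partial sums
% form a Cauchy sequence whenever the coefficients are square-summable:
\[
  \psi_N = \sum_{n=0}^{N} c_n e_n, \qquad
  \|\psi_M - \psi_N\|^2 = \sum_{n=N+1}^{M} |c_n|^2
  \;\longrightarrow\; 0 \quad (M, N \to \infty),
\]
% and completeness of the Hilbert space is precisely what guarantees
% that the limit is again an honest state vector:
\[
  \psi = \lim_{N\to\infty} \psi_N \in \mathcal{H}
  \quad\text{whenever}\quad \sum_{n=0}^{\infty} |c_n|^2 < \infty .
\]
```

Every time a physicist writes ##\psi = \sum_n c_n e_n## with infinitely many terms, this is the fact being used.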

Here's a challenge: find some actual evidence that (a) the completeness of Hilbert spaces posed a serious question to physicists at some point, (b) doubts about Parseval's theorem posed a serious question to physicists or engineers at some point, and (c) the "inconsistencies" of the delta function produced spurious results or prevented physicists from actually advancing physics.

Ah, the classic http://en.wikipedia.org/wiki/Straw_man
 
  • #173
micromass said:
Are you actually serious or just trolling at this point?
I guess you don't know what a Hilbert space is. It's complete by definition. And its completeness is used in QM all the time, although it is usually just swept under the carpet.
Ah, the classic http://en.wikipedia.org/wiki/Straw_man
Nope, not trolling; you really can't make useful predictions about performance in the real world using pure mathematics.

The argument I was making regarding Hilbert spaces is that completeness is indeed one of their properties, but that it is a useless property to learn about as a physicist and utterly irrelevant to physical theory.

Nope, not a straw man either, or at least not an intentional one. In general, ZF is claiming that the rigorous details found in Stone and Goldbart represent useful mathematical definitions or proven theorems for which the level of rigor presented in S&G is necessary; I contend that this is not the case. For instance, the grotesquely convoluted discussion of the Dirac delta function in the second chapter serves no useful purpose for... anything, really.
 
  • #174
Arsenic&Lace said:
Earlier in the thread, rubi made statements about how industries somehow rely on brave mathematicians to analyze their algorithms ahead of time to give them theoretical information about convergence so that they can pick the most accurate and timely tools, miraculously ahead of actually using them, without needing to rely on benchmarks.
Software like ANSYS just implements algorithms that have been worked out by mathematicians. Of course they rely on rigorous results proved by mathematicians; they even employ mathematicians. You have to be blind not to see this. Additionally, of course they need to benchmark their software: software development consists of more than just implementing algorithms. The greatest performance gain, however, comes from using efficient algorithms. If you use an algorithm of complexity ##O(n^2)## instead of ##O(\log(n))##, then you can optimize as much as you want; it will always be inferior.
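A toy illustration of this point (my own example, contrasting ##O(n)## with ##O(\log n)## rather than ##O(n^2)##): counting comparisons for linear versus binary search over the same sorted data. The asymptotic analysis predicts the gap before you run anything; no micro-optimization of the linear scan can close it.

```python
# Count comparisons made by each search strategy on sorted data.
def linear_steps(xs, target):
    """O(n): scan until the target is found."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_steps(xs, target):
    """O(log n): halve the search interval each iteration."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_steps(data, 999_999))  # 1,000,000 comparisons
print(binary_steps(data, 999_999))  # about 20 comparisons
```

This is the kind of prediction pure theory makes reliably: the constant factors need benchmarking, but the shape of the curve does not.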

...and in the case rubi declared I "didn't know what I was talking about"...
You couldn't even come up with the obvious harmonic oscillator counterexample on your own. I still think you have absolutely no clue what you are talking about.

...but to assume that they rely on pure mathematical theory to avoid this problem is simply untrue; they rely on experiential knowledge.
I never said that they rely purely on mathematics. However, they rely heavily on it.

https://www.youtube.com/watch?v=bIfzyYT1Oho
 
  • #175
Everyone draw your breath in slowly. Ease in, count to 5. Good.

Now exhale slowly, until you feel all the air escape.
________________________________________________

Arsenic, abandon all of your minuscule points of argument. We're debating a broader topic than the ones you're meandering about. You have actual physicists arguing with you and denying what you're saying. You have actual mathematicians arguing with you and denying what you're saying.

All I ask of you now is to reiterate what exactly it is you're arguing against. Because I feel you know it's a lost cause, yet you find your only redemption in asking ever more obscure questions and making unreasonable demands of others, until you'll eventually be asking us to explain how the ontological topological heuristics of a non-orthogonal Cauchy sequence permeating 3-dimensionally upon a four-sided Möbius strip had any relevance or pertinence to the making of Newton's laws of motion.
 
