Do physics books butcher the math?

In summary: I know you don't think much of mathematicians and mathematical theory. You are satisfied with knowing you can predict everything. However, you cannot deny that making a theory mathematically rigorous is something humans should attempt. It is in our nature to understand a theory as well as we can, and a nonrigorous theory would not be as well understood as a rigorous one. The rigorization of a theory might not yield any applications, but I think it is wrong to do science only with applications in mind. One should do it to try to understand nature better.
  • #211
jostpuur said:
One of the biggest problems with physicists' bad math is that it attracts the wrong kind of people.

When a scientific community insists that explanations and claims must be logical, it serves as a sieve that filters out those who are capable only of babbling nonsense. The physicists' policy of allowing nonsensical pseudomathematical garbage under the pretense of intuition has had the consequence that the sieve isn't working. The wrong kind of people get into the community and corrupt it from the inside.

Some people defend the bad math with the argument that it hasn't caused any harm. They might demand evidence that some harm has been done. Well, it is the job of future historians to study what harm the modern pseudomathematical culture has produced. I wouldn't be surprised if mankind could already have achieved room-temperature superconductors and fusion energy if only physicists had not declared war on mathematics.

I'm not sure I understand the problem you are talking about. What kind of people in the scientific community are guilty of "babbling nonsense" because of their bad math in particular?
 
  • #212
How about Shannon's proof that the error rate can be made to go to zero over a noisy channel? That came before its applications.
 
  • #213
Arsenic&Lace said:
rubi: So ANSYS hires applied mathematicians! Great. Applied mathematics departments seem to not always require their students to take pure math courses, at least in the random sample I looked at, where some schools had no pure math requirements (that I could see), some schools required one course in eight to be pure math, and other schools required more. As I said, I don't really have a beef with applied mathematicians, but notice how they don't want an MS in PURE mathematics! Also notice how a Computer Science or Engineering major would be completely acceptable: individuals who have probably never seen the inside of a real analysis textbook.
Obviously, you don't know what an applied mathematician is. Rigorous numerical analysis of PDEs is exactly what they do (among other rigorous things that ANSYS wouldn't need). They don't want a pure mathematician, because pure mathematicians don't study numerics.

Here's the difference between you and a usual engineer:
Usual engineer: "I need to solve this PDE. Thankfully, there is lots of math literature on it, so I can quickly find the most suitable method for my needs."
Arsenic&Lace: "I need to solve this PDE. But I just can't accept that mathematics could be useful. So I will rather spend millions of dollars on performing meaningless computations."
If I had to guess, I would say it's not you who would get the job at ANSYS. :tongue:

As for the LAPACK bibliography, well, all of the citations are from computational or applied mathematics journals, or applied books. Maybe there's one that sneaked past me when I skimmed it, but I didn't see, for instance, a citation from the AMS or a journal on pure PDE theory. Perhaps you share Joriss' confusion about my stance. Applied/computational mathematics departments generally seem productive and don't get my goat. There are varying levels of rigor in comp/applied departments, so the jury is still out as to whether rigor is prevalent or important in FEA.
So you don't consider estimates such as those found in the LAPACK manual rigorous? Again, you don't know what you're talking about. They cite the following paper: http://www-sop.inria.fr/nachos/phyleas/docs/cea-edf-inria09/MArioli_pap1.pdf
Paper said:
[...] We have shown that, when the iterative refinement is converging, it is possible and inexpensive to guarantee solutions of sparse linear systems that are exact solutions of a nearby system whose matrix has the same sparsity structure. Thus we have answered the open problem posed by Duff, Erisman, and Reid (1986, p. 276) concerning obtaining bounded perturbations while maintaining sparsity. If the equations arise from the discretization of a partial differential equation, then a componentwise tiny error should indicate that the solution obtained is that of a neighbouring partial differential equation, a conclusion that would not be available if classical error bounds were being used. [...]
So LAPACK clearly relies on rigorous estimates.
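For readers unfamiliar with the technique the quoted paper analyzes: iterative refinement, in its plain textbook form, looks roughly like the sketch below (this is not LAPACK's actual routine, just the basic scheme; the rigorous part is precisely the error analysis quoted above, which bounds what such a loop can guarantee).

[code]
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Bare-bones iterative refinement for A x = b: factor once, then repeatedly
# solve for a correction computed from the residual. A sketch of the textbook
# scheme the quoted error analysis is about, not LAPACK's implementation.
def refine(A, b, iters=5):
    fact = lu_factor(A)                  # LU factorization, reused below
    x = lu_solve(fact, b)                # initial solve
    for _ in range(iters):
        r = b - A @ x                    # residual (ideally in higher precision)
        x = x + lu_solve(fact, r)        # corrected solution
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(np.allclose(A @ refine(A, b), b))  # True
[/code]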
 
  • #214
Arsenic&Lace said:
What physics does this model? Physics on a lattice? I remember reading an interesting paper on lattice models of spacetime, where strange things happened to the uncertainty principle because of the lattice. I will post the paper if you are interested.

But I contend that unless discrete position/momentum operators actually model something interesting, this problem would never cross my desk. If it did, it would probably go the way of the referenced paper and work out just fine, but not in the framework you described, because we must make different physical assumptions when working on a lattice.

The point (that you perhaps willfully miss) is that the generalization from finite to infinite dimensional vector spaces is not an intuitive one and that one should rely on rigorous mathematics to ensure everything is consistent. Position and momentum are not the only canonically conjugate variables one may consider, but since perhaps they are the only ones you have heard of, they must be the only ones anyone must consider. My point with that example was to give you a counterexample of where finite dimensional vector spaces do not compare with infinite dimensional ones in an intuitive way.

But, of course, you cherry pick your way around my post (as well as others) and weasel your way into arguing a position that you must believe suits your viewpoint. I think that you know you are wrong but refuse to lose face by admitting it. Your arguments have changed multiple times as you shift your stated objections around. Rather than being open to any viewpoint and letting the data guide you, you have your ideas set in stone and evidence be damned to the contrary. This is an intellectually dishonest way of arguing. You also have a prickly way of responding to others which is extremely off-putting. I hope you don't interact with your colleagues in the way you have responded to others in this thread.
 
  • #215
ZombieFeynman said:
The point (that you perhaps willfully miss) is that the generalization from finite to infinite dimensional vector spaces is not an intuitive one and that one should rely on rigorous mathematics to ensure everything is consistent. Position and momentum are not the only canonically conjugate variables one may consider, but since perhaps they are the only ones you have heard of, they must be the only ones anyone must consider. My point with that example was to give you a counterexample of where finite dimensional vector spaces do not compare with infinite dimensional ones in an intuitive way.

That reminds me. Arsenic&Lace, please find the mistake in the following post without using rigorous mathematics:

George Jones said:
Suppose [itex]A[/itex] is an observable, i.e., a self-adjoint operator, with real eigenvalue [itex]a[/itex] and normalized eigenket [itex] \left| a \right>[/itex]. In other words,

[tex]A \left| a \right> = a \left| a \right>, \hspace{.5 in} \left< a | a \right> = 1.[/tex]

Suppose further that [itex]A[/itex] and [itex]B[/itex] are canonically conjugate observables, so

[tex] \left[ A , B \right] = i \hbar I,[/tex]

where [itex]I[/itex] is the identity operator. Compute, with respect to [itex]\left| a \right>[/itex], the matrix elements of this equation divided by [itex]i \hbar[/itex]:

[tex]
\begin{equation*}
\begin{split}
\frac{1}{i \hbar} \left< a | \left[ A , B \right] | a \right> &= \left< a | I | a \right>\\
\frac{1}{i \hbar} \left( \left< a | AB | a \right> - \left< a | BA | a \right> \right) &= \left< a | a \right>.
\end{split}
\end{equation*}
[/tex]

In the first term, let [itex]A[/itex] act on the bra; in the second, let [itex]A[/itex] act on the ket:

[tex]\frac{1}{i \hbar} \left( a \left< a | B | a \right> - a \left< a | B | a \right> \right) = \left< a | a \right>.[/tex]

Thus,

[tex]0 = 1.[/tex]

This is my favourite "proof" of the well-known equation [itex]0 = 1[/itex].

What gives?

A nice collection of such subtleties can be found here: http://arxiv.org/pdf/quant-ph/9907069v2.pdf
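As a numerical aside on the finite/infinite-dimensional point (a small sketch with [itex]\hbar = 1[/itex] and an arbitrary basis size; it is not the resolution of the puzzle above): in any finite dimension no pair of matrices can satisfy [itex][A,B] = i \hbar I[/itex] exactly, because [itex]\operatorname{tr}(AB - BA) = 0[/itex] while [itex]\operatorname{tr}(i \hbar I) = i \hbar n \neq 0[/itex]. Truncated oscillator position/momentum matrices show where the relation has to give way:

[code]
import numpy as np

# Position/momentum matrices truncated to an n-dimensional oscillator basis
# (units with hbar = 1). In infinite dimensions [X, P] = i*I; any finite
# truncation must fail somewhere, since tr(XP - PX) = 0 but tr(i*I) = i*n.
n = 8
a = np.diag(np.sqrt(np.arange(1, n)), k=1)   # truncated annihilation operator
X = (a + a.conj().T) / np.sqrt(2)
P = -1j * (a - a.conj().T) / np.sqrt(2)

C = X @ P - P @ X
print(np.round(np.diag(C), 3))   # ~ 1j on every entry except the last one
print(np.round(np.trace(C), 3))  # ~ 0, as the trace identity demands
[/code]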
 
  • #216
jostpuur said:
One of the biggest problems with physicists' bad math is that it attracts the wrong kind of people.

When a scientific community insists that explanations and claims must be logical, it serves as a sieve that filters out those who are capable only of babbling nonsense. The physicists' policy of allowing nonsensical pseudomathematical garbage under the pretense of intuition has had the consequence that the sieve isn't working. The wrong kind of people get into the community and corrupt it from the inside.

Some people defend the bad math with the argument that it hasn't caused any harm. They might demand evidence that some harm has been done. Well, it is the job of future historians to study what harm the modern pseudomathematical culture has produced. I wouldn't be surprised if mankind could already have achieved room-temperature superconductors and fusion energy if only physicists had not declared war on mathematics.

I wouldn't be surprised if, had we insisted on mathematical rigor in every physicist's results, we would still be stuck back in the Middle Ages.

See how unverifiable statements work? You make one, and I can make an opposite one. :D
 
  • #217
Matterwave said:
I wouldn't be surprised if, had we insisted on mathematical rigor in every physicist's results, we would still be stuck back in the Middle Ages.

See how unverifiable statements work? You make one, and I can make an opposite one. :D

And somehow, I think yours is closer to the truth.
 
  • #218
I know much less math and physics than most on this thread, but seeing as no one has mentioned cryptography, which is well known for using number theory and abstract algebra, I thought I'd chime in. It's not quite physics, but here mathematics is used that was previously deemed perfectly useless (as in Hardy's A Mathematician's Apology) and probably would have been a target of Arsenic&Lace's scorn if it weren't central to cryptographic algorithms like RSA or Diffie-Hellman. To Arsenic&Lace's point that abstractness and too much generalization are what harm the utility of mathematics, the opposite in fact seems to be remarkably true in cryptography: if in Diffie-Hellman one uses an elliptic curve group instead of the obvious multiplicative group [itex] \mathbb{Z}_N^* [/itex] (where [itex] N [/itex] is a product of two primes close in size), efficiency is actually improved.
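For concreteness, here is a toy version of the classic exchange (a sketch only: a small prime modulus rather than the composite modulus mentioned above, toy-sized parameters, and nothing resembling a secure implementation):

[code]
import secrets

# Toy Diffie-Hellman over the multiplicative group mod a prime. The elliptic-
# curve variant (ECDH) performs the same exchange with point multiplication in
# a curve group, reaching comparable security with much smaller keys.
p = 2**61 - 1                        # small prime modulus (illustration only)
g = 5                                # base element, fine for a toy example

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice's public value
B = pow(g, b, p)                     # Bob's public value

print(pow(B, a, p) == pow(A, b, p))  # True: both sides derive g**(a*b) mod p
[/code]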
 
  • #219
Matterwave and micromass, you appear to be unaware of the level of badness in physicists' math. It is extremely common for physicists to merely babble technical nonsense with no interest in the truth values of their statements, and such activity belongs to the same category as gender studies and other postmodern garbage. That means that physicists are guilty of the same kind of thing that was the target of the Sokal hoax, for example.

The analogy to medieval times cannot be anything other than this:

"modern physicists' math" is like "astrology"

and

"modern mathematicians' math" is like "medieval physicists' math".
 
  • #220
Most of this discussion seems to be about whether we should use only rigorous methods, or only non-rigorous methods. I find both ideas pretty silly. The way I see it, mathematical discovery is a 2-step process:

1. Guess what definitions will be useful and what statements will turn out to be theorems.
2. Write down the definitions and use them to find out which of the conjectures are theorems and which ones are not.

It's of course perfectly OK to use non-rigorous methods in step 1.

I came across a simple example of how non-rigorous and rigorous methods can work together a few weeks ago. A book said that if 1-ab is invertible, then so is 1-ba, and the inverse is given by ##(1-ba)^{-1}=1+bca##, where ##c=(1-ab)^{-1}##. It's easy to verify (rigorously) that this is true:
$$(1-ba)(1+bca)= 1-ba+bca-babca =1-ba+b(1-ab)ca =1-ba+ba=1.$$ But I still felt confused, because how do you even think of trying 1+bca? Another book gave me the answer. You just apply the formula for a geometric series in a naive way, and then rearrange some stuff:
$$(1-ba)^{-1}=\sum_{n=0}^\infty (ba)^n =1+ba+baba+bababa+\cdots =1+b(1+ab+abab+\cdots)a =1+b(1-ab)^{-1}a.$$ These two series expansions are valid when ##\|ab\|<1##, and ##\|ba\|<1##, but the first calculation we did shows that the result holds even when one or both of these conditions are not satisfied.

It seems very likely that this is how the theorem was discovered. I obviously don't have a problem with this. This isn't "butchering". I think this is both the best way to do math, and the best way to teach it.
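A quick numerical spot check (a sanity check, not a proof) also shows the identity holding well outside the region where the series converge:

[code]
import numpy as np

# Spot check of (I - BA)^{-1} = I + B (I - AB)^{-1} A with random matrices,
# deliberately scaled so that the geometric series for (I - AB)^{-1} would
# not converge.
rng = np.random.default_rng(0)
A = 2.0 * rng.normal(size=(4, 3))
B = 2.0 * rng.normal(size=(3, 4))

lhs = np.linalg.inv(np.eye(3) - B @ A)
rhs = np.eye(3) + B @ np.linalg.inv(np.eye(4) - A @ B) @ A

print(np.linalg.norm(A @ B, 2) > 1)  # True: far from the convergence region
print(np.allclose(lhs, rhs))         # True: the identity still holds
[/code]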
 
  • #221
jostpuur said:
Matterwave and micromass, you appear to be unaware of the level of badness in physicists' math. It is extremely common for physicists to merely babble technical nonsense with no interest in the truth values of their statements, and such activity belongs to the same category as gender studies and other postmodern garbage. That means that physicists are guilty of the same kind of thing that was the target of the Sokal hoax, for example.

The analogy to medieval times cannot be anything other than this:

"modern physicists' math" is like "astrology"

and

"modern mathematicians' math" is like "medieval physicists' math".
While I think that the analogy is exaggerated (because of what I said in my previous post), I agree with what you said at the start. I've seen textbooks with definitions and "theorems" that are impossible to understand because the presentation is too dumbed down. I've seen articles that are pretty much just word salad. (I'm thinking specifically about one of those articles about a "disproof" of Bell's theorem. It probably wasn't published, but it was written by a guy with physics training). I've seen articles that are still being referenced after 40 years, with some very weak arguments in them. The long and very confused discussion about the so-called PBR "theorem" we had at PF would have been a lot shorter and much less confused if the authors had been able to actually prove a theorem.
 
  • #222
It is true that non-rigorous, intuitive stuff is important, but it is also true that physicists have started to use the argument of intuitiveness as camouflage to cover anything. People have failed to understand that not everything that is advertised as intuitive is intuitive in the end.

Some of the stuff with infinitesimals, Lie groups, independent complex conjugates, undefined Grassmann numbers and so on is so amazing that it is only a matter of time before a Sokal hoax hits the theoretical physicists themselves.

Btw, I just recalled a down-to-earth example: I remember an incident where I (a mathematician, IMO) stated that if the electric current between two points of different electric potential is zero, then the resistance between the points is infinite. A practising physicist (with a poor understanding of math, IMO) complained that I didn't know what I was doing because you can't divide by zero. The pattern is clear: a better understanding of mathematics leads to a better ability to apply the math in an intuitive way. A worse understanding leads to the opposite.
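To spell out the arithmetic behind that statement (reading Ohm's law [itex]R = V/I[/itex] as a limit):

[tex]R \;=\; \lim_{I \to 0^{+}} \frac{V}{I} \;=\; \infty \qquad \text{for fixed } V \neq 0,[/tex]

so "infinite resistance" is shorthand for this limiting behaviour, not a literal division by zero.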

In fact, the modern belief that a worse understanding of math leads to a better ability to apply it is so completely without evidence that it should be considered a pseudoscientific belief.
 
  • #223
Arsenic&Lace said:
It doesn't. I'm fine with it being art.

I think this is key to your particular mental block. Pure mathematics isn't just about creating art, like "isn't this pretty." It is about creating new connections and contributing to the field as a whole. As I've mentioned before, some mathematics has applications WITHIN mathematics. As in "we've found that solving problem X is equivalent to solving problem Y, and the method used to solve X is better." I'm sure more mathematically literate people than me can come up with a lot of examples.

As mentioned, sometimes physical or practical applications for mathematics are discovered well after the mathematics (as in the case of number theory). But that isn't even the point. The point is to have a large number of people creating mathematics, filling in all the little niches, and making it freely available to each other as well as to people outside the field.

Perhaps it takes 100 mathematicians creating stuff for 1 of them to find something which has some application. It doesn't follow that the other 99 were doing something useless. That is just how the discovery unfolds.

Not to mention the important relationship between pure math research and education, and the very important discipline of critical analysis and rigor.

-Dave K
 
  • #224
This is off topic for the thread, but I find it peculiar that I had been wondering about the same result as Fredrik some time ago (somewhere during the past 12 months; I don't remember precisely when anymore). I had read an article where the Woodbury matrix identity (Wikipedia) had been used without being explicitly stated. After lengthy struggles I eventually discovered a way to prove the result using a series like the one Fredrik showed. For a moment I thought that I had discovered a mistake in the article, because the authors had not taken into account the conditions needed for the series to converge.

Only after contemplating it further did I realize that the convergence of the series was not needed, because the result could be extended using complex analytic continuation. That means we fix the other elements of the matrices and consider some element [itex]a_{ij}[/itex] as a free complex variable. If both sides of an equation are complex analytic functions of [itex]a_{ij}[/itex], then equality in some region extends to equality everywhere else too.

Only after going through more of the cited papers did I find out that the authors had simply considered the Woodbury matrix identity "obvious" and "well-known". Then I had no other option but to abandon my own slightly unnecessarily complicated proof.
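For reference, the identity in question is commonly stated as

[tex](A + UCV)^{-1} \;=\; A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1},[/tex]

valid whenever the indicated inverses exist; the geometric-series manipulation Fredrik showed is the standard heuristic route to results of this type.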
 
  • #225
jostpuur said:
Matterwave and micromass, you appear to be unaware of the level of badness in physicists' math. It is extremely common for physicists to merely babble technical nonsense with no interest in the truth values of their statements, and such activity belongs to the same category as gender studies and other postmodern garbage. That means that physicists are guilty of the same kind of thing that was the target of the Sokal hoax, for example.

While I agree with the premise of your statements, and particularly your comment regarding gender studies, I think you're about as extreme as Arsenic and Lace, but on the opposite side of the spectrum. The only time I've ever had a problem with the babbling of technical nonsense is in QFT books like Maggiore and Schwartz.

I'll say it again: there's a difference between being precise with mathematical statements in clear prose and actually knowing and applying pure math to physics. The latter is absolutely useless to a physicist. You can idle by learning all you want about the mathematical structure of physical theories, but that doesn't mean you know anything about the actual physics. It is hard to pretend that a knowledge of the math behind physics equates to a deep knowledge of the physics itself. Intuition is far more important. At the end of the day physicists aren't going to care how much time you spent learning all the mathematical subtleties of a theory. That is all academic. They want to know if you can actually solve physics problems, and a knowledge of pure math is quite useless for that. Mind you, I mean actual physics and not nonsense like QM philosophy.

Is pure math necessary? Obviously yes. Does a physicist need to know it? Certainly not. My research advisor is nothing short of brilliant and his physical intuition blows my mind. He always points out details to me just by intuition that I later find to be true by direct calculation. He isn't hiding behind intuition; he just knows how to use it well. If he wanted, he could easily learn whatever pure math is relevant and verify everything meticulously. But who the hell has the time or even the impetus for that? It is pointless. Developing a finely tuned intuition for physics calculations is much harder than just reading books, learning whatever pure math, and setting about formulating everything you read or do in a rigorous framework. Not to mention, again, that physicists really aren't going to care if you can do the latter; it will not help you solve publishable problems because you aren't doing physics.

You really are Arsenic and Lace's devil's advocate taken to the extreme, not just based on your comments here but based on your other threads. You seem to think that knowing the mathematical structure of a physical theory equates to knowing the physics. It would be fun to pit you head to head with Arsenic and Lace, a duel of two extremes if you will.
 
  • #226
Fredrik said:
Most of this discussion seems to be about whether we should use only rigorous methods, or only non-rigorous methods. I find both ideas pretty silly. The way I see it, mathematical discovery is a 2-step process:

1. Guess what definitions will be useful and what statements will turn out to be theorems.
2. Write down the definitions and use them to find out which of the conjectures are theorems and which ones are not.

It's of course perfectly OK to use non-rigorous methods in step 1.

I came across a simple example of how non-rigorous and rigorous methods can work together a few weeks ago. A book said that if 1-ab is invertible, then so is 1-ba, and the inverse is given by ##(1-ba)^{-1}=1+bca##, where ##c=(1-ab)^{-1}##. It's easy to verify (rigorously) that this is true:
$$(1-ba)(1+bca)= 1-ba+bca-babca =1-ba+b(1-ab)ca =1-ba+ba=1.$$ But I still felt confused, because how do you even think of trying 1+bca? Another book gave me the answer. You just apply the formula for a geometric series in a naive way, and then rearrange some stuff:
$$(1-ba)^{-1}=\sum_{n=0}^\infty (ba)^n =1+ba+baba+bababa+\cdots =1+b(1+ab+abab+\cdots)a =1+b(1-ab)^{-1}a.$$ These two series expansions are valid when ##\|ab\|<1##, and ##\|ba\|<1##, but the first calculation we did shows that the result holds even when one or both of these conditions are not satisfied.

It seems very likely that this is how the theorem was discovered. I obviously don't have a problem with this. This isn't "butchering". I think this is both the best way to do math, and the best way to teach it.

This is a very nice post because it shows how totally nonrigorous arguments are useful in math anyway. People who don't have as much experience with math don't realize this. They think that you have a statement that you need to prove and you need to provide the steps in between. This is of course true, but it is important to have a broad perspective here. You should often "think outside the box": do some nonrigorous things, try to find some concrete examples, etc. The process is often very nonlinear. The ultimate proof might be one line long (like in the post I quoted), but the steps to find the proof might be a lot longer.

Furthermore, when you discover a new theorem or theory, the way you do it is usually totally different from how it's presented in math books. First you will likely find concrete examples. Then you might find a nonrigorous proof of the theorem. Then you might be able to formalize it. Either way, finding the right axioms and definitions comes at the end and is only useful for presenting your theory. It is presented completely the other way around, of course: the axioms and definitions come first, then the main theorem, and then the concrete examples. This is a very neat and efficient approach, but don't think that things are actually done this way.

dkotschessaa said:
Perhaps it takes 100 mathematicians creating stuff for 1 of them to find something which has some application. It doesn't follow that the other 99 were doing something useless. That is just how the discovery unfolds.

Right, this is another thing that many people don't realize. For every useful discovery, there are hundreds of other papers which are completely useless. You might get the idea then that mathematicians don't do anything useful, which is a wrong impression. Then again, I doubt that it's any different in physics or chemistry or anywhere else.
 
  • #227
I agree with what micromass explains above. I have defended those views myself, and sometimes I have been frustrated because others don't understand these basic things.

But do you believe that you can defend the physicists' math policy with those points? The physicists have a policy that if some result can be proven both the right way and the wrong way, it will be proven the wrong way even if that doesn't come with any advantages.

I'll give you an example: sometimes the coordinates of some particle can be written in two alternative ways, either as [itex](x(t),y(t))[/itex], where both [itex]x(t)[/itex] and [itex]y(t)[/itex] are real, or as [itex]z(t)[/itex], where this is a complex variable. It turns out that there are two ways to obtain the same time evolution. You can assume that [itex]x(t)[/itex] and [itex]y(t)[/itex] are independent, or alternatively you can assume that [itex]z(t)[/itex] and [itex]z^*(t)[/itex] are independent. The first way is correct, because [itex]x(t)[/itex] and [itex]y(t)[/itex] in fact are independent coordinates. The second way is incorrect, because [itex]z(t)[/itex] and [itex]z^*(t)[/itex] are not independent coordinates; they uniquely determine each other. However! If you "assume" that the complex coordinate and its conjugate are independent, or if you "treat" them as independent, you can still obtain correct results. There is nothing intuitive in the assumption or the treatment, though, and nobody has any clue what it means for a complex coordinate and its conjugate to be independent.
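For reference, the bookkeeping that relates the two variations goes through the Wirtinger derivatives (a standard identity, quoted here to make the comparison concrete):

[tex]\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right), \qquad \frac{\partial}{\partial z^{*}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right),[/tex]

so for a real-valued action [itex]S[/itex] the single condition [itex]\partial S/\partial z^{*} = 0[/itex] is equivalent to the pair [itex]\partial S/\partial x = \partial S/\partial y = 0[/itex]. That equivalence is what the "treat [itex]z[/itex] and [itex]z^{*}[/itex] as independent" recipe reproduces.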

Consider these facts:

A result has been known for more than 100 years.

The result can be proven correctly in an easy and intuitive way.

The result can also be proven incorrectly in a more difficult and incomprehensible way.

The physicists today insist on proving the result incorrectly in a more difficult and incomprehensible way, and they defend the choice with the argument of intuition.

How do you defend that? Are you going to lecture me on how discovery happens in a different way than proving? Or remind me of the fact that Newton's math wasn't as rigorous as modern math either?
 
  • #228
micromass said:
Right, this is another thing that many people don't realize. For every useful discovery, there are hundreds of other papers which are completely useless. You might get the idea then that mathematicians don't do anything useful, which is a wrong impression. Then again, I doubt that it's any different in physics or chemistry or anywhere else.

Yes, a point which was made earlier. There is plenty of "useless" physics out there. I think what's important is the *practice* of science and scientific thinking.

Also, I think we develop different sorts of minds depending on which areas we work in, or vice versa. Some mathematicians just aren't cut out for applied work, because it sometimes demands an unbearable amount of approximation and non-rigorous work. People should stick to what they are good at, what they like, and contribute as much to that area as they can.

-Dave K.
 
  • #229
I'm too busy to react with detailed replies, but I miss this thread, so I'm going to try to continue it by just asking questions. I'll even try to be humble for once :wink:

dkotschessaa said:
Yes, a point which was made earlier. There is plenty of "useless" physics out there. I think what's important is the *practice* of science and scientific thinking.
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

The physicists today insist on proving the result incorrectly in a more difficult and incomprehensible way, and they defend the choice with the argument of intuition.
Why would a physicist insist on proving something incorrectly in a more difficult and incomprehensible way? What constitutes an "incorrect proof"? What advantages do you think physicists think there are to such an approach (even if you think these are not actually advantages!)?

To Arsenic&Lace's point that abstractness and too much generalization are what harm the utility of mathematics, the opposite in fact seems to be remarkably true in cryptography: if in Diffie-Hellman one uses an elliptic curve group instead of the obvious multiplicative group [itex]\mathbb{Z}_N^*[/itex] (where [itex]N[/itex] is a product of two primes close in size), efficiency is actually improved.
Can you elaborate on this?

That reminds me. Arsenic&Lace, please find the mistake in the following post without using rigorous mathematics:
Can you think of a reason why a physicist might be interested in the rules of operators in general? What operators, other than momentum and position, have a commutator which is a multiple of the identity (I'm not saying there aren't any, I just can't think of any off of the top of my head in 5 minutes)? Is it still unclear what will happen if you substitute specific, familiar operators into this argument (i.e. momentum and position)?

The point (that you perhaps willfully miss) is that the generalization from finite to infinite dimensional vector spaces is not an intuitive one and that one should rely on rigorous mathematics to ensure everything is consistent. Position and momentum are not the only canonically conjugate variables one may consider, but since perhaps they are the only ones you have heard of, they must be the only ones anyone must consider.
Can you give an example where a physicist might need to be concerned with the details regarding the transition from finite to infinite dimensional vector spaces? It does not need to be the particular concern you are referring to.

I believe there are other canonically conjugate operators, but could not think of any off the top of my head. As an aside, what are some physically important ones?

EDIT: One extra question for micromass/Zombiefeynman: The most advanced course I have taken in quantum mechanics was a graduate course at the level of Sakurai. My hazy memory of the textbook is that it did not discuss such mathematical questions as what happens when you bounce from finite to infinite dimensional vector spaces. Of course this could be totally false but I honestly have no recollection of such details being discussed. Why would the standard textbook ignore such details, if they are important to physicists?
 
  • #230
I'll reply to some of your questions even though they're not all directed at me:

Arsenic&Lace said:
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

No. However, I see an electron also as a mathematical abstraction of a real-world phenomenon. The current theories of the electron are merely approximations and therefore not necessarily reality.

Can you think of a reason why a physicist might be interested in the rules of operators in general? What operators, other than momentum and position, have a commutator which is a multiple of the identity (I'm not saying there aren't any, I just can't think of any off of the top of my head in 5 minutes)?

Well, you won't be able to answer this question without pure math :tongue: But anyway: http://en.wikipedia.org/wiki/Stone–von_Neumann_theorem

Is it still unclear what will happen if you substitute specific, familiar operators into this argument (i.e. momentum and position)?

Not to me. Is it to you?

Also, what did you think about the paper I linked on mathematical surprises?

Can you give an example where a physicist might need to be concerned with the details regarding the transition from finite to infinite dimensional vector spaces?

I think it is obvious that you want some general rules concerning the spaces and operators you work with. Even for physicists, such general rules should be of immense importance. For example, a physicist also cares about rules like

[tex](g\circ f)^\prime (x) = f^\prime(x)g^\prime(f(x))[/tex]

even if not all functions ##g## and ##f## are physical or important.
 
  • #231
Arsenic&Lace said:
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

Yes! Even if one goes all the way back to ZFC, ZFC cannot be defined without non-rigorous language.
 
  • #232
Arsenic&Lace said:
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

Getting into the ontology of mathematical objects might be beyond the scope of this thread, and I'm not sure I'm qualified to answer that. However, I like the Quine-Putnam indispensability argument (http://plato.stanford.edu/entries/mathphil-indis/), though I'm not completely convinced by it.

My original point, though, wasn't about mathematical structures, but the *practice* of mathematics. Sometimes the purpose of mathematical research is to support other mathematical research. I think you'd have a better grasp of the purpose of this research if you looked at the entire field as an entity, rather than picking out the bits that you don't think are useful.

Not meaning to get personal, and correct me if I'm mistaken, but didn't you say you actually enjoyed mathematics, or even preferred it? Why wouldn't that be a reason? It sounds like you are finding reasons not to pursue it.


-Dave K
 
  • #233
Arsenic&Lace said:
I believe there are other canonically conjugate operators, but could not think of any off the top of my head. As an aside, what are some physically important ones?

Sure. Wavefunction phase and particle number. Angular momentum and angular orientation. Vector potential and current density. Electric potential and charge. Energy and time. There are more. But the ones above were probably covered in your "graduate level course." If they weren't, you should be refunded your tuition.
 
  • #234
Since this thread has degenerated a lot since the beginning, I am locking it. Any further discussion on, for example, mathematics and physics questions can be dealt with in new threads.
 
