Do physics books butcher the math?

  • #226
Most of this discussion seems to be about whether we should use only rigorous methods, or only non-rigorous methods. I find both ideas pretty silly. The way I see it, mathematical discovery is a 2-step process:

1. Guess what definitions will be useful and what statements will turn out to be theorems.
2. Write down the definitions and use them to find out which of the conjectures are theorems and which ones are not.

It's of course perfectly OK to use non-rigorous methods in step 1.

A few weeks ago I came across a simple example of how non-rigorous and rigorous methods can work together. A book said that if ##1-ab## is invertible, then so is ##1-ba##, and the inverse is given by ##(1-ba)^{-1}=1+bca##, where ##c=(1-ab)^{-1}##. It's easy to verify (rigorously) that this is true:
$$(1-ba)(1+bca)= 1-ba+bca-babca =1-ba+b(1-ab)ca =1-ba+ba=1.$$ But I still felt confused, because how do you even think of trying ##1+bca##? Another book gave me the answer. You just apply the formula for a geometric series in a naive way, and then rearrange some stuff:
$$(1-ba)^{-1}=\sum_{n=0}^\infty (ba)^n =1+ba+baba+bababa+\cdots =1+b(1+ab+abab+\cdots)a =1+b(1-ab)^{-1}a.$$ These series expansions are only valid when ##\|ab\|<1## and ##\|ba\|<1##, but the first calculation shows that the result holds even when one or both of these conditions fail.
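Just as a sanity check, the identity is easy to test numerically, say with random matrices whose norms are far above 1, so the series itself wouldn't even converge (a rough sketch in Python with NumPy; the scaling factor and seed are arbitrary demo choices):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = 3 * rng.standard_normal((n, n))   # ||ab||, ||ba|| are far above 1, so the geometric series diverges
b = 3 * rng.standard_normal((n, n))

I = np.eye(n)
c = np.linalg.inv(I - a @ b)          # c = (1-ab)^{-1}; assumes 1-ab is invertible (true generically)

lhs = np.linalg.inv(I - b @ a)        # (1-ba)^{-1}, computed directly
rhs = I + b @ c @ a                   # 1 + bca

print(np.allclose(lhs, rhs))          # prints True (up to floating-point error)
[/code]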

It seems very likely that this is how the theorem was discovered. I obviously don't have a problem with this. This isn't "butchering". I think this is both the best way to do math, and the best way to teach it.

This is a very nice post because it shows how totally nonrigorous arguments are useful in math anyway. People who don't have as much experience with math don't realize this. They think that you have a statement that you need to prove and that you need to provide the steps in between. This is of course true, but it is important to have a broad perspective here. You should often "think outside the box": do some nonrigorous things, try to find some concrete examples, etc. The process is often very nonlinear. The ultimate proof might be one line long (like in the post I quoted), but the path to finding that proof might be much longer.

Furthermore, when you discover a new theorem or theory, the way you do it is usually totally different from how it's presented in math books. First you will likely find concrete examples. Then you might find a nonrigorous proof of the theorem. Then you might be able to formalize it. Either way, finding the right axioms and definitions comes at the end and is only useful for presenting your theory. It is of course presented completely the other way around: the axioms and definitions come first, then the main theorems, and then the concrete examples. This is a very neat and efficient way to present things, but don't think that things are actually done this way.

Perhaps it takes 100 mathematicians creating stuff for 1 of them to find something which has some application. It doesn't follow that the other 99 were doing something useless. That is just how the discovery unfolds.

Right, this is another thing that many people don't realize. For every useful discovery, there are hundreds of other papers which are completely useless. You might get the idea from this that mathematicians don't do anything useful, which is a wrong impression. Then again, I doubt that it's any different in physics or chemistry or anywhere else.
 
Last edited:
  • #227
I agree with what micromass explains above. I have defended those views myself, and sometimes I have been frustrated because others don't understand these basic things.

But do you believe that you can defend the physicists' math policy with those points? The physicists have a policy that if some result can be proven both the right way and the wrong way, it will be proven the wrong way, even when that brings no advantages.

I'll give you an example. Sometimes the coordinates of some particle can be written in two alternative ways: either as [itex](x(t),y(t))[/itex], where both [itex]x(t)[/itex] and [itex]y(t)[/itex] are real, or as [itex]z(t)[/itex], where this is a complex variable. It turns out that there are two ways to obtain the same time evolution. You can assume that [itex]x(t)[/itex] and [itex]y(t)[/itex] are independent, or alternatively you can assume that [itex]z(t)[/itex] and [itex]z^*(t)[/itex] are independent. The first way is correct, because [itex]x(t)[/itex] and [itex]y(t)[/itex] in fact are independent coordinates. The second way is incorrect, because [itex]z(t)[/itex] and [itex]z^*(t)[/itex] are not independent coordinates; they uniquely determine each other. However! If you "assume" that the complex coordinate and its conjugate are independent, or if you "treat" them as independent, you can still obtain correct results. There is nothing intuitive in the assumption or the treatment, though, and nobody has any clue what it means for a complex coordinate and its conjugate to be independent.
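To make "you can still obtain correct results" concrete, here is a quick sketch using the 2D harmonic oscillator as the example. The Lagrangian is
[tex]L = \frac{m}{2}(\dot x^2 + \dot y^2) - \frac{k}{2}(x^2+y^2) = \frac{m}{2}\dot z \dot z^* - \frac{k}{2} z z^*, \qquad z = x+iy.[/tex]
Varying [itex]x[/itex] and [itex]y[/itex] independently gives [itex]m\ddot x = -kx[/itex] and [itex]m\ddot y = -ky[/itex]. "Treating" [itex]z[/itex] and [itex]z^*[/itex] as independent and varying with respect to [itex]z^*[/itex] gives [itex]\frac{m}{2}\ddot z + \frac{k}{2}z = 0[/itex], i.e. [itex]m\ddot z = -kz[/itex], which is exactly the same pair of equations packaged into one complex equation. The recipe reproduces the correct result, even though nobody can say what the "independence" assumption behind it is supposed to mean.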

Consider these facts:

A result has been known for more than 100 years.

The result can be proven correctly in an easy and intuitive way.

The result can also be proven incorrectly in a more difficult and incomprehensible way.

The physicists today insist on proving the result incorrectly in a more difficult and incomprehensible way, and they defend the choice by appealing to intuition.

How do you defend that? Are you going to lecture me on how discovery happens differently from proving? Or remind me of the fact that Newton's math wasn't as rigorous as modern math either?
 
  • #228
Right, this is another thing that many people don't realize. For every useful discovery, there are hundreds of other papers which are completely useless. You might get the idea from this that mathematicians don't do anything useful, which is a wrong impression. Then again, I doubt that it's any different in physics or chemistry or anywhere else.

Yes, a point which was made earlier. There is plenty of "useless" physics out there. I think what's important is the *practice* of science and scientific thinking.

Also, I think we develop different sorts of minds depending on which areas we work in, or vice versa. Some mathematicians just aren't cut out for applied work, because it sometimes demands an unbearable amount of approximation and non-rigorous work. People should stick to what they are good at, what they like, and contribute as much to that area as they can.

-Dave K.
 
  • #229
I'm too busy to react with detailed replies, but I miss this thread so I'm going to try and continue it by just asking questions. I'll even try to be humble for once :wink:

Yes, a point which was made earlier. There is plenty of "useless" physics out there. I think what's important is the *practice* of science and scientific thinking.
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

The physicists today insist on proving the result incorrectly in a more difficult and incomprehensible way, and they defend the choice by appealing to intuition.
Why would a physicist insist on proving something incorrectly in a more difficult and incomprehensible way? What constitutes an "incorrect proof"? What advantages do you think physicists think there are to such an approach (even if you think these are not actually advantages!)?

To Arsenic&Lace's point that abstractness and too much generalization are what harms the utility of mathematics, the opposite in fact seems to be remarkably true in cryptography: if in Diffie-Hellman one uses an elliptic curve group instead of the obvious multiplicative group ##\mathbb{Z}_N^*## (where ##N## is a product of two primes close in size), efficiency is actually improved.
Can you elaborate on this?
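For reference, my rough picture of plain Diffie-Hellman over a multiplicative group mod a prime is this (a toy sketch in Python; the modulus and base are arbitrary demo choices, nothing like secure parameters):

[code]
import secrets

p = 2**127 - 1                        # a Mersenne prime, used here only as a toy modulus
g = 3                                 # arbitrary toy base, not a vetted generator

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

A = pow(g, a, p)                      # Alice publishes g^a mod p
B = pow(g, b, p)                      # Bob publishes g^b mod p

shared_alice = pow(B, a, p)           # Alice computes (g^b)^a
shared_bob = pow(A, b, p)             # Bob computes (g^a)^b
assert shared_alice == shared_bob     # both now hold g^(ab) mod p
[/code]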

That reminds me. Arsenic&Lace, please find the mistake in the following post without using rigorous mathematics:
Can you think of a reason why a physicist might be interested in the rules of operators in general? What operators, other than momentum and position, have a commutator which is a multiple of the identity (I'm not saying there aren't any, I just can't think of any off of the top of my head in 5 minutes)? Is it still unclear what will happen if you substitute specific, familiar operators into this argument (i.e. momentum and position)?

The point (that you perhaps willfully miss) is that the generalization from finite to infinite dimensional vector spaces is not an intuitive one and that one should rely on rigorous mathematics to ensure everything is consistent. Position and momentum are not the only canonically conjugate variables one may consider, but since perhaps they are the only ones you have heard of, they must be the only ones anyone must consider.
Can you give an example where a physicist might need to be concerned with the details regarding the transition from finite to infinite dimensional vector spaces? It does not need to be the particular concern you are referring to.

I believe there are other canonically conjugate operators, but could not think of any off the top of my head. As an aside, what are some physically important ones?

EDIT: One extra question for micromass/Zombiefeynman: The most advanced course I have taken in quantum mechanics was a graduate course at the level of Sakurai. My hazy memory of the textbook is that it did not discuss such mathematical questions as what happens when you pass from finite to infinite dimensional vector spaces. Of course this could be totally false, but I honestly have no recollection of such details being discussed. Why would the standard textbook ignore such details, if they are important to physicists?
 
Last edited by a moderator:
  • #230
I'll reply to some of your questions even though they're not all directed at me:

Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

No. However, I see an electron also as a mathematical abstraction of a real-world phenomenon. The current theories of the electron are merely approximations and therefore not necessarily reality.

Can you think of a reason why a physicist might be interested in the rules of operators in general? What operators, other than momentum and position, have a commutator which is a multiple of the identity (I'm not saying there aren't any, I just can't think of any off of the top of my head in 5 minutes)?

Well, you won't be able to answer this question without pure math :tongue: But anyway: http://en.wikipedia.org/wiki/Stone–von_Neumann_theorem

Is it still unclear what will happen if you substitute specific, familiar operators into this argument (i.e. momentum and position)?

Not to me. Is it to you?

Also, what did you think about the paper I linked on mathematical surprises?

Can you give an example where a physicist might need to be concerned with the details regarding the transition from finite to infinite dimensional vector spaces?

I think it is obvious that you want some general rules concerning the spaces and operators you work with. Even for physicists, such general rules should be of immense importance. For example, a physicist also cares about rules like

[tex](g\circ f)^\prime (x) = f^\prime(x)g^\prime(f(x))[/tex]

even if not all functions ##g## and ##f## are physical or important.
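And to connect this to the commutator question above: one place where the finite/infinite dimensional distinction bites immediately is the canonical commutation relation itself. On a finite dimensional space no operators can satisfy ##[A,B]=c\,1## with ##c\neq 0##, because the trace of a commutator always vanishes while ##\operatorname{tr}(c\,1)=cn\neq 0##; so ##[x,p]=i\hbar## already forces you into infinite dimensions, where the usual matrix intuition needs care. A quick sketch in Python:

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

comm = A @ B - B @ A
print(np.trace(comm))               # ~0 up to rounding, since tr(AB) = tr(BA)
print(np.trace(1j * np.eye(n)))     # i*n, nonzero: [A,B] = i*1 has no finite-dimensional solution
[/code]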
 
  • #231
atyy
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

Yes! Even if one goes all the way back to ZFC, ZFC cannot be defined without non-rigorous language.
 
  • #232
Do you believe that a mathematical structure, such as a topological space, is on the same ontological footing as a physical object or phenomenon, such as an electron or the spin quantum Hall effect?

Getting into the ontology of mathematical objects might be beyond the scope of this thread, and I'm not sure I'm qualified to answer that. However, I like the Quine-Putnam indispensability argument (http://plato.stanford.edu/entries/mathphil-indis/), though I'm not completely convinced by it.

My original point, though, wasn't about mathematical structures, but the *practice* of mathematics. Sometimes the purpose of mathematical research is to support other mathematical research. I think you'd have a better grasp of the purpose of this research if you looked at the entire field as an entity, rather than picking out the bits that you don't think are useful.

Not meaning to get personal, and correct me if I'm mistaken, but didn't you say you actually enjoyed mathematics or even preferred it? Why wouldn't that be a reason? It sounds like you are finding reasons not to pursue it.


-Dave K
 
  • #233
ZombieFeynman
I believe there are other canonically conjugate operators, but could not think of any off the top of my head. As an aside, what are some physically important ones?

Sure. Wavefunction phase and particle number. Angular momentum and angular orientation. Vector potential and current density. Electric potential and charge. Energy and time. There are more. But the ones above were probably covered in your "graduate level course." If they weren't, you should be refunded your tuition.
 
Last edited by a moderator:
  • #234
Since this thread has degenerated a lot since the beginning, I am locking it. Any further discussion, on mathematics and physics questions for example, can be dealt with in new threads.
 
