Why are Physicists so informal with mathematics?

  • Thread starter: TurtleKrampus
  • Tags: Mathematics, Physics
AI Thread Summary
The discussion highlights frustrations with the lack of mathematical rigor in physics courses, particularly for students with strong mathematical backgrounds. Participants express disappointment over professors' explanations that seem overly simplistic or incorrect, such as misrepresenting mathematical concepts like unordered sets. There is a debate about the necessity of rigor in physics versus mathematics, with some arguing that practical applications can be prioritized over formal proofs. The conversation also touches on the challenges of understanding concepts like time in different reference frames, emphasizing the operational nature of physics. Overall, the thread reflects a tension between the expectations of mathematically rigorous training and the realities of physics education.
  • #51
TurtleKrampus said:
Given two inertial reference frames (both isomorphic to ##\mathbb R^3##) ##R_1, R_2##, there exist some ##A \in SO(3)## and ##v \in C^2(\mathbb R, \mathbb R^3)## with ##v'' = 0## such that the projection of the position of some point mass ##r## onto ##R_2##, which I'll denote ##\pi_2(r)##, is obtained from its projection onto ##R_1##, which I'll denote ##\pi_1(r)##, in the following way:
$$\pi_2(r) = A\pi_1(r) + v$$
I believe that this extends to time being invariant across the reference frames, since our transformation is (in terms of ##\pi_1(r)##) time invariant.
There are some minor simplifications in what I wrote, for example we could take the domain of the translation to be an interval, but I wrote it this way for simplicity. I don't know if it's possible for reference frames to rotate relative to each other through time, but if so we just replace ##A## with a continuous function into ##SO(3)##. This is a first thought on how I'd go about writing things, there are probably many errors here.
I would expect that after spelling out this unwieldy thing, you would appreciate why physicists just write t=t'.
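For concreteness, here is a minimal sketch in code of what the quoted transformation amounts to; the rotation ##A## and the drift velocity below are made-up illustrative values, not anything fixed by the thread:

```python
import numpy as np

# Sketch of pi_2(r) = A pi_1(r) + v(t), with A a fixed rotation and
# v(t) = v0 + u*t affine in time (so v'' = 0): a rotation plus a uniform drift.
def make_frame_change(A, v0, u):
    """Return a map (position in frame 1, time) -> position in frame 2."""
    def pi2(r1, t):
        return A @ r1 + v0 + u * t
    return pi2

# Illustrative values: frames rotated 90 degrees about z, drifting at 1 m/s along x.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
frame_change = make_frame_change(A, v0=np.zeros(3), u=np.array([1.0, 0.0, 0.0]))
print(frame_change(np.array([1.0, 0.0, 0.0]), t=2.0))  # position in frame 2 at t = 2
# The time argument t is the same number in both frames -- that is the t = t'.
```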
 
  • Haha
  • Like
Likes haushofer, Rive and PeroK
  • #52
TurtleKrampus said:
Yes, t = t' makes no sense to me
But you said it means time is absolute. So you understood it. So how does it make no sense? I don't even think this is a math vs physics rigor thing. Have you never seen in pure maths x being the coordinates in one system, x' being the coordinates in another?
 
  • Like
Likes martinbn
  • #53
martinbn said:
No, why would it be lack of rigor?
Because we are using approximate solutions and an approximate solution is not rigorous unless you know an actual solution exists, and you have some kind of an error estimate for your approximate solution.
 
  • Like
Likes dextercioby
  • #54
andresB said:
Like the very slowly converging analytical solution for the three-body problem?
That answers the question of whether or not we can find an analytical solution. It isn't meant to be useful for approximations.
 
  • #55
AndreasC said:
I would expect that after spelling out this unwieldy thing, you would appreciate why physicists just write t=t'
Preferences, I suppose.
 
  • #56
TurtleKrampus said:
You didn't have to discredit financial economists... What approach do you suggest they should use?
Making a model is a very good way to describe the assumptions the author used to study something.
Coming up with detailed mathematical models is a waste of time when your model is completely off base. In many social sciences, it is recognized that mathematical models have very limited applicability so they don't use them. Economists want to pretend they are somehow different. They are not, they study social constructions, and social constructions are primarily controlled by social forces which are not well described by such models. That's not to say they have no applicability whatsoever, but it feels like a lot of it is just a thinly veiled game. To put it another way, Alexander the Great didn't study knot theory, he simply cut the Gordian knot.

Physics is not quite the same, because there quantitative mathematical models are very applicable. However, the full weight of MODERN mathematical formalism would only weigh down most areas of physics. That's not to say there are not areas in which it is profitable to work that way, but for others it is just a burden, either because the formalism clouds the intuitive ideas, because the formalism is unnecessarily complicated or even non-existent as of yet, or because the model is already kinda "wrong", and rigorous conclusions from an unsound model are still unsound.

In my opinion physics does need greater unity with mathematics, which does mean some linguistic changes should happen in physics. But that doesn't mean going to the other extreme, and it also works both ways, i.e. mathematics should steer a bit closer to physics as well. After all, that is how it developed historically, and still does to a certain extent. The modern standards of "rigor" would never have happened if there weren't the older, far less rigorous "classical" mathematics, and those mathematics would never have developed had there not arisen many questions from physics, questions which in turn could never have arisen had physics waited for the final link in the chain! This is not just a historical artifact, it still happens today. Rigorous mathematical foundations of QFT is something that is important and interesting and close to my heart, but it is still a work in progress which still has many gaping holes. Physicists could never have waited to use it until it somehow developed fully formed, skipping a bunch of steps in between.
 
  • Like
Likes dextercioby
  • #57
AndreasC said:
Because we are using approximate solutions and an approximate solution is not rigorous unless you know an actual solution exists, and you have some kind of an error estimate for your approximate solution.
Maybe people need to say what they mean by rigorous first. Here is an example: you have an equation, someone proves a uniqueness theorem, but no one has proven existence yet. Do you think that the uniqueness theorem is not rigorous?
 
  • #58
If you are complaining about physicists not doing rigorous math, what should I complain about, having to work every day with biochemists? It's your job as a mathematician to bring physical ideas into Bourbaki style, not that of a physicist (they tend to fall asleep immediately with this task).
 
  • Like
Likes dextercioby
  • #59
DrDu said:
If you are complaining about physicists not doing rigorous math, what should I complain about, having to work every day with biochemists? It's your job as a mathematician to bring physical ideas into Bourbaki style, not that of a physicist (they tend to fall asleep immediately with this task).
Just to point out that not all mathematicians think the same way.
https://www.math.fsu.edu/~wxm/Arnol...thematics that is,ministers) and of the users.
 
  • Like
  • Informative
Likes apostolosdt, DeBangis21, Frabjous and 1 other person
  • #60
AndreasC said:
Coming up with detailed mathematical models is a waste of time when your model is completely off base.
A model is just a way to formalize your ideas.
Creating a model allows you to call things theorems, based on the axioms (i.e. assumptions).
There is no loss in generality by creating a model, unless you assume the existence of some type of object / property which hasn't been made rigorous in the language of mathematics, but I assume that, with things like stochastic calculus, there won't be many things like this.

Ultimately the main limitation with a financial model will be on the influence of real time events, which are in general hard to predict (something like an Elon Musk tweet can influence, and has influenced before, Tesla's stock prices).
AndreasC said:
Physics is not quite the same, because there quantitative mathematical models are very applicable. However, the full weight of MODERN mathematical formalism would only weigh down most areas of physics.
I get that for the development of physics creating a model with rapidly changing beliefs / assumptions is not a good thing. What I argue is that for topics in physics which have been explored really well, and that do in fact have faithful models, introducing one of those models can be a good thing, especially when the people you're exposing the material to study math.
AndreasC said:
i.e. mathematics should steer a bit closer to physics as well. After all, that is how it developed historically, and still does to a certain extent. The modern standards of "rigor" would never have happened if there weren't the older, far less rigorous "classical" mathematics, and those mathematics would never have developed had there not arisen many questions from physics, questions which in turn could never have arisen had physics waited for the final link in the chain!
Mathematics has become independent of Physics. The topics that are closest to my heart, i.e. abstract algebra, Galois theory, representation theory, and to some extent category theory, wouldn't really gain anything by steering toward Physics specifically (there are a few things that can be motivated by Physics, like studying the Heisenberg groups, but the theory itself doesn't really evolve by steering toward Physics). In fact Physics isn't special in this respect; a lot of branches of mathematics are influenced by other sciences, like game theory by Biology and PDEs by Chemistry.

The modern mathematician is probably less bound by rigor than you might believe. Somewhat counterintuitively, for new topics you often look at a property some things satisfy and then write definitions based on the constraints you needed to arrive at it, which ultimately become theorems. That is to say, the theorem in new areas often predates the definition. Or if you'd like, the definitions are motivated by the theorems, and often evolve over time.

When the theory becomes sufficiently well studied, the first textbooks are written about it.
There are also journals of experimental mathematics, which publish patterns and possible corresponding conjectures to "fit" those patterns.
 
  • #61
martinbn said:
Maybe people need to say what they mean by rigorous first. Here is an example: you have an equation, someone proves a uniqueness theorem, but no one has proven existence yet. Do you think that the uniqueness theorem is not rigorous?
Uniqueness means if it exists, it is unique. That's rigorous even if existence is not known. But an approximation has to approximate something that exists.
 
  • #62
martinbn said:
Maybe people need to say what they mean by rigorous first. Here is an example: you have an equation, someone proves a uniqueness theorem, but no one has proven existence yet. Do you think that the uniqueness theorem is not rigorous?
Uniqueness and existence are unrelated properties, though they are often paired up in differential equations.
You may expect something to be a solution of a differential equation, so first you may ask "does it make sense, do there exist solutions to this?" and secondly you may ask "ok, let's suppose that there are solutions, are they unique, so that I can use the unique solution to estimate future behaviour?".

Oftentimes, asking if there's uniqueness is basically asking if we have specified enough initial / boundary conditions.
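A standard textbook illustration of that last point (just an example, not something from this thread): take
$$y'' = -y, \qquad y(t) = a\cos t + b\sin t .$$
Existence is not the issue here (Picard–Lindelöf applies), but specifying only ##y(0)=y_0## still leaves a one-parameter family of solutions; adding the second condition ##y'(0)=v_0## forces ##a=y_0##, ##b=v_0## and the solution becomes unique.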
 
  • #63
AndreasC said:
Uniqueness means if it exists, it is unique. That's rigorous even if existence is not known. But an approximation has to approximate something that exists.
You can make approximations to problems without solutions, but yeah.
 
  • #64
TurtleKrampus said:
A model is just a way to formalize your ideas.
And some ideas gain nothing by being formalized, or are way too hard to formalize. When you only look at things that you can formalize, then you inevitably lose the big picture. And that is a huge part of modern economics: there is a ton of literature of economists coming up with pristine mathematical analyses of silly idealized scenarios that have nothing to do with the real world.

TurtleKrampus said:
What I argue is that for topics in physics which have been explored really well, and that do in fact have faithful models, introducing one of those models can be a good thing, especially when the people you're exposing the material to study math.
On the one hand, yeah. On the other hand, in the particular example you brought up, there is no discernible reason why you should bury the simple intuitive idea of different observers counting time the same way under a mountain of unnecessary formalism. Why would you demand that people know about Lie groups and projections and whatnot before you explain this simple fact? You need these things AFTER you get this point through, if you want to generalize.
TurtleKrampus said:
The topics that are closest to my heart, i.e. abstract algebra, Galois theory, representation theory, and to some extent category theory, wouldn't really gain anything by steering toward Physics specifically (there are a few things that can be motivated by Physics, like studying the Heisenberg groups, but the theory itself doesn't really evolve by steering toward Physics). In fact Physics isn't special in this respect; a lot of branches of mathematics are influenced by other sciences, like game theory by Biology and PDEs by Chemistry.
About the other sciences, yes, I agree, and to rephrase, I meant mathematics should steer closer to the physical sciences in general, because that is the most fertile ground for inspiration, intuition, and new problems. Incidentally almost all of the topics you mentioned are VERY closely intertwined with physics, and even today there is cross-pollination. For instance, representation theory is incredibly central to quantum mechanics (furthermore, a lot of its techniques were developed motivated specifically by applications in physics), and there are even very advanced, very abstract and pretty recent concepts (such as quantum groups) that are directly inspired by problems of physics. After all, 2 of the most famous open problems in mathematics today are explicitly related to physics.
TurtleKrampus said:
That is to say, the theorem in new areas often predates the definition
Exactly. So why would you want to insist that getting into physics starts the other way round?
 
  • #65
TurtleKrampus said:
You can make approximations to problems without solutions, but yeah.
With the standard of rigor of physics, yeah. But how are you supposed to do that in "rigorous", "pure" mathematics? To make an approximation rigorous, you need a rigorous error estimate. To get a rigorous error estimate, it has to err against something. If that something does not exist, then it's very unlikely you would be able to come up with an error estimate. Maybe one that is conditional to the solution's existence. But then it would still be suspect.
 
  • #66
AndreasC said:
With the standard of rigor of physics, yeah. But how are you supposed to do that in "rigorous", "pure" mathematics? To make an approximation rigorous, you need a rigorous error estimate. To get a rigorous error estimate, it has to err against something. If that something does not exist, then it's very unlikely you would be able to come up with an error estimate. Maybe one that is conditional to the solution's existence. But then it would still be suspect.
I'm on my phone, but for example ##x^2 - 2## has no rational roots, yet we can approximate a root with rationals. We just need to talk about something similar to an error estimate on the difference of the squares, ##|x^2 - y^2|##, and substitute ##y^2## with 2.

Usually (? Not sure, not my area of interest, but I assume so) this type of problem is solved using a type of completion. In this case a Pythagorean completion works to find all square roots.

Either way, I can't really talk, as I said I dunno. Maybe it does really only make sense if there exists some extension where you have existence.
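To make that concrete, here is a minimal sketch of the idea (plain bisection, nothing beyond the standard library): every iterate is rational, and the "error estimate" is phrased entirely in terms of ##|x^2 - 2|##, so nothing about the existence of ##\sqrt 2## in ##\mathbb Q## is assumed.

```python
from fractions import Fraction

def bisect_sqrt2(n_steps):
    """Shrink a rational interval [lo, hi] with lo^2 < 2 < hi^2 by bisection."""
    lo, hi = Fraction(1), Fraction(2)      # 1^2 - 2 < 0 < 2^2 - 2
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        if mid * mid - 2 < 0:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = bisect_sqrt2(20)
print(float(lo), float(hi), float(2 - lo * lo))  # the defect |lo^2 - 2| keeps shrinking
# Certificate: lo^2 < 2 < hi^2 with hi - lo = 2**-20, all within the rationals.
```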
 
  • #67
TurtleKrampus said:
I'm on my phone, but for example ##x^2 - 2## has no rational roots, yet we can approximate a root with rationals. We just need to talk about something similar to an error estimate on the difference of the squares, ##|x^2 - y^2|##, and substitute ##y^2## with 2.

Usually (? Not sure, not my area of interest, but I assume so) this type of problem is solved using a type of completion. In this case a Pythagorean completion works to find all square roots.

Either way, I can't really talk, as I said I dunno. Maybe it does really only make sense if there exists some extension where you have existence.
Yeah you are right there. But we were talking more about Navier-Stokes etc. Generally in physics there are various physically motivated simplifications and approximations, and nobody bothers to check what the error is, or whether there even exists a solution, as in Navier-Stokes. Ideally, we should have an error estimate, and we should know there is a solution. But sometimes that is not available and we have to do physics anyways. For instance existence has not been established for most realistic QFTs (Yang Mills being a famous open problem in mathematics) but physicists proceed with perturbation theory and other approximations regardless. Despite the fact that we don't know what the Hilbert space of these theories is, we write down state vectors and perform operations with them. It's not ideal but it's what we've got so far.
 
  • #68
AndreasC said:
And some ideas gain nothing by being formalized, or are way too hard to formalize. When you only look at things that you can formalize, then you inevitably lose the big picture. And that is a huge part of modern economics: there is a ton of literature of economists coming up with pristine mathematical analyses of silly idealized scenarios that have nothing to do with the real world.
I don't know any financial economists, so I really can't speak to that. Perhaps you should talk to one yourself if you believe that they're hindered, or revolutionize the field yourself.
AndreasC said:
On the one hand, yeah. On the other hand, in the particular example you brought up, there is no discernible reason why you should bury the simple intuitive idea of different observers counting time the same way under a mountain of unnecessary formalism. Why would you demand that people know about Lie groups and projections and whatnot before you explain this simple fact? You need these things AFTER you get this point through, if you want to generalize.
I mean, I don't think anyone cares about the physical / philosophical implications of time invariance (in my class, that is); we will only use it for calculations, at which point we'll have to write what it means in some sense that's compatible with what we'll be trying to do (like converting from one reference frame to another).
Maybe you don't see this, because you already have an understanding of how it works, but a statement like ##t=t'## can be ambiguous by itself. What does it mean mathematically, in the sense of how do I use it when doing algebra? (rhetorical question)
But answering that question already gives you a more rigorous way to write what you mean by ##t = t'##.
AndreasC said:
About the other sciences, yes, I agree, and to rephrase, I meant mathematics should steer closer to the physical sciences in general, because that is the most fertile ground for inspiration, intuition, and new problems. Incidentally almost all of the topics you mentioned are VERY closely intertwined with physics, and even today there is cross-pollination. For instance, representation theory is incredibly central to quantum mechanics (furthermore, a lot of its techniques were developed motivated specifically by applications in physics), and there are even very advanced, very abstract and pretty recent concepts (such as quantum groups) that are directly inspired by problems of physics. After all, 2 of the most famous open problems in mathematics today are explicitly related to physics.
I think that you're mixing a few things together; there are very few areas in mathematics where the sciences give intuition on how to solve problems. What I meant by motivation is that most mathematicians don't want to study some object that has no known interest (also, who'd want to fund that?).
A group appearing in something like Chemistry gives mathematicians motivation to study that specific group*, and sometimes these groups end up having interesting properties (and would overall just be easier to fund).
(* Contrary to popular (?) belief, most mathematicians don't want to do something completely without uses, though by uses I'm also counting uses within mathematics itself; the motivation to study something doesn't come only from science.)
Though to be honest I'm kinda interested in the Physics-inspired techniques you're referring to.
 
  • #69
AndreasC said:
Yeah you are right there. But we were talking more about Navier-Stokes etc. Generally in physics there are various physically motivated simplifications and approximations, and nobody bothers to check what the error is, or whether there even exists a solution, as in Navier-Stokes. Ideally, we should have an error estimate, and we should know there is a solution. But sometimes that is not available and we have to do physics anyways. For instance existence has not been established for most realistic QFTs (Yang Mills being a famous open problem in mathematics) but physicists proceed with perturbation theory and other approximations regardless. Despite the fact that we don't know what the Hilbert space of these theories is, we write down state vectors and perform operations with them. It's not ideal but it's what we've got so far.
I am completely out of my water here lmao
 
  • #70
TurtleKrampus said:
I mean, I don't think anyone cares about the physical / philosophical implications of time invariance (in my class, that is); we will only use it for calculations, at which point we'll have to write what it means in some sense that's compatible with what we'll be trying to do (like converting from one reference frame to another).
Well then this notation is definitely fine! In classical mechanics the position of a point particle in space is given by coordinates x,y,z, which are continuous functions of time t. In other words, its path through space is simply a curve. In another reference system, these are x',y',z', parameterized by time t'. But what you learn here is that you can use the same parameter for both. If you want to convert between reference frames in classical mechanics, you won't generally need all that stuff you mentioned, you will just need to make coordinate changes.

This gets more difficult when you get to special and general relativity. In that case you have to work with spacetime, in other words you have to add the time t as a coordinate, instead of just using it as a parameter, and it changes from frame to frame. At that point you are dealing with curves in non-Euclidean spaces, and to change reference frames you need differential geometry etc.

TurtleKrampus said:
Though to be honest I'm kinda interested in the Physics-inspired techniques you're referring to.
What I meant was not so much that problems were solved using physics intuitions, but rather that people who worked on the problems were often physicists or mathematicians very close to physics, who came up with techniques tailored towards physics applications, which later also found other uses. For representation theory, Wigner was a physicist who worked a lot on it for example, von Neumann was another influential one, and then you have people like Cartan who were very motivated by physics problems.

However, there is a bit of the other thing (physics-inspired techniques used to solve math problems) as well. Ed Witten has done some of that (and he has a Fields Medal for it). There are also various things in mathematics whose names betray their physics origins (entropy, quantum groups, "sources" in differential equations, etc).

Since you are learning classical mechanics and want a mathematical treatment, you would like V.I. Arnold's book. By the way, he was one of the mathematicians arguing for mathematics to lean a bit closer to physics. But of course his language in the book is mathematical, and very rigorous throughout.
 
  • #71
TurtleKrampus said:
I am completely out of my water here lmao
Well, to put it more simply, quantum field theory uses a bunch of operators on a Hilbert space to describe "fields", such as the electromagnetic field etc, which give rise to "particles". In essence all these are partial differential equations, whose solutions are promoted to linear operators, via "quantization". That's all fine when the underlying PDE is simple, as is the case with the Klein-Gordon or Dirac equations. These can generally be solved, and the appropriate Hilbert space to study their "quantized" version is called a Fock space. We generally know how their solutions (called free fields) act on the Fock space.

Unfortunately, when you want to have fields interacting with each other, you end up with PDEs involving multiple different functions coupled in convoluted ways. Just Google "standard model Lagrangian". Almost every different letter you see is a different field. Now apply the Euler-Lagrange equations to that and you get an absolutely insane system of horribly coupled PDEs. Of course nobody deals with the whole thing at once, we just look at parts of it. One such part are the famous Yang-Mills equations: https://en.m.wikipedia.org/wiki/Yang–Mills_equations

The existence problem for the YM equations is a Millennium Prize problem. Actually, it's not just that one, none of the other realistic interacting QFTs have been solved in 4 spacetime dimensions. The reason the other ones aren't a Millennium Prize problem is probably mostly that many people don't think they even really have a solution, for various reasons, never mind they are still used.

But to quantize, say, the Klein-Gordon equation, the best way is to just solve the ("classical") PDE first, and then to promote the solution to an operator in a specific sense. So how are we supposed to quantize the interacting ones when we can't solve them? Physicists have some workarounds. Probably the most commonly used one is a type of functional integral called a path integral. This is another tool that came out of physics (originally to describe Brownian motion if I'm not mistaken) that has been applied to other areas of math. The idea is, you somehow integrate on a space of functions. Of course, to integrate you need a measure. For some specific functional integrals, this measure is known. For the path integrals of QFT, there is no rigorous formalization of the measure as of yet. Nevertheless, it is used.

And then on top of all of that, you do perturbation theory. What's that? Well, we have a Hilbert space we don't really know, and operators representing fields that we have quantized, never mind the fact that we don't rigorously know how they act on that Hilbert space (or even if we can rigorously consider these actions, because they relate to PDEs we don't know the solutions to), and we want to approximate the solutions to various problems regarding their action on these spaces, using power series. Great.

Perhaps you will find it amusing to learn that these power series have divergent terms. But maybe you already heard that, and heard that you can just do renormalization, etc. Indeed, renormalization generally fixes the problem, and Epstein-Glaser theory shows how you do that rigorously, starting from first principles, in a manner that is not ad hoc. Only physicists usually don't do that and follow a much less rigorous counterterm procedure, which is easier to work with. But at least we know we can cure the divergences. Trouble is, even AFTER you cure these divergences in the terms of the series, the series STILL diverges if you include every term, as in, it has ZERO radius of convergence. The physicist's answer to this? "Well, I'll just keep the first few terms of the series, which don't diverge." Well, in some cases people use some other summation schemes, like Borel summation etc. But sometimes that doesn't work.
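To see the "keep the first few terms" trick in a toy setting (this is only an illustrative example, not the QFT series itself): the Euler series ##\sum_n (-1)^n n!\,x^n## has zero radius of convergence, yet its partial sums approximate its Borel sum ##\int_0^\infty e^{-t}/(1+xt)\,dt## very well up to an optimal truncation order of roughly ##1/x##, after which they blow up.

```python
import math
from scipy.integrate import quad

x = 0.1
# "True" value: the Borel sum of the series, written as a convergent integral.
true_value, _ = quad(lambda t: math.exp(-t) / (1 + x * t), 0, math.inf)

partial = 0.0
for n in range(31):
    partial += (-1) ** n * math.factorial(n) * x ** n
    print(n, partial, abs(partial - true_value))
# The error shrinks until n ~ 1/x = 10, then the factorial growth takes over
# and the partial sums diverge -- so one truncates near the smallest term.
```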

So, to summarize, we start from PDEs that we don't know how to solve or if they even have solutions, we quantize them via integration measures that don't exist, and then we approximate the solutions to various problems using series that don't converge, by just ignoring the rest of the series, at least when we can get each term to converge. And it's not even a cutting edge theory, it's been around for decades. Not just that, but it is probably THE most successful physical theory ever, that has yielded the most precise predictions. This is how physicists learn to be less formal with math.

To learn about QFT, you may be interested in these books, written mostly for mathematicians, by mathematicians:

https://www.amazon.com/dp/0821847058/?tag=pfamazon01-20
https://www.amazon.com/dp/1316510271/?tag=pfamazon01-20

The second one is essentially a more digested version of the first one, including only the things that for the most part are known, but being significantly bigger. It's also interesting to see Talagrand's comments throughout the text indicating his struggle to understand why various things work. Really that's the main strength of the book imo, the fact that when he covers something that is very suspicious but nevertheless works, he says it explicitly. However I'm not sure how much you would get out of these books without further background into physics. Maybe you could try reading the Arnold book I mentioned, then maybe something like Quantum Theory for Mathematicians by Brian Hall, and then the Talagrand book (or the Folland book if you prefer). You will also see how much of QM and QFT really is just representation theory, and see why it was a huge motivator for its development.
 
  • #72
Why are physicists so informal with mathematics? Perhaps they fear excess rigor may lead to rigor mortis.
 
  • Like
Likes CalcNerd, vanhees71, jasonRF and 3 others
  • #73
Some quotes from prominent physicists:

Weyl: Space is a field of linear operators.
Heisenberg: Nonsense, Space is blue and birds fly through it.

Asher Peres: Quantum phenomena occur in a laboratory, not a Hilbert space.

I forgot who said this but it was probably an experimentalist:
" You can keep your hilbert space, I need the answer in volts"

Physicists tend to be relaxed when applying mathematics. They don't often focus on Peano axioms, Dedekind cuts and other axioms of mathematics.

By the way, I do not hold 100% with the iconoclastic viewpoints above. Einstein himself often lamented he wished he knew more mathematics. However I realize that physicists today are busy with publishing duties and competition, and they need to get on with doing physics.
 
  • Haha
Likes vanhees71
  • #74
mpresic3 said:
Some quotes from prominent physicists:

Weyl: Space is a field of linear operators.
Heisenberg: Nonsense, Space is blue and birds fly through it.

Asher Peres: Quantum phenomena occur in a laboratory, not a Hilbert space.

I forgot who said this but it was probably an experimentalist:
" You can keep your hilbert space, I need the answer in volts"

Physicists tend to be relaxed when applying mathematics. They don't often focus on Peano axioms, Dedekind cuts and other axioms of mathematics.

By the way, I do not hold 100% with the iconoclastic viewpoints above. Einstein himself often lamented he wished he knew more mathematics. However I realize that physicists today are busy with publishing duties and competition, and they need to get on with doing physics.
Some very different ways of how people think, or how they orient themselves.

Try this idea between Mathematics and Physics.
Mathematics - build and test the machine
Physics - use the machine for what it is made for.
I am not saying that those comments are perfect. Just showing a way of thought.
 
  • Like
Likes vanhees71
  • #75
mpresic3 said:
Some quotes from prominent physicists:

Weyl: Space is a field of linear operators.
Heisenberg: Nonsense, Space is blue and birds fly through it.

Asher Peres: Quantum phenomena occur in a laboratory, not a Hilbert space.

I forgot who said this but it was probably an experimentalist:
" You can keep your hilbert space, I need the answer in volts"

Physicists tend to be relaxed when applying mathematics. They don't often focus on Peano axioms, Dedekind cuts and other axioms of mathematics.

By the way, I do not hold 100% with the iconoclastic viewpoints above. Einstein himself often lamented he wished he knew more mathematics. However I realize that physicists today are busy with publishing duties and competition, and they need to get on with doing physics.
That quote isn't by Weyl, it's by Felix Bloch.
Not sure where that misconception came from, but mathematicians don't often focus on the Peano axioms or Dedekind cuts when doing math (nor are Dedekind cuts axioms, they're a construction).
 
  • #76
symbolipoint said:
Some very different ways of how people think, or how they orient themselves.

Try this idea between Mathematics and Physics.
Mathematics - build and test the machine
Physics - use the machine for what it is made for.
I am not saying that those comments are perfect. Just showing a way of thought.
I don't like that analogy at all, sounds too reductive of mathematics to me.
 
  • #77
Can anyone point to an example where a lack of rigor led to a wrong physical result by an otherwise competent physicist?
 
  • #78
bob012345 said:
Can anyone point to an example where a lack of rigor led to a wrong physical result by an otherwise competent physicist?
The question is to what extent physics would have developed all the requisite mathematics without the rigorous mathematical research. A good example would be Noether's Theorem. Could physicists have figured out the key criteria without Emmy Noether having worked it out rigorously - or Sophus Lie having developed the theory of Lie groups in the first place?

It's one thing to demonstrate a free and easy version of a mathematical theorem, but another thing to develop the theory non-rigorously in the first place.

You could say the same about group theory, linear algebra, functional analysis, complex analysis, topology and differential geometry in general. Yes, you can use the mathematics non-rigorously, but would it have been developed non-rigorously in the first place?
 
  • Like
Likes TurtleKrampus and martinbn
  • #79
PeroK said:
The question is to what extent physics would have developed all the requisite mathematics without the rigorous mathematical research. A good example would be Noether's Theorem. Could physicists have figured out the key criteria without Emmy Noether having worked it out rigorously - or Sophus Lie having developed the theory of Lie groups in the first place?

It's one thing to demonstrate a free and easy version of a mathematical theorem, but another thing to develop the theory non-rigorously in the first place.

You could say the same about group theory, linear algebra, functional analysis, complex analysis, topology and differential geometry in general. Yes, you can use the mathematics non-rigorously, but would it have been developed non-rigorously in the first place?
What does Noether's theorem say about parity if anything?
 
  • #80
Parity is not a continuous symmetry.
 
  • Like
Likes vanhees71 and bob012345
  • #81
TurtleKrampus said:
I don't like that analogy at all, sounds too reductive of mathematics to me.
What I said was this, which you reacted to:
Some very different ways of how people think, or how they orient themselves.

Try this idea between Mathematics and Physics.
Mathematics - build and test the machine
Physics - use the machine for what it is made for.
I am not saying that those comments are perfect. Just showing a way of thought.

That was the best I could think of at the time. As I plainly said, the comment is not perfect. If you have thoughts along the lines of the originally posted topic, about Mathematics being handled differently by Physicists and Mathematicians, and you want to share them with readers here, then say those thoughts.
 
  • #82
Why are mathematicians so formal with physics?
 
  • Like
Likes haushofer and symbolipoint
  • #83
Frabjous said:
Why are mathematicians so formal with physics?
Are they?
 
  • #84
PeroK said:
It's one thing to demonstrate a free and easy version of a mathematical theorem, but another thing to develop the theory non-rigorously in the first place.

You could say the same about group theory, linear algebra, functional analysis, complex analysis, topology and differential geometry in general. Yes, you can use the mathematics non-rigorously, but would it have been developed non-rigorously in the first place?
Almost all mathematics (until a certain point at least) was developed non-rigorously first. I was recently reading Riemann. There is NOTHING rigorous in modern terms about his writings. In many respects they were LESS rigorous than a lot of modern theoretical physics. He wildly asserts without proper proof all over the place, and he often does not give proper definitions of things (which actually makes it confusing sometimes). He also missed some counterexamples to his claims as was demonstrated by others later. In fact mathematics from that era has a long history of mathematicians successively finding errors with the previous theories, improving them, and then erring themselves, only to be corrected later by others.

Even today, to find new things mathematicians don't tend to start from the "rigorous" picture. They use intuition and imprecise concepts, and restore the rigor only after they have found their result. Intuition plays a significant role. Read Terence Tao's perspective on this: https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/

Another place where I noticed this was Richard Borcherds' lectures that he uploads on YouTube. I don't remember what it was exactly but he described a theorem as saying "you can often find x". "Often" of course is an extremely imprecise word. But it is useful in that context because if you are too rigorous about things you tend to forget what you are really doing. In the course of devising a new proof or theorem, rigor is often first ignored, and after the skeleton of the new construct has been heuristically worked out, mathematicians polish it and introduce back the rigor. The end product is presented rigorously, which makes the process seem like it never happened, which leads to this misconception.
 
  • Like
Likes vanhees71
  • #85
AndreasC said:
Almost all mathematics (until a certain point at least) was developed non-rigorously first. I was recently reading Riemann. There is NOTHING rigorous in modern terms about his writings. In many respects they were LESS rigorous than a lot of modern theoretical physics. He wildly asserts without proper proof all over the place, and he often does not give proper definitions of things (which actually makes it confusing sometimes). He also missed some counterexamples to his claims as was demonstrated by others later. In fact mathematics from that era has a long history of mathematicians successively finding errors with the previous theories, improving them, and then erring themselves, only to be corrected later by others.

Even today, to find new things mathematicians don't tend to start from the "rigorous" picture. They use intuition and imprecise concepts, and restore the rigor only after they have found their result. Intuition plays a significant role. Read Terence Tao's perspective on this: https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/

Another place where I noticed this was Richard Borcherds' lectures that he uploads on YouTube. I don't remember what it was exactly but he described a theorem as saying "you can often find x". "Often" of course is an extremely imprecise word. But it is useful in that context because if you are too rigorous about things you tend to forget what you are really doing. In the course of devising a new proof or theorem, rigor is often first ignored, and after the skeleton of the new construct has been heuristically worked out, mathematicians polish it and introduce back the rigor. The end product is presented rigorously, which makes the process seem like it never happened, which leads to this misconception.
I don't agree with most of this. All of the examples seem perfectly rigorous to me.
 
  • #86
AndreasC said:
Even today, to find new things mathematicians don't tend to start from the "rigorous" picture. They use intuition and imprecise concepts, and restore the rigor only after they have found their result. Intuition plays a significant role. Read Terence Tao's perspective on this: https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/
Having read that link, I believe you are seriously misrepresenting Tao's position.
 
  • #87
AndreasC said:
Almost all mathematics (until a certain point at least) was developed non-rigorously first. I was recently reading Riemann. There is NOTHING rigorous in modern terms about his writings. In many respects they were LESS rigorous than a lot of modern theoretical physics. He wildly asserts without proper proof all over the place, and he often does not give proper definitions of things (which actually makes it confusing sometimes). He also missed some counterexamples to his claims as was demonstrated by others later. In fact mathematics from that era has a long history of mathematicians successively finding errors with the previous theories, improving them, and then erring themselves, only to be corrected later by others.
This is precisely my point. Mathematical rigour was (re-)introduced in the 19th Century in order to establish which intuitive results were correct, which were false, and the precise hypotheses required for a result to hold. Riemann largely pre-dates this. Mathematics had progressed to the point where no one could figure out what was true and what was not. The rigorous method largely post-dates Riemann. Modern mathematics (from 1860) couldn't exist without it. Tao actually emphasises its importance in the link you provided (with my underlining):

The “post-rigorous” stage, in which one has grown comfortable with all the rigorous foundations of one’s chosen field, and is now ready to revisit and refine one’s pre-rigorous intuition on the subject, but this time with the intuition solidly buttressed by rigorous theory. (For instance, in this stage one would be able to quickly and accurately perform computations in vector calculus by using analogies with scalar calculus, or informal and semi-rigorous use of infinitesimals, big-O notation, and so forth, and be able to convert all such calculations into a rigorous argument whenever required.)
 
  • Like
Likes vanhees71
  • #88
As an undergraduate, we had a TA that was ungodly smart. The joke that went around was that if one got lost in a proof, just write down “trivially“. The TA would get to it, agree that it was trivial, and give one credit.
In reality, if I wrote down this proof, it would be non-rigorous. If the TA wrote down the identical proof it would be rigorous. Rigor is a construct defined by the audience. The OP is in Tao’s phase two. The physics is not addressed to that audience.
 
  • Like
Likes vanhees71 and PeroK
  • #89
PeroK said:
The rigorous method largely post-dates Riemann. Modern mathematics (from 1860) couldn't exist without it.
Yes, obviously rigor played a huge part in mathematics reaching where it did, but usually you first discover something new non-rigorously, and then you reintroduce rigor and expand on things. One famous example is the Dirac delta function. Dirac used it completely unrigorously (and in fact physicists still do). He didn't start from a rigorous concept and simplify it. He started from this tool, and then mathematicians developed the theory of distributions, which is rigorous and far more powerful than just this one thing.
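For what it's worth, the standard way this particular example eventually gets made rigorous is short enough to state here: for a test function ##f## and the mollifiers ##\delta_\varepsilon(x) = \frac{1}{\varepsilon\sqrt{2\pi}}\,e^{-x^2/2\varepsilon^2}##,
$$\int_{-\infty}^{\infty} f(x)\,\delta_\varepsilon(x)\,dx \;\longrightarrow\; f(0) \quad \text{as } \varepsilon \to 0,$$
so the "delta function" is really the distribution ##f \mapsto f(0)##, the limit of the ##\delta_\varepsilon## in the sense of distributions rather than a function with pointwise values.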

Bottom line is, I agree with you that modern mathematics couldn't have reached where it has without rigor. However, I disagree that the relationship is as simple as "rigorous result first, then simplification". I think there is a lot more back and forth. And really most of the math physics uses was developed unrigorously, because it is a lot more basic and manageable than modern mathematics, so rigorous foundations are not necessary to inform your intuition. But moving forward rigor takes part in the process as described by Tao, refining tools and intuition and building hugely complex structures.

By the way, even if you read Poincare (long after Riemann), it's not really rigorous by modern standards for the most part. It's really only after Bourbaki perhaps (though some people were doing it earlier) that things got pretty hardcore, rigor wise. Which has its place, I don't discount that. It's just that the relationship between the two is a little bit complex, and this is why physics can survive with these standards.
 
Last edited:
  • Like
Likes vanhees71
  • #90
AndreasC said:
Yes, obviously rigor played a huge part in mathematics reaching where it did, but usually you first discover something new non-rigorously, and then you reintroduce rigor and expand on things.
I don't believe this is the way mathematicians have worked in the past 150 years. The onus is on you to prove that this is true. In particular, this statement is entirely false:
"but usually you first discover something new non-rigorously, and then you reintroduce rigor"
I think it's obvious from that that you have never done any mathematical research.

AndreasC said:
One famous example is the Dirac delta function. Dirac used it completely unrigorously (and in fact physicists still do).
That is one example, but is far from the norm.
AndreasC said:
And really most of the math physics uses was developed unrigorously,
I'd like you to prove this claim. I don't believe this is true.
AndreasC said:
, so rigorous foundations are not necessary to inform your intuition.
They are according to Terence Tao. See the link above. Tao directly contradicts what you say.
AndreasC said:
But moving forward rigor takes part in the process described by Tao, refining tools and building hugely complex structures.
IMO, you've misunderstood what Tao is saying.
 
  • #91
PeroK said:
They are according to Terence Tao. See the link above. Tao directly contradicts what you say.
But neither my statement nor, I believe, Tao's is universal here... Obviously some things are intuitive even without having to go through this process. It's why physicists can get away with not knowing the rigorous foundations and still extract good results.

PeroK said:
I don't believe this is the way mathematicians have worked in the past 150 years. The onus is on you to prove that this is true. In particular, this statement is entirely false.
I have a few arguments to that effect. First, the cutoff is not as sharp or as old as you claim. As I said before, even as late as Poincare things are not that rigorous. Second, look at the importance conjectures have. A conjecture is not proven. However you have things such as the Langlands program and many others which are entirely about exploring the ramifications of conjectures, or trying to prove them. But how were the conjectures formulated in the first place? They did not follow from rigorous foundations directly, or they would be proved. But they are not just random assertions either, they are somehow special, and seem "likely" to be true. In a sense, you could say that the whole process until a conjecture is proved is the moment of intuitive discovery, before it is polished and made rigorous, suspended in time for years.

Now I don't have to work much to prove that most mathematics used in physics was either not developed rigorously or could have developed some other way. Most of it developed before the 20th century, so that's that... At the boundaries of course, the situation changes and you are right.
 
  • Skeptical
Likes PeroK
  • #92
AndreasC said:
Second, look at the importance conjectures have. A conjecture is not proven.
An unproven conjecture is not non-rigorous mathematics. Those are two very different things.
 
  • #93
AndreasC said:
But how were the conjectures formulated in the first place?
Through rigorous mathematics.
AndreasC said:
They did not follow from rigorous foundations directly
Yes they did.
AndreasC said:
But they are not just random assertions either, they are somehow special, and seem "likely" to be true. In a sense, you could say that the whole process until a conjecture is proved is the moment of intuitive discovery, before it is polished and made rigorous, suspended in time for years.
This is a fantasy.
 
  • #94
AndreasC said:
Well, to put it more simply, quantum field theory uses a bunch of operators on a Hilbert space to describe "fields", such as the electromagnetic field etc, which give rise to "particles". […]
I should've responded to this sooner, but this message is too large for me to possibly respond to in its entirety (mostly because when I've tried I get bored less than midway through). Anyway, thanks for your messages, I've read them, I just can't really respond to them, if that makes sense. I'm not really interested in self-studying QFT, at least at the moment. I'm using most of my "study time" for either my URS or just regular classes. Once again, I really appreciate the answers.
 
  • #95
symbolipoint said:
What I said was this, which you reacted to:


That was the best I could think of at the time. As I plainly said, the comment is not perfect. If you have thoughts along the lines of the originally posted topic, about Mathematics being handled differently by Physicists and Mathematicians, and you want to share them with readers here, then say those thoughts.
Yeah, I don't have / didn't want to think of a better analogy, sorry if this sounds harsh. I don't care how physicists do math; not even most math done by mathematicians is thought of rigorously (though there are plenty of cases of just bashing, where you usually start with rigor so you don't waste time). You have intuition, and see where that leads you.

Again, my problem was with the exposition; please disregard the original post, I guess.
Just the other day I had a really awful experience with my Physics professor while he was introducing preliminary knowledge for Lagrangian mechanics. The basic gist is that since the professor basically never mentions the domain / codomain of functions (implicitly or explicitly), his writing f(X), where X is a function of a real variable t, as opposed to something like ##(f \circ X)##, led to a lot of unnecessary ambiguity, which did indeed lead to unnecessary confusion, since at a later point we mention the derivative of f and write f(X(t)) as opposed to f(X)(t).
There are some important details that I also left out, but I don't really want to write out since they'd require a lot more context and I honestly cba to write all of it out.
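If it helps anyone reading along, here is a tiny sketch of the ambiguity being described, with made-up placeholder functions f and X (not the ones from the course):

```python
def X(t):          # a trajectory: a function of the real variable t
    return 3 * t   # placeholder path

def f(x):          # a function defined on positions
    return x ** 2

f_circ_X = lambda t: f(X(t))   # this is (f o X), itself a function of t
print(f_circ_X(2.0))           # f(X(2)) = 36.0

# Writing "f(X)" on the board only makes sense as the composition above.
# "The derivative of f(X)" is then ambiguous until you say whether you mean
# f'(X(t)) or d/dt[f(X(t))] = f'(X(t)) * X'(t)   (chain rule).
```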
 
  • #96
PeroK said:
Through rigorous mathematics.
Let me put it another way. Most mathematicians believe the Riemann hypothesis is correct. But this suspicion is not grounded in some kind of rigorous proof, obviously. It's just a suspicion, supported by non-rigorous arguments. Nevertheless, this suspicion is part of the process of real math even if it is not officially admitted as a theorem. In fact, the conjecture would not have been formulated and widely investigated like that had there not been an intuitive jump from what was rigorously known to something that still to this day isn't. It's also telling that entire fields are founded on conjectures. If mathematicians didn't admit non-rigorous methods at all, these fields would be considered a waste of time, until the conjectures were proven or disproven.

One example that I just thought of is Perelman's proof of the Poincare conjecture. Perelman is of course thought of as the one who proved it, but his proof was more like a proof sketch. It still had some very non-trivial gaps, non-rigorous "jumps" that he made to reach the solution, that were later filled in by other mathematicians (I believe it was Cao and someone else? I don't remember, you can look it up). This to me reveals that the creative jump is usually one that is non-rigorous, and supported by (informed) intuition.

Looking back at your original post, I realize that I actually don't really disagree with it much, and perhaps I didn't express myself well enough either. The main point of disagreement is that I don't think most of the unrigorous math in physics is really a "simplification" of a rigorous result; for instance, the way calculus is used in physics, with hand-wavy infinitesimals etc., is close to how calculus was originally conceived, before it was formalized and made rigorous with epsilons and deltas. But other than that, I don't really disagree that rigor was immensely helpful in building the complex mathematical structures we have today.
 
  • #97
AndreasC said:
Let me put it another way. Most mathematicians believe the Riemann hypothesis is correct. But this suspicion is not grounded in some kind of rigorous proof, obviously. It's just a suspicion, supported by non-rigorous arguments. Nevertheless, this suspicion is part of the process of real math, even if it is not officially admitted as a theorem. In fact, the conjecture would not have been formulated and widely investigated like that had there not been an intuitive jump from what was rigorously known to something that still, to this day, isn't. It's also telling that entire fields are founded on conjectures. If mathematicians didn't admit non-rigorous methods at all, these fields would be considered a waste of time until the conjectures were proven or disproven.

One example that I just thought of is Perelman's proof of the Poincare conjecture. Perelman is of course thought of as the one who proved it, but his proof was more like a proof sketch. It still had some very non-trivial gaps, non-rigorous "jumps" that he made to reach the solution, that were later filled in by other mathematicians (I believe it was Cao and someone else? I don't remember, you can look it up). This to me reveals that the creative jump is usually one that is non-rigorous, and supported by (informed) intuition.

Looking back at your original post, I realize that I actually don't really disagree with it much, and perhaps I didn't express myself well enough either. The main point of disagreement is that I don't think most of the unrigorous math in physics is really a "simplification" of a rigorous result; for instance, the way calculus is used in physics, with hand-wavy infinitesimals etc., is close to how calculus was originally conceived, before it was formalized and made rigorous with epsilons and deltas. But other than that, I don't really disagree that rigor was immensely helpful in building the complex mathematical structures we have today.
The suspicion that the Riemann hypothesis is true is mostly due to how it relates to the structure of the prime numbers; if it were false, that would produce very interesting mathematics. Do not be deceived: there are results in both directions, results that hold if the Riemann hypothesis is true and results that hold if it's false.
Regardless, mathematics isn't only the business of proving unconditional theorems; conditional results (i.e. if A then B) are still considered mathematics, and completely rigorous. (A premise doesn't have to be true for what's stated to be rigorous / valid.)
There are also multiple mathematicians who believe / want to believe that the Riemann hypothesis is false; while they are a minority, they do exist.

Humans are flawed creatures: researchers, when writing papers, won't write out every single mathematical step, and they're bound to make mistakes.
Most of these mistakes are somewhat naive: you recognize a similarity between things and think "since this holds for X, this is bound to be true here, because I can turn this into X", when there's a subtle error in that thinking. Just the other day I thought "surely the set of affine transformations on X forms a ring, since I can embed them in a matrix algebra"; this argument was flawed because the embedding doesn't preserve addition in one entry (mind you, I should've known better, since if this were true it'd be a very well-known result).
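To spell the flaw out (a small sketch using what I assume is the usual homogeneous-coordinate embedding, not the exact argument I had on paper): an affine map ##x \mapsto Ax + b## embeds as
$$(A, b) \;\longmapsto\; \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix},$$
and composition does correspond to matrix multiplication, but the sum of two such matrices has a 2 in its bottom-right entry, so the image isn't closed under matrix addition; the embedding is multiplicative, not additive, and the "it sits inside a matrix algebra, hence it's a ring" shortcut collapses.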
People who do mathematics, especially with very long arguments, don't have time to check the validity of every small proposition they "swear" is true, especially when, at a sufficiently advanced level, you can't simply google "is X true?" and get an answer.

Does this mean that mathematics is unrigorous? I believe not at all, because, first and foremost, the published material will always be in formal language / very easily formalized language; there should never be any ambiguity about what is meant. Secondly, the peer-review process exists for this very reason: people will read what you write, spot those very propositions, and check whether they do indeed hold.

I don't care about how people arrive at the idea; I only care about its exposition. How people think of anything is their business, and I don't care at all.

But like, calculus HAS been formalized. There's no need for 'new' objects, which I honestly believe only give a false sense of understanding and don't help at all with computation.
 
  • #98
TurtleKrampus said:
The suspicion that the Riemann hypothesis is true is mostly due to how it relates to the structure of the prime numbers; if it were false, that would produce very interesting mathematics.
Right, that's my point: as with many other conjectures, there are arguments that are not rigorous but nevertheless sound plausible and indicate research directions, etc.

My general point is that there is a point, during discovery or while devising a proof, where mathematicians have to take a leap that they typically fill in later. They didn't always fill it in, but today the structures they deal with are incredibly complex, and doing so has proven necessary.
TurtleKrampus said:
But like, calculus HAS been formalized. There's no need for 'new' objects, which I honestly believe only give a false sense of understanding and don't help at all with computation.
Right, calculus has been formalized. But now we get to physics and how physicists use math. The thing is, physicists are first of all constrained and informed by physical reality, on top of the mathematical structure. For instance, for a body thrown upwards in a gravitational field, you can tell its trajectory will have a critical point without doing any math. You can also tell it will be continuous, etc. So while in this particular example you could be very formal about it, it's kind of a burden to carry all that baggage. You won't encounter the pathological cases that mathematicians deal with, so you can safely ignore complications to keep things easy.
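(To make that example concrete, under the usual idealization of constant ##g## and no air resistance, which I'm adding here just for illustration: the height is
$$h(t) = h_0 + v_0 t - \tfrac{1}{2} g t^2, \qquad h'(t) = v_0 - g t = 0 \;\Longrightarrow\; t = \frac{v_0}{g},$$
a polynomial, hence smooth; the critical point and the continuity are already guaranteed by the physics before you write a single epsilon.)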

That works most of the time, but not all of the time. Sometimes you need some more serious math to really get to the bottom of something. But that's only a small part of physics research, so it's not what most physicists are taught, although maybe it should be.
 
  • #99
@AndreasC I believe you use the word "rigorous" differently than the way mathematicians use it.
 
  • Like
Likes Vanadium 50 and PeroK
  • #100
martinbn said:
@AndreasC I believe you use the word "rigorous" differently than the way mathematicians use it.
You might be right, but I think maybe the issue is that it's not really used consistently in general...
 
