Infinities in QFT (and physics in general)

  • #51
A. Neumaier said:
Changing the problem is never true understanding.
And there is more. Didn't Einstein change the problem of the aether when he postulated that there is no aether? Didn't Planck change the problem of the UV catastrophe when he postulated discreteness in the form ##E=nh\nu## out of nowhere?
 
  • Like
Likes haushofer and vanhees71
  • #52
Demystifier said:
Wasn't renormalization in QFT a sort of change of problem? Didn't it produce a lot of understanding, e.g. in the computation of g-2? Isn't lattice QCD also a change of problem?
No. It changed the status of computations that lead to manifest and meaningless infinities from ill-defined to perturbatively well-defined, through a better understanding of what in the original foundations was meaningful and what was an artifact of naive manipulation.
Demystifier said:
Isn't lattice QCD also a change of problem?
No; it is a method of approximately solving QCD. To do it well one needs the continuum limit - a single lattice calculation is never trusted. See, e.g., point 4 of the recent post by André W-L in the context of the muon g-2 figuring in the Nature article you cited. Thus lattices are useful for reliable physics predictions only in the context of the continuum theory.
 
  • #53
Demystifier said:
And there is more. Didn't Einstein change the problem of the aether when he postulated that there is no aether? Didn't Planck change the problem of the UV catastrophe when he postulated discreteness in the form ##E=nh\nu## out of nowhere?
No. They replaced a questionable hypothesis by a much stronger one, whereas Nikolic-physics removes from physics all the strong concepts (which need infinity).
 
  • #54
This conversation seems very unproductive to me. Of course, in the end, calculations are made on a computer with finite precision, but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics. Just think about what a great insight special relativity was. A simple gedanken experiment based on intuitive principles leads to the length contraction formula ##L' = L\sqrt{1-\frac{v^2}{c^2}}##. However, if there were no square roots in physics, there would be no sane way to argue in favor of special relativity and we would still not understand the origin of the Lorentz transformations. Continuum mathematics leads to simple and compelling insights about nature. There are no analogous gedanken experiments without continuum mathematics, and thus, while we could write down formulas and put them on the computer, we would be unable to understand their origin. They would just be far less convincing.
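For reference, the standard transverse light-clock gedanken experiment already forces the square root on us: in a frame where the clock moves at speed ##v##, the light pulse traverses the hypotenuse of a right triangle, so
$$(ct)^2 = (ct')^2 + (vt)^2 \quad\Longrightarrow\quad t' = t\sqrt{1-\frac{v^2}{c^2}},$$
and the length contraction formula follows by applying the same reasoning along the direction of motion.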
 
  • Like
Likes A. Neumaier
  • #55
Nullstein said:
but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics.
I certainly agree that continuum mathematics helps a lot in understanding physics. But at the same time, in many cases it also creates serious difficulties: https://arxiv.org/abs/1609.01421
 
  • #56
That's true, but the appearance of difficulties doesn't refute continuum mathematics. We can only make things as simple as possible, but not simpler. If difficulties arise, they must certainly be fixed, but simply dumping continuum mathematics whenever singularities appear is quite excessive and introduces more problems than it solves. And it is often the case that singularities arise exactly because we didn't worry enough about continuum mathematics. For instance, the infinities in QFT are due to using perturbation theory in circumstances where perturbation theory is not applicable and the perturbation series don't converge (see e.g. Dyson's argument). By paying more attention to correctly taking the continuum limit, the singularities can be resolved and we obtain perfectly reasonable theories (so far at least in 1+1 or 2+1 dimensions). And while the problem is still difficult in 3+1 dimensions, the beauty and simplicity of the results in lower dimensions strongly suggest that this is the correct way to go. These considerations even led to the insight that there is new physics in the non-perturbative regime that is invisible to perturbation theory. So the initial difficulties really seem to be an argument in favor of continuum mathematics rather than against it.
 
  • Like
Likes A. Neumaier
  • #57
There is a sense in which it is clearly true that we don't need the continuum. Any observations we make, as long as they are of finite precision (which they always are), can be simulated by a sufficiently complicated deterministic finite automaton.

But conceptually, if the universe really is discrete, there is the puzzle as to why it appears continuous. Assuming that it's not a computer program that was specifically designed to fool us...

So I personally would not find it satisfying to see an argument that things could be discrete. I would want an argument that there is a plausible (non-ad-hoc) discrete model that you could prove gave rise to the illusion of continuous spacetime.
 
  • Like
Likes martinbn and A. Neumaier
  • #58
stevendaryl said:
But conceptually, if the universe really is discrete, there is the puzzle as to why it appears continuous.
Because the minimal distance is too small to be seen by our present experimental techniques. For instance, many theories argue that discreteness might start to appear at the Planck scale.
 
  • #59
Demystifier said:
Because the minimal distance is too small to be seen by our present experimental techniques. For instance, many theories argue that discreteness might start to appear at the Planck scale.

That's a slightly different issue. If the discrete model is the discrete counterpart to 4D spacetime, then at sufficiently large length scales, it might look continuous. But what is the reason that a discrete model would happen to look like the discrete counterpart to 4D spacetime, other than if you are trying to simulate the latter with the former?
 
  • #60
The central issue is: all the plausible, intuitive and beautiful arguments that have been successfully used to derive our best physical theories, such as symmetry principles, and that really make the difference between physics and stamp collecting, rely heavily on continuum mathematics.

Sure, we could discretize our theories, but we would lose all the deep insights we have gained, and it would turn physics into mere stamp collecting. Unless we can come up with even more plausible, intuitive and beautiful arguments for discrete theories, we shouldn't go that route.
 
  • Like
Likes weirdoguy and martinbn
  • #61
gentzen said:
But what I had in mind was more related to a paradox in interpretation of probability than to an attack on using real numbers to describe reality. The paradox is how mathematics forces us to give precise values for probabilities, even for events which cannot be repeated arbitrarily often (not even in principle).
It turns out that I had already tried to clarify what I had in mind shortly after I found lemur's comment ("QM is Nature's way of having to avoid dealing with an infinite number of bits"), which expresses a similar thought. I just reread the main article and realized that lemur's comment was an ingenious defense of it (against arbenboba's criticism).
I want to add that I do appreciate Gisin's later work making the connection to intuitionism. But even though I have had contact with people working on dependent type theory, category theory, and all that higher-order stuff, it never crossed my mind that there might be a connection to the riddle of how to avoid accidental infinite information content.

Demystifier said:
martinbn said:
But you had no objections to the rational numbers, nor to the integers.
Actually I did, but not explicitly. When I was talking about computations with a computer, I took for granted that a finite computer can represent only a finite set of different numbers.
Just like some physicists (Sidney Coleman) guess that what we really don't understand is classicality, Joel David Hamkins guesses that what we really don't understand is finiteness. Timothy Chow wondered: "It still strikes me as difficult to construct a convincing heuristic argument for this point of view." I tried to give an intuitive explanation, highlighting the importance of using a prefix-free code as part of the encoding of a natural number (with infinite strings of 0s and 1s as the starting point). But nobody seems to appreciate simple explanations. So I later wrote a long and convoluted post that very few will ever read (or even understand) in its entirety, with the click-bait title: "Defining a natural number as a finite string of digits is circular". As expected, it was significantly more convincing, as witnessed by reactions like: "I’d always taken it as a given that, if you don’t have a pre-existing understanding of what’s meant by a “finite positive integer,” then you can’t even get started in doing any kind of math without getting trapped in an infinite regress."
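To make the prefix-free point concrete, here is a minimal Python sketch of one standard prefix-free code (Elias gamma); this particular code is my illustrative choice, not necessarily the encoding discussed in those posts:

```python
# Elias gamma: a prefix-free (self-delimiting) code for the natural numbers.
# Prefix-freeness means a decoder scanning an endless stream of 0s and 1s
# always knows where one number ends and the next begins.
def elias_gamma(n: int) -> str:
    assert n >= 1
    b = bin(n)[2:]                  # binary representation, no leading zeros
    return "0" * (len(b) - 1) + b   # len(b)-1 zeros announce the length of b

stream = "".join(elias_gamma(n) for n in (1, 5, 17))
print(stream)  # "1" + "00101" + "000010001", uniquely decodable left to right
```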
 
  • #62
Nullstein said:
The central issue is: all the plausible, intuitive and beautiful arguments that have been successfully used to derive our best physical theories, such as symmetry principles, and that really make the difference between physics and stamp collecting, rely heavily on continuum mathematics.

Sure, we could discretize our theories, but we would lose all the deep insights we have gained, and it would turn physics into mere stamp collecting. Unless we can come up with even more plausible, intuitive and beautiful arguments for discrete theories, we shouldn't go that route.
One can have the continuum arise from the discrete, and symmetries can be emergent.

https://ocw.mit.edu/courses/physics...pring-2014/lecture-notes/MIT8_334S14_Lec1.pdf
"The averaged variables appropriate to these length and time scales are no longer the discrete set of particle degrees of freedom, but slowly varying continuous fields. For example, the velocity field that appears in the Navier–Stokes equations is quite distinct from the velocities of the individual particles in the fluid. Hence the productive method for the study of collective behavior in interacting systems is the Statistical Mechanics of Fields."

https://arxiv.org/abs/1106.4501
"We have seen how a strongly-coupled CFT (or even its discrete progenitors) can robustly lead,“holographically”, to emergent General Relativity and gauge theory in the AdS description."
 
  • Like
Likes Fra and Demystifier
  • #63
stevendaryl said:
That's a slightly different issue. If the discrete model is the discrete counterpart to 4D spacetime, then at sufficiently large length scales, it might look continuous. But what is the reason that a discrete model would happen to look like the discrete counterpart to 4D spacetime, other than if you are trying to simulate the latter with the former?
I don't know, that's an open question.
 
  • #64
@Demystifier, in theories where, say, Lorentz invariance emerges from a lattice, the discrete theory is still a quantum theory, so it is not totally discrete, since a discrete quantum theory uses a complex vector space. I assume you'd argue that this is also, in principle, not insurmountable?
 
  • Like
Likes Demystifier
  • #65
A. Neumaier said:
This assumes that the universe has a finite lifetime, which is questionable.
I don't think that humanity, at least, having a finite lifetime is a controversial opinion, though.

Anyway, I am wondering if there is any way to test some of these claims within some confines. I wonder if there is some kind of counterintuitive phenomenon that can never be adequately finitely described, at least theoretically. I believe some topics in chaos theory study whether some deterministic systems have finite predictability regardless of how fine your knowledge of the initial conditions is. It feels like this might be somewhat related to this debate. Could it ever be shown that an experiment agrees with the theory but disagrees "discontinuously" beyond a well-defined boundary, in a way inexplicable by errors, with any finitization of the theory regardless of how fine that is? And if that happened, would it really say much?
 
  • #66
atyy said:
One can have the continuum arise from the discrete, and symmetries can be emergent.
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
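A minimal numerical illustration of this ambiguity in Python (the function ##x^3## is my arbitrary choice; the two step sizes are the ones named above):

```python
# The discrete "derivative" depends on the arbitrary step h: nothing inside a
# purely discrete formalism singles out one value of h over another.
def discrete_derivative(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x**3  # exact derivative at x = 1 is 3
for h in (0.01, 0.0000043):
    print(f"h = {h}: derivative estimate = {discrete_derivative(f, 1.0, h)}")
# Each h gives a different answer; only the continuum limit h -> 0
# removes the arbitrariness.
```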
 
  • #67
Nullstein said:
To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
Questioning notions of absolute infinity or uncountably infinite sets is not automatically the same thing as advocating the use of discrete theories instead of the continuum.

The ##0.01## might just be good enough for the moment, or comparable to the best we can do at the moment. And it is not important whether it is exactly ##0.01## or ##0.01234##. Independently, it may no longer be good enough later, or the best we can do might improve over time, so that later the achievable accuracy is closer to ##0.0000043##.

Even the discrete might not be as absolute as an idealized mathematical description suggests. I might tell you that some number is 42, only to tell you later that I read it in a degraded old book, and that it might also have been 47. But the chances that it is really 42 are much bigger than the chances of it being 47. What I try to show with this example is that adding more words later can reduce the information content of what has been said previously. But how much information can be removed later depends on the representation of the information. So representations are important in intuitionistic mathematics, and classical mathematics is seen as a truncation where equivalence has been replaced by equality.

However, the criticism that "no convincing and insightful arguments" for alternative theories are known partly applies to intuitionistic mathematics as well. There are too many different options, and the benefits are hard to nail down. The (necessary and sufficient) consistency strength of those theories is often not significantly different from that of comparable classical theories with "extremely uncountable" sets. Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
 
  • #68
gentzen said:
Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
It is because the notion of the potential infinite is (by standard incompleteness theorems) necessarily ambiguous, i.e., not all statements about it are decidable for any finite specification of the notion.
 
  • #69
Nullstein said:
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
So what? Suppose we lived in the 19th century, when there was no direct evidence for the existence of atoms. We would know continuum fluid mechanics, and if someone argued that the fluid is really made of small atoms, you would argue that it's ambiguous because we don't know exactly how small those atoms are supposed to be. Does that mean that the atom hypothesis is completely undesirable?
 
  • #70
Demystifier said:
So what? Suppose we lived in the 19th century, when there was no direct evidence for the existence of atoms. We would know continuum fluid mechanics, and if someone argued that the fluid is really made of small atoms, you would argue that it's ambiguous because we don't know exactly how small those atoms are supposed to be. Does that mean that the atom hypothesis is completely undesirable?
That's hardly the same situation. Atoms added great explanatory power to the theory and are a form of reductionism, which is generally desirable. They didn't just reproduce the old results while invalidating previous insights, as would be the case with discretization. They solved an actual problem, while discretization is like running away from a problem that is already well understood not to require such a radical deviation from our current formalism. There's no need to throw out the baby with the bathwater. Essentially, I'm just arguing in favor of Occam's razor. You can of course reject Occam's razor and that's fine, but then we have to agree to disagree.
 
  • #71
Demystifier said:
I suggest you read some introduction to numerical analysis. Roughly, it's like ordinary analysis, except with finite ##\Delta x## instead of infinitesimal ##dx##. And there are no ##\varepsilon##'s and ##\delta##'s.
I became fascinated with this concept when I first encountered Gabriel's horn in Calc II in high school. It was mind-bending to think about. I picked up computer modeling at that point just to demonstrate that it couldn't be possible. Then I learned more about precision and numerical analysis. I suppose this is a little off topic, but I think it is somewhat related.
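For anyone who wants to replay that experiment, here is a rough Python sketch (midpoint-rule quadrature; the truncation points b are my arbitrary choices): the horn's volume converges to ##\pi## while its surface area keeps growing like ##2\pi\ln b##.

```python
import math

# Gabriel's horn: y = 1/x for x >= 1, rotated about the x-axis.
# Volume integrand pi/x^2 converges; surface integrand (2*pi/x)*sqrt(1+1/x^4) diverges.
def midpoint_integral(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

for b in (10.0, 100.0, 1000.0):
    vol = midpoint_integral(lambda x: math.pi / x**2, 1.0, b)
    surf = midpoint_integral(lambda x: 2 * math.pi / x * math.sqrt(1 + 1 / x**4), 1.0, b)
    print(f"b = {b:6.0f}:  volume = {vol:.5f} (pi = {math.pi:.5f})  surface = {surf:8.2f}")
```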
 
  • #72
A. Neumaier said:
Every introduction to numerical analysis (including the book I wrote on this topic) assumes real calculus when discussing differential equations.

If you work with discrete space and discrete time only, do you scrap all conservation laws? (But you even need one for Bohmian mechanics...)

Or how do you prove that energy is conserved for a particle in a time-independent external field?

Any attempt to give a full exposition of physics without using real numbers and continuity is doomed to failure. Not a single physics textbook does it. Claiming that physics does not need real numbers is simply ridiculous.
Noether's theorem, some basic assumptions about the universality (space-wise and time-wise) of physical laws, plus a lot of observations put symmetry and conservation laws on solid ground.
 
  • #73
Demystifier said:
It depends on what you mean by "need". Does a human physicist need pen and paper? Yes, she does. Does a human physicist need her brain? Yes, she does. But she needs them in different senses. The latter is absolutely necessary, while the former is very, very useful but not absolutely necessary. The need for real numbers is of the former kind. I can imagine an advanced civilization with advanced theoretical physics which does not use real numbers at all.
Hmm... Most PDE systems are likely unsolvable analytically, so we are left only with numerical approximations. Maybe quantum computing will prove that wrong.
 
  • #74
valenumr said:
Noether's theorem, some basic assumptions about the universality (space-wise and time-wise) of physical laws, plus a lot of observations put symmetry and conservation laws on solid ground.
Both Noether's theorem and conservation laws presuppose differential equations, hence real numbers.
 
Last edited:
  • Like
Likes dextercioby
  • #75
Nullstein said:
Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
You think Nature or Mathematics has to care about what is desirable for you?

Of course, discrete mathematics is much more complicated and less symmetric. Continuum mathematics is useful for simplification. Simplification is something which can also be reached by approximation. The approximation is simple, but wrong. So what? As long as it is fine as an approximation, we can use it; it is better than nothing. Once the exact computation is too complicated, we have no better choice than the approximation. So, using approximations is reasonable and fine.

But one should not confuse something being fine as an approximation with something being true. That's all.
 
  • #76
A. Neumaier said:
Both Noether's theorem and conservation laws presuppose differential equations, hence real numbers.
Noether's theorem and conservation laws exist also on lattices. See arXiv:1709.04788 (of course, this also uses real numbers, which nobody considers to be a problem).
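As a toy illustration of the general point in Python (my own sketch of a fully discrete-time kick-drift update; it is not the construction of arXiv:1709.04788, which keeps time continuous): a translation-invariant pair interaction still conserves total momentum, step by step, to machine precision.

```python
# Two particles on a line, pair force from V(x2 - x1) = (x2 - x1)^2.
# Equal and opposite kicks each discrete time step make total momentum
# an exact invariant of the update rule (up to floating-point rounding).
def force(x1, x2):
    return 2.0 * (x2 - x1)   # force on particle 1; particle 2 feels the opposite

x = [0.0, 1.0]
v = [0.3, -0.2]
dt = 0.01
for _ in range(10_000):
    f = force(x[0], x[1])
    v[0] += f * dt
    v[1] -= f * dt           # Newton's third law survives the discretization
    x[0] += v[0] * dt
    x[1] += v[1] * dt
print("total momentum after 10000 steps:", v[0] + v[1])  # stays 0.1 up to rounding
```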
 
  • #77
I didn't notice this thread, but I share a lot of Demystifier's objections. I may have a different route of reasoning, but as I try to reason from the perspective of an inside observer, and to reconstruct rules and laws from there, the notion of real numbers is not to be introduced lightly. For me it has to do with what is distinguishable by an inside observer. Also, any inferences risk getting out of control if you introduce uncountable sets and lose track of limits. The apparent infinities we see in physics today seem to arise because the order of things is lost, and the work of cleaning this up via various post-mess renormalization schemes is exactly what is ambiguous to me.

I also see continuum mathematics as an "approximation" of discrete systems in a high-complexity limit. Working with combinatorics and permutation symmetries seems a lot more complicated for anything but small systems, so I think there are good reasons for continuum mathematics; but for trying to understand some things it is, to me, not fundamental. Even Cox's and other reconstructions of probability that throw up a real number put me off.

My hunch is that once physics (and its measures) is reconstructed from an intrinsic perspective, a lot of the infinities should never show up in the first place. Even though this is complicated, and continuous symmetries may be understood as hypothetical large-complexity limits of permutation symmetries, it may save us from resorting to ambiguous renormalization methods. Intrinsic constructions would hopefully come with natural regulators.

I find it pathological to think that an observer or agent can distinguish between the points in a continuum. Even though most people may agree on that, the embedding into real numbers makes it deceptive as to what are mathematical degrees of freedom and what is of physical relevance, especially in the foundations of a measurement theory, because when one considers the physical entropy of a system, it's the embedding that defines the number.

/Fredrik
 
  • Like
Likes Demystifier
  • #78
Nullstein said:
This conversation seems very unproductive to me. Of course, in the end, calculations are made on a computer with finite precision, but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics.
What if you consider the idea (which most don't, but that's a separate question) that the laws of nature must be inferable from the perspective of a real FINITE observer/agent? In this perspective, the agent is the "computer", and the agent's actions should likely reflect the imperfections of the continuum approximation, except where they are negligible.

/Fredrik
 
  • #79
This discussion reminded me of the famous debate between the prominent Soviet physicist Yakov Zeldovich and the no less prominent Soviet mathematician Lev Pontryagin. As another great mathematician, Vladimir Arnold, recalled, it concerned the textbook "Higher Mathematics for Beginners and its Application to Physics" published by Zeldovich in the 1960s. The textbook was heavily criticized by mathematicians and censors of math literature in the Soviet Union at the time. It contained, among other things, a definition of the derivative as a ratio of increments "under the assumption that they are small". This definition, although blatantly disrespectful and almost criminal from the point of view of orthodox mathematics, is completely justified physically. As Zeldovich argued, increments of a physical quantity smaller than, say, ##10^{-100}## are a pure fiction: the structure of spacetime on such scales may turn out to be very far from the mathematical continuum. Zeldovich continued to defend his position and the debate ended with his complete victory. In 1980, Pontryagin, in his textbook on mathematical analysis for high school students, wrote: “Many physicists believe that the so-called strict definition of derivatives and integrals is not necessary for a good understanding of differential and integral calculus. I share their point of view."
 
Last edited:
  • Like
  • Love
  • Sad
Likes vanhees71, Jimster41, dextercioby and 4 others
  • #80
physicsworks said:
As Zeldovich argued, increments of a physical quantity smaller than, say, ##10^{-100}## are a pure fiction: the structure of spacetime on such scales may turn out to be very far from the mathematical continuum.

But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.

You can see this in finite precision arithmetic (where the structure of the reals breaks down at the scale of a relative error of about ##10^{-16}##). Try computing the derivative of ##3x^2-1.999999x## at ##x=1/3## by a difference quotient with ##dx=2^{-k}## for ##k=10:70##, say. The mathematical value of the derivative is ##10^{-6}##, but in standard IEEE arithmetic, ##k=55## gives the value ##-2##, and ##k>55## gives zero. The best approximation is obtained for ##k=33## and ##k=34##, giving the quite inaccurate value ##0.95347\cdot 10^{-6}##.

The only definition that makes sense physically is therefore one where ##dx## is small but not too small. Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
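For anyone who wants to reproduce this, a direct Python transcription of the experiment described above (standard IEEE double precision):

```python
# Difference quotients of f(x) = 3x^2 - 1.999999x at x = 1/3.
# The exact derivative is 6*(1/3) - 1.999999 = 1e-6.
f = lambda x: 3 * x**2 - 1.999999 * x
x = 1 / 3
for k in range(10, 71):
    dx = 2.0 ** (-k)
    print(k, (f(x + dx) - f(x)) / dx)
# For large k, f(x+dx) and f(x) agree in nearly all 16 significant digits, so
# the quotient degenerates into rounding noise (as reported above, k = 55
# yields exactly -2 and larger k yield 0).
```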
 
Last edited:
  • Like
Likes dextercioby, gentzen and Greg Bernhardt
  • #81
A. Neumaier said:
But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.
...
Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
If physical interactions at their fundamental level are essentially "stochastic", the notion of continuity should not be needed to phrase the laws of physics. I.e., the laws of physics might not be cast in terms of differential equations, but in terms of random transitions between discrete states, as guided random walks.

That's at least my vision: the differential equations represent a nice way to describe things "on average" in certain limits. But it may be a potential fallacy to make too strong deductions about physics using the power of continuum mathematics, because the continuum models may be the approximation, not the other way around.
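A toy version of this claim in Python (my own sketch; all parameters arbitrary): the coarse statistics of a purely discrete random rule already match a continuum differential equation, here the diffusion equation, which for an unbiased walk predicts a variance equal to the number of steps in lattice units.

```python
import random

# Unbiased random walk: a discrete, stochastic "law" with no
# differential equation in sight.
random.seed(1)
steps, walkers = 1_000, 5_000
finals = [sum(random.choice((-1, 1)) for _ in range(steps))
          for _ in range(walkers)]

mean = sum(finals) / walkers
var = sum((p - mean) ** 2 for p in finals) / walkers
print(f"sample variance = {var:.1f}   diffusion-equation prediction = {steps}")
```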

/Fredrik
 
  • #82
Fra said:
But it may be a potential fallacy to make too strong deductions about physics using the power of continuum mathematics, because the continuum models may be the approximation, not the other way around.
This is an undecidable issue since we never know the true laws of physics down to the tiniest scales.

On the basis of the success of the continuum methods, together with Ockham's razor, it is therefore safe to assume that for practical purposes the laws of nature are based on differential equations. At least, all known laws have been formulated in this way for several centuries, and there is no sign that this would have to change.
 
  • Like
Likes dextercioby
  • #83
A. Neumaier said:
This is an undecidable issue since we never know the true laws of physics down to the tiniest scales.
Yes, and this is also exactly why one argument is that one should start the construction from the observer's perspective. Such an approach would not exclude continuum limits, but it would also not presume them.

A. Neumaier said:
On the basis of the success of the continuum methods, together with Ockham's razor, it is therefore safe to assume that for practical purposes the laws of nature are based on differential equations. At least, all known laws have been formulated in this way for several centuries, and there is no sign that this would have to change.
No need to argue about the success of current models; that is unquestionable. But the success of the current methods is also why it's mentally difficult to break away from them, so the questions are, I think, rational.

The question of unifying laws is not so much a practical matter, I think; it's more a matter of principle that concerns the explanation of current models. I think it's a matter of interpretation and of choosing a stance for future research, and I don't quite agree.

/Fredrik
 
  • #84
A. Neumaier said:
But these states are chosen adaptively, depending on the problem. Moreover, the approximation accuracy depends on the number of states chosen, and chemists add states until convergence is observed. This is possible only in an infinite-dimensional Hilbert space. And the basis varies from problem to problem, which shows how nonphysical the discretized setting is.
Why not start with an infinite number of states and subtract?
 
  • #85
A. Neumaier said:
Every introduction to numerical analysis (including the book I wrote on this topic) assumes real calculus when discussing differential equations.

If you work with discrete space and discrete time only, do you scrap all conservation laws? (But you even need one for Bohmian mechanics...)

Or how do you prove that energy is conserved for a particle in a time-independent external field?

Any attempt to give a full exposition of physics without using real numbers and continuity is doomed to failure. Not a single physics textbook does it. Claiming that physics does not need real numbers is simply ridiculous.
Doesn’t mean they exist.
 
  • #86
A. Neumaier said:
But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.

You can see this in finite precision arithmetic (where the structure of the reals breaks down at the scale of a relative error of about ##10^{-16}##). Try computing the derivative of ##3x^2-1.999999x## at ##x=1/3## by a difference quotient with ##dx=2^{-k}## for ##k=10:70##, say. The mathematical value of the derivative is ##10^{-6}##, but in standard IEEE arithmetic, ##k=55## gives the value ##-2##, and ##k>55## gives zero. The best approximation is obtained for ##k=33## and ##k=34##, giving the quite inaccurate value ##0.95347\cdot 10^{-6}##.

The only definition that makes sense physically is therefore one where ##dx## is small but not too small. Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
Interesting. It’s almost like you are making the opposite of the argument you are trying to make.
Also, I sort of thought the idea of a discrete limit h (and/or b) functioning like the speed limit of light was a key to @RUTA 's block world reconciliation of QM and GR.

I agree with him and Pontryagin. Calculus is a tool that depends on flying elephants. Still a good tool.
 
Last edited:
  • #87
A. Neumaier said:
If you work with discrete space and discrete time only, do you scrap all conservation laws?
One possibility is to see it from an agent perspective:

Instead of seeing it as simply "scrapping" all the continuum symmetries, and losing powerful constraints and deductive power, one might see it, from an agent perspective, as rejecting the continuum symmetries as valid intrinsic rules of reasoning, on the grounds that they are typically not inferable with 100% confidence by a finite agent (because one seeks an intrinsic reconstruction).

Instead one could embrace the inferable but subjective and approximate symmetries, and see them as guiding the actions in the evolving context as the agents interact. One then replaces the constraining notion of observer equivalence with mere observer democracy, and then faces the problem of showing that the democratic process is asymptotically compatible with an equivalence in the relevant cases, and hopefully also of explaining WHY the symmetries ARE spontaneously emergent, but also WHY they are not perfect. IF that succeeds (which may not be the case, of course), then it seems we have added value and insight. This is, as I see it, one possible rationale for this way of thinking.

/Fredrik
 
  • #88
Jimster41 said:
Calculus is a tool that depends on flying elephants.
Can you explain?
 
  • #89
Demystifier said:
Can you explain?
My understanding of calculus is that it depends on the rules of convergence at infinite limits. I like the convergence part. I don’t like the introduction of something as bizarre as “infinity”; it’s fundamentally undefinable except in the utterly abstract sense. Always found that frustrating.
 
Last edited:
  • Skeptical
  • Like
Likes PeroK, weirdoguy and Demystifier
  • #90
Difficulty with Infinity as “just another math widget” aside, I wonder if continuum assumptions are a barrier to answers.

“Evolutionary Dynamics” by Nowak, specifically the phenomenon of evolutionary drift as an analogue of spontaneous symmetry breaking, left me wondering whether evolution as a phenomenon isn't a clue to the answer of whether the limit is continuous.

Not my ideas: I got them especially from Cohen-Tannoudji’s “Universal Constants in Physics” and Chaisson’s “The Life Era”, among others. Most recently I was excited to see @RUTA ’s modest proposal that such constants, along with c, sort of dictate (describe for us) the finite constraints of the lattice. Still working through his book.
 
Last edited:
  • #91
Jimster41 said:
Difficulty with Infinity as “just another math widget” aside, I wonder if continuum assumptions are a barrier to answers.
The first kind of difficulty is not what I think anyone else in the thread referred to, though; it was more the latter thing you mention.

Demystifier, for example, referred to this paper by Baez in post #19, with a discussion:

Struggles with the Continuum
"We have seen that in every major theory of physics, challenging mathematical questions arise from the assumption that spacetime is a continuum.The continuum threatens us with infinities. Do these infinities threaten our ability to extract predictions from these theories—or even our ability to formulate these theories in a precise way? We can answer these questions, but only with hardwork. Is this a sign that we are somehow on the wrong track? Is the continuum as we understand it only an approximation to some deeper model of spacetime? Only time will tell. Nature is providing us with plenty of clues, but it will take patience to read them correctly."
-- https://arxiv.org/abs/1609.01421

/Fredrik
 
  • Like
Likes Demystifier, *now* and Jimster41
  • #92
I have to say that the discussion of whether the derivative of a function can be defined using small finite differences is very strange. If you think it can, then you are missing the whole point of analysis. If, on the other hand, you mean the rate of change of a physical quantity, then you may have a point. But derivatives!
 
  • #93
In an old conversation between Werner Heisenberg and Carl Friedrich von Weizsäcker, a compromise is offered: "die Vergangenheit ist in einem gewissen Sinne diskret, und die Zukunft ist kontinuierlich" (in a certain sense, the past is discrete, and the future is continuous). The subsequent elaboration makes it clear that they use the intuitionistic conception, where the continuous future is only potentially infinite, instead of being a completed (uncountably) infinite set.
 
  • #94
A. Neumaier said:
But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.
And how does the absence of these help your argument? Also, before we even touch on differentiability: to talk about continuity you need the definition of a limit, which on such scales a) may not exist in principle (in the usual epsilon-delta sense) and thus may need to be replaced with something else; or b) even if the usual definition is applicable, the limit as such may not exist due to some selection rule.
martinbn said:
I have to say that the discussion of whether the derivative of a function can be defined using small finite differences is very strange. If you think it can, then you are missing the whole point of analysis.
No one here, I believe, is talking about that. The point is that analysis may not be the right mathematical language to describe nature at very small scales. After all, if spacetime is doomed and, as some believe, quantum mechanics together with it will emerge from something more fundamental, why should analysis survive?
 
  • Skeptical
Likes weirdoguy and PeroK
  • #95
Sunil said:
Noether's theorem and conservation laws exist also on lattices. See arXiv:1709.04788 (of course, this also uses real numbers, which nobody considers to be a problem).
This paper is still based on differential equations, since it keeps continuous time and uses lattices only for the spatial part. Thus your observation is not in conflict with my statements:

A. Neumaier said:
Both Noether's theorem and conservation laws presuppose differential equations, hence real numbers.
A. Neumaier said:
On the basis of the success of the continuum methods, together with Ockham's razor, it is therefore safe to assume that for practical purposes the laws of nature are based on differential equations. At least, all known laws have been formulated in this way for several centuries, and there is no sign that this would have to change.
 
  • #96
physicsworks said:
“Many physicists believe that the so-called strict definition of derivatives and integrals is not necessary for a good understanding of differential and integral calculus. I share their point of view."
That raises the question: what is a "good" understanding?
 
  • Like
Likes vanhees71 and weirdoguy
  • #97
Jimster41 said:
My understanding of calculus is that it depends on the rules of convergence at infinite limits. I like the convergence part. I don’t like the introduction of something as bizarre as “infinity”; it’s fundamentally undefinable except in the utterly abstract sense. Always found that frustrating.
Your understanding is wrong. The limit is defined entirely using the properties of finite numbers. That was the whole point of the rigorous mathematics developed in the 19th century: to remove any dependence on undefinable concepts.
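For reference, the standard definition mentions only finite quantities; no completed infinity appears anywhere:
$$\lim_{x\to a} f(x) = L \quad\text{means}\quad \forall\,\varepsilon>0\;\exists\,\delta>0\;\forall x:\; 0<|x-a|<\delta \;\Rightarrow\; |f(x)-L|<\varepsilon.$$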
 
  • Like
  • Skeptical
Likes Jimster41 and vanhees71
  • #98
physicsworks said:
why should analysis survive?
Because mathematics, and analysis in particular, is not dependent on physical theories. If the universe turns out to be finite, that doesn't mean that there are only finitely many prime numbers, for example.

Take the mathematical models of the spread of the COVID virus. The number of people infected can only be a whole number, but it can still be modeled effectively using differential equations and all the power of calculus.
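A minimal sketch of that point in Python (the classic SIR equations with Euler steps; the values of N, beta, and gamma are invented for illustration):

```python
# SIR epidemic model: infected counts are really integers, yet the continuum ODEs
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
# describe the epidemic curve effectively.
N, beta, gamma, dt = 1_000_000, 0.3, 0.1, 0.1
S, I, R = N - 1.0, 1.0, 0.0
peak = 0.0
for _ in range(int(300 / dt)):     # 300 days of Euler steps of size dt
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    peak = max(peak, I)
print(f"peak infected ~ {peak:.0f}, final susceptible ~ {S:.0f} of {N}")
```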
 
  • Like
  • Skeptical
Likes physicsworks and Demystifier
  • #99
Interested_observer said:
That raises the question: what is a "good" understanding?
The one that achieves the goal, in this case the goal of making physics computations the predictions of which agree with observations. In that sense, Newton's understanding of calculus and Dirac's understanding of his delta function were good.
 
  • Like
Likes physicsworks, vanhees71 and gentzen
  • #100
Demystifier said:
The one that achieves the goal, in this case the goal of making physics computations the predictions of which agree with observations.
This is only part of the goal. The real goal is to understand Nature in a way that allows us to make the best use of it. This needs much more than just making physics computations the predictions of which agree with observations.
 
  • Like
Likes Interested_observer
