A Infinities in QFT (and physics in general)

Summary
The discussion centers on the relationship between quantum field theory (QFT) and Bell nonlocality, emphasizing that Bell's nonlocality can be derived from finite-dimensional Hilbert spaces without invoking QFT or relativity. Participants express skepticism about the existence of actual infinities in the physical world, arguing that infinite-dimensional Hilbert spaces are merely idealizations for mathematical convenience. The Reeh-Schlieder theorem is highlighted as a rigorous expression of nonlocality in axiomatic QFT, but its connection to Bell inequalities is deemed less direct. The conversation also touches on the practical implications of computational methods in quantum mechanics, with a focus on how physicists navigate the challenges of infinite-dimensional spaces. Ultimately, the discourse reflects a broader philosophical debate on the nature of mathematical models versus physical reality.
  • #61
gentzen said:
But what I had in mind was more related to a paradox in interpretation of probability than to an attack on using real numbers to describe reality. The paradox is how mathematics forces us to give precise values for probabilities, even for events which cannot be repeated arbitrarily often (not even in principle).
It turns out that I had already tried to clarify what I had in mind, with a similar thought, shortly after I found lemur's comment ("QM is Nature's way of having to avoid dealing with an infinite number of bits"). I just reread the main article and realized that lemur's comment was an ingenious defense of it (against arbenboba's criticism).
I want to add that I do appreciate Gisin's later work making the connection to intuitionism. But even though I had contact with people working on dependent type theory, category theory, and all that higher-order stuff, it never crossed my mind that there might be a connection to the riddle of how to avoid accidental infinite information content.

Demystifier said:
martinbn said:
But you had no objections to the rational numbers, nor to the integers.
Actually I did, but not explicitly. When I was talking about computations with a computer, I took for granted that a finite computer can represent only a finite set of different numbers.
Just as some physicists (e.g. Sidney Coleman) guess that what we really don't understand is classicality, Joel David Hamkins guesses that what we really don't understand is finiteness. Timothy Chow wondered: "It still strikes me as difficult to construct a convincing heuristic argument for this point of view." I tried to give an intuitive explanation, highlighting the importance of using a prefix-free code as part of the encoding of a natural number (with infinite strings of 0s and 1s as the starting point). But nobody seems to appreciate simple explanations. So I later wrote a long and convoluted post that very few will ever read (or even understand) in its entirety, with the clickbait title: Defining a natural number as a finite string of digits is circular. As expected, it was significantly more convincing, as witnessed by reactions like: "I’d always taken it as a given that, if you don’t have a pre-existing understanding of what’s meant by a “finite positive integer,” then you can’t even get started in doing any kind of math without getting trapped in an infinite regress."
 
  • #62
Nullstein said:
The central issue is: All plausible, intuitive and beautiful arguments that have been successfully used to derive our best physical theories, such as symmetry principles, and that really make the difference between physics and stamp collecting, rely heavily on continuum mathematics.

Sure, we could discretize our theories, but we would lose all the deep insights we had gained, and it would convert physics into mere stamp collecting. Unless we can come up with even more plausible, intuitive and beautiful arguments for discrete theories, we shouldn't go that route.
One can have the continuum arise from the discrete, and symmetries can be emergent.

https://ocw.mit.edu/courses/physics...pring-2014/lecture-notes/MIT8_334S14_Lec1.pdf
"The averaged variables appropriate to these length and time scales are no longer the discrete set of particle degrees of freedom, but slowly varying continuous fields. For example, the velocity field that appears in the Navier–Stokes equations is quite distinct from the velocities of the individual particles in the fluid. Hence the productive method for the study of collective behavior in interacting systems is the Statistical Mechanics of Fields."

https://arxiv.org/abs/1106.4501
"We have seen how a strongly-coupled CFT (or even its discrete progenitors) can robustly lead,“holographically”, to emergent General Relativity and gauge theory in the AdS description."
 
  • Like
Likes Fra and Demystifier
  • #63
stevendaryl said:
That's a slightly different issue. If the discrete model is the discrete counterpart to 4D spacetime, then at sufficiently large length scales, it might look continuous. But what is the reason that a discrete model would happen to look like the discrete counterpart to 4D spacetime, other than if you are trying to simulate the latter with the former?
I don't know, that's an open question.
 
  • #64
@Demystifier, in theories where, say, Lorentz invariance emerges from a lattice, the discrete theory is still a quantum theory, so it is not totally discrete, since a discrete quantum theory uses a complex vector space. I assume you'd argue that this, too, is in principle not insurmountable?
 
  • Like
Likes Demystifier
  • #65
A. Neumaier said:
This assumes that the universe has a finite lifetime, which is questionable.
I don't think it's controversial, though, that at least humanity has a finite lifetime.

Anyway, I am wondering whether there is any way to test some of these claims within some confines. I wonder if there is some kind of counterintuitive phenomenon that can never be adequately described finitely, at least theoretically. I believe some topics in chaos theory study whether certain deterministic systems have finite predictability regardless of how fine your knowledge of the initial conditions is. It feels like this might be somewhat related to this debate (see the sketch below). Could it ever be shown that an experiment agrees with the theory but disagrees "discontinuously" beyond a well-defined boundary, in a way not explicable by errors, with any finitization of the theory, no matter how fine? And if that happened, would it really say much?
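A minimal sketch (editor's illustration) of the finite-predictability phenomenon mentioned above, using the logistic map: two trajectories agreeing to 15 decimal places become completely uncorrelated after a few dozen steps, and halving the initial error buys only about one extra step.

[CODE=python]
# Finite predictability in a chaotic map: errors roughly double each step
# (Lyapunov exponent ln 2), so an initial error of 1e-15 reaches order 1
# after about 50 iterations of x -> 4x(1-x).
x, y = 0.4, 0.4 + 1e-15
for n in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.2e}")
[/CODE]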
 
  • #66
atyy said:
One can have the continuum arise from the discrete, and symmetries can be emergent.
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
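To see the ambiguity concretely, here is a minimal Python sketch (editor's illustration; ##f(x)=\sin x## is an arbitrary stand-in): the two step sizes define genuinely different operators that only agree in the continuum limit.

[CODE=python]
import math

def forward_diff(f, x, h):
    # Discrete replacement for df/dx with a fixed, arbitrary step h.
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # d/dx sin(x) at x = 1
for h in (0.01, 0.0000043):
    approx = forward_diff(math.sin, x, h)
    print(f"h = {h:.7f}: quotient = {approx:.8f}, error = {approx - exact:+.2e}")
# Nothing in the discrete formalism itself prefers one h over the other;
# each choice is a slightly different theory.
[/CODE]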
 
  • #67
Nullstein said:
To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
Questioning notions of absolute infinity or uncountable infinite sets is not automatically the same thing as advocating discrete theories instead of the continuum.

The ##0.01## might just be good enough for the moment, or comparable to the best we can do at the moment. And it is not important whether it is exactly ##0.01## or ##0.01234##. Independently, it may no longer be good enough later, or the best we can do might improve over time, so that later the achievable accuracy is closer to ##0.0000043##.

Even the discrete might not be as absolute as an idealized mathematical description suggests. I might tell you that some number is 42, only to tell you later that I read it in a degraded old book, and that it might also have been 47. But the chances that it is really 42 are much bigger than the chances of it being 47. What I try to show with this example is that adding more words later can reduce the information content of what has been said previously. But how much information can be removed later depends on the representation of the information. So representations are important in intuitionistic mathematics, and classical mathematics is seen as a truncation where equivalence has been replaced by equality.

However, the criticism that "no convincing and insightful arguments" for alternative theories are known partly also applies to intuitionistic mathematics. There are too many different options, and the benefits are hard to nail down. The (necessary and sufficient) consistency strength of those theories is often not significantly different from comparable classical theories with "extremely uncountable" sets. Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
 
  • #68
gentzen said:
Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
It is because the notion of the potential infinite is (by standard incompleteness theorems) necessarily ambiguous, i.e., not all statements about it are decidable for any finite specification of the notion.
 
  • #69
Nullstein said:
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
So what? Suppose that we lived in the 19th century, when there was no direct evidence for the existence of atoms. We know continuum fluid mechanics, and if someone argued that the fluid is really made of small atoms, you would argue that it's ambiguous because we don't know how small exactly those atoms are supposed to be. Does it mean that the atom hypothesis is completely undesirable?
 
  • #70
Demystifier said:
So what? Suppose that we lived in the 19th century, when there was no direct evidence for the existence of atoms. We know continuum fluid mechanics, and if someone argued that the fluid is really made of small atoms, you would argue that it's ambiguous because we don't know how small exactly those atoms are supposed to be. Does it mean that the atom hypothesis is completely undesirable?
That's hardly the same situation. Atoms added great explanatory power to the theory and are a form of reductionism, which is generally desirable. They didn't just reproduce the old results and at the same time invalidate previous insights as would be the case with discretization. They solved an actual problem, while discretization is like running away from a problem, which is already well understood not to require such a radical deviation from our current formalism. There's no need to throw out the baby with the bathwater. Essentially, I'm just arguing in favor of Occam's razor. You can of course reject Occam's razor and that's fine, but then we have to agree to disagree.
 
  • #71
Demystifier said:
I suggest you read some introduction to numerical analysis. Roughly, it's like ordinary analysis, except with finite ##\Delta x## instead of infinitesimal ##dx##. And there are no ##\varepsilon##'s and ##\delta##'s.
I became fascinated with this concept when I first encountered Gabriel's horn in Calc II in high school. It was mind-bending to think about. I picked up computer modeling at that point just to demonstrate it couldn't be possible. Then I learned more about precision and numerical analysis. I suppose this is a little off topic, but I think it is somewhat relatable.
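For anyone who wants to redo that modeling exercise, here is a minimal Python sketch (editor's illustration): truncate Gabriel's horn at ##x = L## and integrate numerically; the volume converges to ##\pi## while the surface area keeps growing like ##2\pi \ln L##.

[CODE=python]
import math

# Gabriel's horn: rotate y = 1/x (x >= 1) about the x-axis, truncated at x = L.

def horn_volume(L, n=100_000):
    # V = pi * integral from 1 to L of (1/x)^2 dx, midpoint rule
    h = (L - 1) / n
    return math.pi * h * sum((1.0 / (1 + (i + 0.5) * h)) ** 2 for i in range(n))

def horn_surface(L, n=100_000):
    # A = 2*pi * integral from 1 to L of (1/x) * sqrt(1 + 1/x^4) dx, midpoint rule
    h = (L - 1) / n
    total = 0.0
    for i in range(n):
        x = 1 + (i + 0.5) * h
        total += (1.0 / x) * math.sqrt(1 + 1.0 / x**4)
    return 2 * math.pi * h * total

for L in (10, 100, 1000):
    print(f"L = {L:4d}: volume = {horn_volume(L):.4f} (pi = {math.pi:.4f}), "
          f"surface = {horn_surface(L):.1f} (2*pi*ln L = {2 * math.pi * math.log(L):.1f})")
[/CODE]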
 
  • #72
A. Neumaier said:
Every introduction to numerical analysis (including the book I wrote on this topic) assumes real calculus when discussing differential equations.

If you work with discrete space and discrete time only, do you scrap all conservation laws? (But you even need one for Bohmian mechanics...)

Or how do you prove that energy is conserved for a particle in a time-independent external field?

Any attempt to give a full exposition of physics without using real numbers and continuity is doomed to failure. Not a single physics textbook does it. Claiming that physics does not need real numbers is simply ridiculous.
Noether's theorem, some basic assumptions about the universality (space-wise and time-wise) of physical laws, plus a lot of observations put symmetry and conservation laws on solid ground.
 
  • #73
Demystifier said:
It depends on what you mean by "need". Does a human physicist need pen and paper? Yes, she does. Does a human physicist need her brain? Yes, she does. But she needs them in different senses. The latter is absolutely necessary, while the former is very, very useful but not absolutely necessary. The need for real numbers is of the former kind. I can imagine an advanced civilization with advanced theoretical physics which does not use real numbers at all.
Hmm... Most PDE systems are likely unsolvable analytically, so we are left only with numerical approximations. Maybe quantum computing will prove that wrong.
 
  • #74
valenumr said:
Noether's theorem, some basic assumptions about the universality (space-wise and time-wise) of physical laws, plus a lot of observations put symmetry and conservation laws on solid ground.
Both Noether's theorem and conservation laws presuppose differential equations, hence real numbers.
 
Last edited:
  • Like
Likes dextercioby
  • #75
Nullstein said:
Here's an example: We might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, but we have introduced an ambiguity: Why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
You think Nature or Mathematics have to care about what is desirable for you?

Of course, discrete mathematics is much more complicated and less symmetric. Continuum mathematics is a useful simplification. Simplification is something that can also be reached by approximation. The approximation is simple, but wrong. So what? As long as it is fine as an approximation, we can use it; it is better than nothing. When the exact computation is too complicated, we have no better choice than the approximation. So using approximations is reasonable and fine.

But one should not confuse something being fine as an approximation with something being true. That's all.
 
  • #76
A. Neumaier said:
Both Noether's theorem and conservation laws presuppose differential equations, hence real numbers.
Noether's theorem and conservation laws exist on lattices too. See arXiv:1709.04788. (Of course, this also uses real numbers, which nobody considers to be a problem.) A toy illustration of an exact discrete-time conservation law is sketched below.
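For concreteness, a toy illustration (editor's sketch, not the construction from the cited paper): the leapfrog discretization of a harmonic oscillator with ##\omega = 1## exactly conserves the modified energy ##E^* = p^2/2 + (1 - \Delta t^2/4)\,x^2/2## — a genuinely discrete-time conservation law — while the continuum energy only oscillates within a bounded band.

[CODE=python]
# Leapfrog (kick-drift-kick) for x'' = -x. The continuum energy oscillates
# at order dt^2; the modified energy is conserved to rounding error.
dt = 0.1
x, p = 1.0, 0.0

def e_naive(x, p):
    return 0.5 * p**2 + 0.5 * x**2

def e_mod(x, p, dt):
    return 0.5 * p**2 + 0.5 * (1 - dt**2 / 4) * x**2

e0, m0 = e_naive(x, p), e_mod(x, p, dt)
for step in range(1, 100_001):
    p -= 0.5 * dt * x   # half kick
    x += dt * p         # drift
    p -= 0.5 * dt * x   # half kick
    if step % 25_000 == 0:
        print(f"step {step:6d}: naive drift = {e_naive(x, p) - e0:+.2e}, "
              f"modified drift = {e_mod(x, p, dt) - m0:+.2e}")
[/CODE]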
 
  • #77
I didn't notice this thread, but I share a lot of Demystifier's objections. I may have a different route of reasoning, but as I try to reason from the perspective of an inside observer, and to reconstruct rules and laws from there, the notion of real numbers is not to be introduced lightly. For me it has to do with what is distinguishable by an inside observer. Also, any inference risks getting out of control if you introduce uncountable sets and lose track of limits. The apparent infinities we see in physics today seem to arise because the order of things is lost, and the work of cleaning this up via various post-mess renormalization schemes is exactly what is ambiguous to me.

I also see continuum mathematics as an "approximation" of discrete systems in a high-complexity limit. Working with combinatorics and permutation symmetries seems a lot more complicated for anything but small systems, so I think there are good reasons for continuum mathematics; but in trying to understand some things, it is, to me, not fundamental. Even Cox's and other reconstructions of probability that throw up a real number put me off.

My hunch is that once physics (and its measures) is reconstructed from an intrinsic perspective, a lot of the infinities should never show up in the first place. Even though this is complicated, and continuous symmetries may be understood as hypothetical large-complexity limits of permutation symmetries, it may save us from resorting to ambiguous renormalization methods. Intrinsic constructions would hopefully come with natural regulators.

I find it pathological to think that an observer or agent can distinguish between the points in a continuum. Even though most people may agree on that, the embedding into real numbers makes it deceptive as to what is a mathematical degree of freedom and what is of physical relevance, especially in the foundations of a measurement theory, because when one considers the physical entropy of a system, it's the embedding that defines the number.

/Fredrik
 
  • Like
Likes Demystifier
  • #78
Nullstein said:
This conversation seems very unproductive to me. Of course, in the end, calculations are made on a computer with finite precision, but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics.
What if you consider the idea (which most don't, but that's a separate question) that the laws of nature must be inferable from the perspective of a real FINITE observer/agent? In this perspective, the agent is the "computer", and the agent's actions should likely reflect the deviations from the continuum approximation, except when they are negligible.

/Fredrik
 
  • #79
This discussion reminded me of the famous debate between the prominent Soviet physicist Yakov Zeldovich and the no less prominent Soviet mathematician Lev Pontryagin. As another great mathematician, Vladimir Arnold, recalls, it concerned a textbook "Higher Mathematics for Beginners and its Application to Physics" published by Zeldovich in the 1960s. The textbook was heavily criticized by mathematicians and censors of mathematical literature in the Soviet Union at the time. It contained, among other things, a definition of the derivative as a ratio of increments "under the assumption that they are small". This definition, although blatantly disrespectful and almost criminal from the point of view of orthodox mathematics, is completely justified physically. As Zeldovich argued, increments of a physical quantity smaller than, say, ##10^{-100}## are a pure fiction: the structure of spacetime on such scales may turn out to be very far from the mathematical continuum. Zeldovich continued to defend his position, and the debate ended with his complete victory. In 1980, Pontryagin wrote in his textbook on mathematical analysis for high-school students: "Many physicists believe that the so-called strict definition of derivatives and integrals is not necessary for a good understanding of differential and integral calculus. I share their point of view."
 
Last edited:
  • Like
  • Love
  • Sad
Likes vanhees71, Jimster41, dextercioby and 4 others
  • #80
physicsworks said:
As Zeldovich argued, increments of a physical quantity smaller than, say, ##10^{-100}## are a pure fiction: the structure of spacetime on such scales may turn out to be very far from the mathematical continuum.

But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.

You can see this in finite precision arithmetic (where the structure of the reals breaks down at the scale of a relative error of about ##10^{-16}##). Try computing the derivative of ##3x^2-1.999999x## at ##x=1/3## by a difference quotient with ##dx=2^{-k}## for ##k=10,\dots,70##, say. The mathematical value of the derivative is ##10^{-6}##, but in standard IEEE arithmetic, ##k=55## gives the value ##-2##, and ##k>55## gives zero. The best approximation is obtained for ##k=33## and ##k=34##, giving the quite inaccurate value ##0.95347\cdot 10^{-6}##.

The only definition that makes sense physically is therefore one where ##dx## is small but not too small. Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
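A minimal Python sketch (editor's illustration) of the experiment just described, assuming standard IEEE double precision:

[CODE=python]
def f(x):
    return 3 * x**2 - 1.999999 * x

x = 1.0 / 3.0  # exact derivative: 6*x - 1.999999 = 1e-6
for k in range(10, 71):
    dx = 2.0 ** -k
    quotient = (f(x + dx) - f(x)) / dx
    print(f"k = {k:2d}: difference quotient = {quotient:+.6e}")
# Per the post: the quotient is closest to 1e-6 around k = 33..34,
# produces spurious values near k = 55, and is exactly 0 beyond that.
[/CODE]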
 
Last edited:
  • Like
Likes dextercioby, gentzen and Greg Bernhardt
  • #81
A. Neumaier said:
But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.
...
Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
If physical interactions at their fundamental level are essentially "stochastic", the notion of continuity should not be needed to phrase the laws of physics. I.e., the laws of physics might not be cast in terms of differential equations, but in terms of random transitions between discrete states, as guided random walks.

That's at least my vision: the differential equations represent a nice way to describe things "on average" in certain limits (see the sketch below). But it may be a potential fallacy to draw too strong conclusions about physics using the power of continuum mathematics, because the continuum models may be the approximation, not the other way around.
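A minimal sketch (editor's illustration) of that vision: a biased discrete random walk whose ensemble average obeys the continuum drift equation ##dx/dt = v##, so the differential equation emerges as the "on average" description of random transitions between discrete states.

[CODE=python]
import random

# Each walker hops +1 with probability (1+v)/2 and -1 with probability (1-v)/2,
# so the expected displacement per step is v.
v, steps, walkers = 0.3, 100, 10_000
positions = [0.0] * walkers
for _ in range(steps):
    for i in range(walkers):
        positions[i] += 1 if random.random() < (1 + v) / 2 else -1

mean = sum(positions) / walkers
print(f"ensemble mean after {steps} steps: {mean:.2f} (drift ODE predicts v*t = {v * steps:.2f})")
[/CODE]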

/Fredrik
 
  • #82
Fra said:
But it may be a potential fallacy to draw too strong conclusions about physics using the power of continuum mathematics, because the continuum models may be the approximation, not the other way around.
This is an undecidable issue since we never know the true laws of physics down to the tiniest scales.

On the basis of the success of the continuum methods, together with Ockham's razor, it is therefore safe to assume that, for practical purposes, the laws of nature are based on differential equations. At least, all known laws have been formulated this way for several centuries, and there is no sign that this would have to change.
 
  • Like
Likes dextercioby
  • #83
A. Neumaier said:
This is an undecidable issue since we never know the true laws of physics down to the tiniest scales.
Yes, and this is also exactly why one argument is that one should start the construction from the observer perspective. Such an approach would not exclude continuum limits, but it would not presume them either.

A. Neumaier said:
On the basis of the success of the continuum methods, together with Ockham's razor, it is therefore safe to assume that, for practical purposes, the laws of nature are based on differential equations. At least, all known laws have been formulated this way for several centuries, and there is no sign that this would have to change.
No need to argue about the success of current models; that is unquestionable. But the success of the current methods is also why it's mentally difficult to let go of them, so the questions are, I think, rational.

The question of unifying laws is not so much a practical matter, I think; it's more a matter of principle that concerns the explanation of current models. I think it's a matter of interpretation and of choosing a stance for future research, and I don't quite agree.

/Fredrik
 
  • #84
A. Neumaier said:
But these states are chosen adaptively, depending on the problem. Moreover, the approximation accuracy depends on the number of states chosen, and chemists add states until convergence is observed. This is possible only in an infinite-dimensional Hilbert space. And the basis varies from problem to problem, which shows how nonphysical the discretized setting is.
Why not start with an infinite number of states and subtract?
 
  • #85
A. Neumaier said:
Every introduction to numerical analysis (including the book I wrote on this topic) assumes real calculus when discussing differential equations.

If you work with discrete space and discrete time only, do you scrap all conservation laws? (But you even need one for Bohmian mechanics...)

Or how do you prove that energy is conserved for a particle in a time-independent external field?

Any attempt to give a full exposition of physics without using real numbers and continuity is doomed to failure. Not a single physics textbook does it. Claiming that physics does not need real numbers is simply ridiculous.
Doesn’t mean they exist.
 
  • #86
A. Neumaier said:
But continuity or differentiability at that scale is also pure fiction; the noise in the values may be much bigger and would render the quotient meaningless.

You can see this in finite precision arithmetic (where the structure of the reals breaks down at the scale of a relative error of about ##10^{-16}##). Try computing the derivative of ##3x^2-1.999999x## at ##x=1/3## by a difference quotient with ##dx=2^{-k}## for ##k=10,\dots,70##, say. The mathematical value of the derivative is ##10^{-6}##, but in standard IEEE arithmetic, ##k=55## gives the value ##-2##, and ##k>55## gives zero. The best approximation is obtained for ##k=33## and ##k=34##, giving the quite inaccurate value ##0.95347\cdot 10^{-6}##.

The only definition that makes sense physically is therefore one where ##dx## is small but not too small. Since small and too small cannot be quantified exactly, it yields a fuzzy number, not an exact value.
Interesting. It's almost like you are making the opposite of the argument you are trying to make.
Also, I sort of thought the idea of a discrete limit h (and/or b) functioning like the speed limit of light was a key to @RUTA's block world reconciliation of QM and GR.

I agree with him and Pontryagin. Calculus is a tool that depends on flying elephants. Still a good tool.
 
Last edited:
  • #87
A. Neumaier said:
If you work with discrete space and discrete time only, do you scrap all conservation laws?
One possibility to see it from an agent perspective:

Instead of seeing it as simply "scrapping" all the continuum symmetries, and losing powerful constraints and deductive power, one might see it from an agent perspective: we reject the continuum symmetries as valid intrinsic rules of reasoning, on the grounds that they are typically not inferable with 100% confidence by a finite agent (because one seeks an intrinsic reconstruction).

Instead, one could embrace the inferable but subjective and approximate symmetries and see them as guiding the actions in the evolving context as the agents interact. One then replaces the constraint notion of observer equivalence with mere observer democracy, and then faces the problem of showing that the democratic process is asymptotically compatible with an equivalence in the relevant cases, and hopefully also of explaining WHY the symmetries ARE spontaneously emergent, but also WHY they are not perfect. IF that succeeds (which may not be the case, of course), then it seems we have "added value" and insight. This is, as I see it, one possible rationale for this way of thinking.

/Fredrik
 
  • #88
Jimster41 said:
Calculus is a tool that depends on flying elephants.
Can you explain?
 
  • #89
Demystifier said:
Can you explain?
My understanding of calculus is that it depends on the rules of convergence at infinite limits. I like the convergence part. I don't like the introduction of something as bizarre as "infinity"; it's fundamentally undefinable except in the utterly abstract sense. I have always found that frustrating.
 
Last edited:
  • Skeptical
  • Like
Likes PeroK, weirdoguy and Demystifier
  • #90
Difficulty with Infinity as “just another math widget” aside, I wonder if continuum assumptions are a barrier to answers.

“Evolutionary Dynamics” by Nowak, specifically the phenomenon of evolutionary drift as an analogue of spontaneous symmetry breaking, left me wondering whether evolution as a phenomenon isn't a clue to the answer of whether the limit is continuous.

Not my ideas: I got them especially from Cohen-Tannoudji's “Universal Constants in Physics” and Chaisson's “The Life Era”, among others. Most recently I was excited to see @RUTA's modest proposal that such constants, along with c, sort of dictate (describe for us) the finite constraints of the lattice. Still working through his book.
 
Last edited:
