The Universe is No Simulation

fresh_42
TL;DR
The authors invoke Gödel and Tarski and investigate the algorithmic nature of the universe. A philosophical play with words far from reality, or a serious contribution to the possibility of a holographic universe?
I came across the following paper by Mir Faizal, Lawrence M. Krauss, Arshid Shabir, and Francesco Marino from BC.

Consequences of Undecidability in Physics on the Theory of Everything


Abstract

General relativity treats spacetime as dynamical and exhibits its breakdown at singularities. This failure is interpreted as evidence that quantum gravity is not a theory formulated within spacetime; instead, it must explain the very emergence of spacetime from deeper quantum degrees of freedom, thereby resolving singularities. Quantum gravity is therefore envisaged as an axiomatic structure, and algorithmic calculations acting on these axioms are expected to generate spacetime. However, Gödel's incompleteness theorems, Tarski's undefinability theorem, and Chaitin's information-theoretic incompleteness establish intrinsic limits on any such algorithmic program. Together, these results imply that a wholly algorithmic "Theory of Everything" is impossible: certain facets of reality will remain computationally undecidable and can be accessed only through non-algorithmic understanding. We formalize this by constructing a "Meta-Theory of Everything" grounded in non-algorithmic understanding, showing how it can account for undecidable phenomena and demonstrating that the breakdown of computational descriptions of nature does not entail a breakdown of science. Because any putative simulation of the universe would itself be algorithmic, this framework also implies that the universe cannot be a simulation.

Source: https://jhap.du.ac.ir/article_488.html

Comment: Judging the seriousness of this paper is beyond my capabilities. My first thought, therefore, was a philosophical one: how can the simulation know that it is no simulation? It appeared to me that the authors tripped over their own self-reference with Gödel and Tarski.
 
Interesting topic, I will look into this and comment back. My first hunch is that the conclusion is plausible, but I'm curious about their arguments... will read and get back.

/Fredrik
 
fresh_42 said:
TL;DR Summary: The authors invoke Gödel and Tarski and investigate the algorithmic nature of the universe. A philosophical play with words far from reality, or a serious contribution to the possibility of a holographic universe?

how can the simulation know that it is no simulation?
Is that a variation of Turing's halting problem?

Is that what the authors are attempting to discuss, i.e. that the universe cannot be run on a finite Turing machine, since some mathematical algorithms are not computable?

You may be correct that their interpretation is self-referencing and perhaps not sound, in that they conclude that they have found a halting program for the universe.
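As a side note, the self-reference at the heart of the halting problem can be sketched in a few lines of Python. This is only a toy illustration of Turing's diagonal argument, with the (impossible) halting oracle stubbed out by an arbitrary fixed answer to make it runnable:

```python
def halts(program, arg):
    # Hypothetical halting oracle -- no such function can exist.
    # Stubbed with a fixed answer purely to make the sketch runnable.
    return False

def diagonal(program):
    # Turing's diagonal construction: do the opposite of the oracle's prediction.
    if halts(program, program):
        while True:  # oracle said "halts" -> loop forever
            pass
    return "halted"  # oracle said "loops" -> halt immediately

# Whatever fixed answer halts() gives, diagonal(diagonal) contradicts it:
# here the oracle predicts non-termination, yet the call returns immediately.
result = diagonal(diagonal)
```

Any other stubbed answer fails symmetrically: if halts() returned True, diagonal(diagonal) would loop forever, again contradicting the prediction. That is the sense in which no program can decide halting for all programs, including itself.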

That's about all I have to say.
I take it as fact that you know more about this than I do, so this probably does not deserve a reply, unless it is to say that I am totally out to lunch.
 
I do not see anything in their argument that actually relies on the topic being quantum gravity. They could have written a paper, "Consequences of Undecidability for the Theory of Packing a Lunchbox", with the exact same structure.
 
The paper is silly, to put it mildly. The crux of their argument is that the set of axioms is arithmetically expressive and therefore Gödel's incompleteness theorems apply. That's silly because there is no reason for a computer program running the simulation to be arithmetically expressive. The computer, of course, performs arithmetic computations, but for that purpose it does not need to prove theorems of arithmetic derived from the Peano axioms; in other words, it does not need to be arithmetically expressive. The Peano axioms are expressed in first-order logic, containing the quantifiers "for all" and "exists", while computations on a computer use only a much more primitive logic called Boolean algebra.

Sure, you can, if you want, use a computer to prove theorems expressed in first-order logic, but the point is that you don't need to do that if you only want to simulate physical systems on a computer. Essentially, in a simulation, you just run the universe for one particular initial condition during a finite time; you do not prove general theorems valid for the set of all possible initial conditions during arbitrarily long times.
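To make that last point concrete, here is a minimal sketch of what a simulation actually does (a toy harmonic oscillator of my own, not anything from the paper): pure arithmetic on one concrete initial condition for finitely many steps, with no quantifiers and no theorem proving anywhere.

```python
def simulate(x0, v0, dt=0.01, steps=1000):
    # Explicit Euler integration of a harmonic oscillator x'' = -x.
    # Every operation below is plain arithmetic on concrete numbers;
    # nothing here states or proves a theorem about ALL initial conditions.
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return x, v

x, v = simulate(1.0, 0.0)  # one particular initial condition, finite time
```

A Boolean circuit evaluating these finitely many arithmetic operations is all the "logic" such a simulation needs; arithmetic expressiveness in Gödel's sense never enters.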

I hope the paper is just a Sokal-like hoax.
 
Demystifier said:
The paper is silly, to put it mildly. The crux of their argument is that the set of axioms is arithmetically expressive and therefore Gödel's incompleteness theorems apply. [...] The Peano axioms are expressed in first-order logic, containing the quantifiers "for all" and "exists", while computations on a computer use only a much more primitive logic called Boolean algebra. [...]

I hope the paper is just a Sokal-like hoax.
The classical computer is binary. As for Boolean algebra, it seems there are several Boolean algebras; in Machover and Bell there's a full chapter on them.
 
As for a Sokal-like hoax, I don't really know. I mean, the Bogdanov brothers got their doctor diplomas for a nonsense-type of publication. So who knows?
 
I read more; I'm not sure from what conceptual background the authors come, but it seems that, via a detour, their main conclusion/insight is simply that we need more than formal deductive systems to find a theory of everything.

I personally didn't need this paper to realise this, although the association to deductive processes is a conceptually interesting analogy, as it weakly relates to the limitations of "system dynamics" already pointed out by Lee Smolin in his many papers and books on the topic of "evolution of law, reality of time", etc. But Smolin uses the term "Newtonian schema" instead of system dynamics. His precise argument here is WHY it does not work.

IMHO, what seems to prevent this when you combine QM and GR is that each of them is cast as a system dynamics model. But when they couple, the spacetime which is required to define the system dynamics in QM/QFT is itself a variable in GR. So when you couple them, it seems to be impossible to treat this as one simulation in one state space. You rather get coupled independent algorithms. Each quantum system runs its own "simulation", and at the same time the whole universe runs its own "simulation". When they couple, it seems the coupled evolution is so non-trivial that the system dynamics paradigm breaks down. What happens traces out a state space too large for a single deductive simulation. To do as is tradition, to EMBED this in a yet LARGER state space, is a temporary fix and leads to just inflating problems of fine-tuning.

In the end they conclude

"The arguments presented here suggest that neither ‘its’ nor ‘bits’ may be sufficient to describe reality. Rather, a deeper description, expressed not in terms of information but in terms of non-algorithmic understanding, is required for a complete and consistent theory of everything."

The interesting part, where I hoped for more, is: what "non-algorithmic understanding" do they suggest? I interpret this as "non-deductive understanding", and the obvious generalization is general inference, of which deductive logic is only a special case.

They conclude this special case is insufficient (which I think is no news), but what do we do instead?
I'm not sure if they come up with any new ideas; I didn't spot any.

/Fredrik