
I Proving The Church-Turing-Deutsch Principle

  1. Nov 23, 2016 #1



    How does one prove the Church-Turing-Deutsch Principle?

    The principle states that a universal computing device can simulate every physical process.

    Not sure where to post this, as it seems like a pertinent question for proving the simulated-universe thesis right or wrong, as well as a possible justification in support of Everettian QM. I also don't know how to reconcile the Church-Turing-Deutsch Principle with Gödel's incompleteness theorems.

    Any input alleviating my confusion would be greatly appreciated.
  3. Nov 23, 2016 #2
    One doesn't prove it. That's usually true of a "principle", it's assumed and used to derive actual physics. If those implications are always correct, after a while the principle becomes accepted, but never proved. The so-called "Church-Turing-Deutsch Principle" could possibly be "proved" someday, not now.

    However, equivalence results supporting the Church-Turing thesis have been proved for various classes of algorithms. What Deutsch did was to suppose that physics would fit into those classes. To the extent that turns out to be true, his "principle" will turn out to be correct.

    Deutsch is mainly interested in quantum computing, and the principle is meant, at least in some versions, to apply to that. Since quantum computers don't work yet (and may never), that puts the principle even farther from satisfactory validation.

    Gödel's incompleteness theorems apply in logic, to a class of self-referential statements similar to the famous Cretan or Epimenides ("I am lying") paradox. No example of such a statement has come up in physics.

    BTW "the simulated universe thesis" is not allowed here on PF. Yes, I know it's incredibly ... complain to the management. I don't make the rules, I just follow them.
  4. Nov 24, 2016 #3


    Science Advisor

    If physical laws are based on real numbers (or some other continuous number system), isn't it obvious that a universal computing device, based on integer (or countable) number system, cannot simulate nature exactly?
  5. Nov 24, 2016 #4
    Depends on what you mean by "exactly". I see no reason why it couldn't be simulated to arbitrary precision.
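    To make that concrete, here is a minimal sketch (Python, standard library only; the function name and digit counts are my own illustrative choices) of how a machine restricted to countable, finite-string arithmetic can pin down a continuous quantity to any requested precision:

    ```python
    from decimal import Decimal, getcontext

    def sqrt2(digits):
        """Approximate sqrt(2) to `digits` decimal digits with Newton's
        iteration on x^2 = 2, using only finite-string (countable)
        arithmetic. Precision is a parameter, not a limit."""
        getcontext().prec = digits + 5             # a few guard digits
        x = Decimal(2)
        for _ in range(digits.bit_length() + 10):  # Newton roughly doubles
            x = (x + Decimal(2) / x) / 2           # the correct digits per pass
        getcontext().prec = digits
        return +x                                  # round to requested precision

    # Any number of digits on demand, each produced in finite time.
    approx = sqrt2(30)
    ```

    The point of the sketch: no single run produces the real number exactly, but there is no precision a digital machine cannot reach on request.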
  6. Nov 24, 2016 #5



    You can see the issue with the contradiction there.
    This is an issue because, given any sufficiently sophisticated universal computing device, there will be "truths", or what can be called manifest physical laws (expressed through mathematics, e.g. in Hilbert space), that cannot be proven to be true. This essentially puts a thorn, via Gödel's incompleteness theorems, into the validity of the Church-Turing-Deutsch Principle. I don't know if you see the link yet, or if I haven't made the causal connection sufficiently clear. Taken as far as possible, this conundrum extends to ANY physical law: we can never be certain of it being true in all circumstances. Even in a deterministic universe via Everettian QM, we could have a computer that will never be able to tell us whether every Entscheidungsproblem can be resolved in a deterministic manner.
    Last edited: Nov 24, 2016
  7. Nov 24, 2016 #6


    Science Advisor

    Perhaps the discussion is most fruitful when confined to classical physics. And instead of the incompleteness theorem, perhaps one may simplify by thinking about the halting problem.

    Take a look at http://michaelnielsen.org/blog/interesting-problems-the-church-turing-deutsch-principle/

    "We’ve stated the CTD Principle in a form which is independent of any particular physical theory. This means that given any physical theory we can ask whether or not the CTD Principle is true in that particular theory? That is, within the scope of that theory is it possible to construct a universal computing device that can simulate an arbitrary physical system, also within that theory?

    For Newtonian physics, the answer seems to be “yes”. In 1982 Fredkin and Toffoli put forward a “billiard ball” model of computation, in which computations can be performed using colliding billiard balls – that is, using just the principles of Newtonian physics. Fredkin and Toffoli showed that such a model could simulate conventional computers like those Turing had proposed.

    In turn, it is widely believed that Turing machines are sufficient to simulate all of Newtonian physics. Some detailed investigations of this question have been done by Pour-El and Richards; with some caveats, my understanding is that their work shows that Turing machines can simulate all of Newtonian physics. (I should say that my understanding of their work is rather superficial, and I have not worked through all the relevant results in detail.)"
  8. Nov 24, 2016 #7
    I agree with @Heinera. But it's worth mentioning that that's not how David Deutsch would respond. He proposes that a quantum computer is capable of providing "infinite" resolution - if, indeed, that's necessary in physics - because it performs computations in all the many worlds at once. Take it for what it's worth.

    The second "proved" meant "accepted" as defined in the previous sentence, so it's in quotes. Yes, it's prima facie contradictory; that's exactly why I thought the intended meaning would be discernible.

    Gödel's theorem says that in any sufficiently complex logical system (which includes arithmetic), it will be possible to write an undecidable sentence. But it doesn't say that any application of it, no matter how sophisticated, must produce such a sentence. It's certainly possible that a complete description of physical laws would avoid such sentences.

    No physical law or theory, at all, is decidable or provable, as Popper and common sense show. Only accomplished (i.e. past) experimental results are thoroughly "true" in physics. This "conundrum extends all the way to any physical law" regardless of Gödel.

    There's no real point to "putting a thorn into the validity" of CTD since it has no particular validity at the moment. It's just a guess. It seems you're looking for a logical contradiction in a principle which can not, yet, even be considered logical.

    There's no reason to expect a simulation of physics to engender, much less solve, every decision problem.

    My overall point: Physics and formal logic are apples and oranges which you're trying to mix. Undoubtedly, I could be wrong.
  9. Nov 24, 2016 #8



    So, here is what I am getting at, in terms of what one can ask.

    My main point is that how can we know for certain that the MWI is actual/real/valid/.../true if the only practical means of verifying it is via trying to simulate the laws of the universe via the Church-Turing-Deutsch Principle, which itself can't be known to be true?

    What's even more damning is that Gödel showed that even if the Church-Turing-Deutsch Principle is somehow true, and even if one were to create such a sophisticated logical Turing machine, we would still not be able to know whether halting problems (physical phenomena taking place within such a machine) are deterministic (via computational means) or not.

    It's kinda like standing on a rug and pulling at it at the same time.
  10. Nov 24, 2016 #9


    Science Advisor

    MWI is not strictly related to the Church-Turing-Deutsch Principle. As you can see in Nielsen's discussion, CTD can be discussed in the context of classical mechanics.

    Also, as far as we know, there is nothing that a quantum computer can do that a classical computer cannot do - the only difference is that a quantum computer can do some things faster.
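    To illustrate that point, here is a toy sketch (all names are my own, not from any quantum computing library) of a classical program simulating a single qubit exactly with complex amplitudes. The catch, and the only catch, is cost: an n-qubit state vector has 2^n amplitudes, so the classical simulation gets slow, not impossible:

    ```python
    import math

    # A 1-qubit state is a pair of complex amplitudes; probabilities are
    # the squared magnitudes. The Hadamard gate as a 2x2 matrix:
    H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]

    def apply_1q(gate, state):
        """Apply a single-qubit gate (2x2 matrix) to a state vector."""
        a, b = state
        return [gate[0][0] * a + gate[0][1] * b,
                gate[1][0] * a + gate[1][1] * b]

    state = [1 + 0j, 0j]          # |0>
    state = apply_1q(H, state)    # equal superposition of |0> and |1>
    probs_mid = [abs(z) ** 2 for z in state]
    state = apply_1q(H, state)    # H is its own inverse: back to |0>
    probs_end = [abs(z) ** 2 for z in state]
    ```

    After one Hadamard the measurement probabilities are 50/50; after two, the state interferes back to |0> with certainty, exactly as a real qubit would.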
  11. Nov 24, 2016 #10



    I'm going to try to simplify my question to a more basic one: do Gödel's incompleteness theorems negate the possibility of constructing a universal Turing machine capable of computing all known physical laws? Or am I running in circles in trying to state that all physical laws can be proven to be true (computable or replicable?)? I mean, the act of a computer being able to simulate them would be sufficient evidence, despite not being able to verify them from within such a system.

    If anyone wants to take a stab at it here, then by all means.
  12. Nov 24, 2016 #11
    Remember we can never "know for certain" that MWI, or any other physical theory, is True. Tomorrow water might boil at 101 degrees C, invalidating thermodynamics and chemistry (I don't expect it, though :-)). If you haven't read Popper, perhaps you should; but the idea is just common sense. Physical theories are in danger of being falsified every time we do another experiment. They can never be accepted as "True" with a capital T. So we're never going to find an absolute logical proof of MWI, and that's so with or without Gödel. However, it may be made so plausible that everyone accepts it.

    BTW, full disclosure: I don't "believe" MWI, but admit it's possible. Indeed, I don't think it can be disproven any more than it can be proven.

    Now, Deutsch has made these two statements:

    1 - the CTD conjecture: a computer (quantum, in his view) is capable of simulating exactly all physical processes.

    2 - If / when Quantum Computers actually work, that will "prove" MWI. His reasoning: in a sophisticated QC algorithm, essentially infinite computing power is involved. "Where" is all that computing taking place? Answer: in those infinitely many worlds.

    These two statements are related and compatible but each can stand alone. When you say CTD is a "practical means of verifying" MWI I suspect you're conflating them.

    Statement 2 is reasonable. If QC's work as well as some hope, their capabilities will be so amazing that it will make MWI seem sensible. But there will always be alternative explanations.

    Statement 1 also is not unreasonable. It's clearly somewhat true: we can definitely simulate most (if not all) physical processes. And it can be done with regular digital computers, although quantum computers will do a much better job in many cases (if they ever work). But - to reiterate - it can never be proven "absolutely".

    Bottom line, I think your difficulty originates in looking for absolute proof where it is not, and never will be, available.
  13. Nov 24, 2016 #12



    Well, I've always been more of a logical positivist despite being familiar with Popper and his doctrine of fallibility.

    I contest the notion that logic and physics are two mutually independent things. I mean, the Church-Turing-Deutsch Principle hinges on these logical machines being able to replicate physical laws. If a physical law can't be replicated, then it will be by Gödel's hand, hehe.
  14. Nov 24, 2016 #13


    Science Advisor

    Well, how about a simpler question - instead of the incompleteness theorem, why not consider the halting theorem (which is related to the incompleteness theorem). Does the halting theorem prevent the construction of a Turing machine that simulates a world governed by classical mechanics?
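    One way to see the asymmetry: "halts within N steps" is easily computable, while "halts at all" is not. A toy sketch, modelling a program as a Python generator (the helper names are invented for illustration):

    ```python
    def halts_within(program, budget):
        """Run `program` (a generator function) for at most `budget` steps.
        Returns ('halted', n) if it finished after n steps, else ('gave up',).
        'Halts within budget steps' is computable; 'halts at all' - the
        real halting problem - is not."""
        it = program()
        for n in range(budget):
            try:
                next(it)                # give the program one more step
            except StopIteration:
                return ('halted', n)
        return ('gave up',)

    def quick():                        # halts after 3 steps
        yield; yield; yield

    def slow():                         # halts, but only after a million steps
        for _ in range(10**6):
            yield

    def forever():                      # never halts
        while True:
            yield
    ```

    Note that `halts_within(slow, 100)` reports "gave up" even though `slow` does halt eventually: a step budget gives a heuristic answer, never a decision of the halting problem itself.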
  15. Nov 24, 2016 #14
    The halting theorem is definitely more relevant, since it's basically Gödel's theorem for computer programs, and that's what we're talking about. It says that for reasonably general and complex programs, which a physics simulator would be, you can't always guarantee it will halt, having reached a solution.

    The only way a non-halting program can ever happen is some form of infinite loop. That happens with any (sufficiently complex) program every now and then. Windows freezes and you have to reboot: that means an infinite loop popping up somewhere.

    We've developed ways to handle the problem. In modern multi-tasking a process gets a time slice, 5 or 10 ms; then it's interrupted and control given to next in the scheduling queue (of course priorities are involved). That simple approach could cure the halting problem. First, notice that the scheduler is simple enough, with a very targeted mission, to be made bullet-proof. IOW it ought to always work. Now, each process could be allotted a "maximum time". If it exceeds that, it's removed. In this way we ought to be able to guarantee no infinite loops, no matter how vulnerable an individual process might be.
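    The time-slice-plus-eviction idea above can be sketched as a toy cooperative scheduler (generator-based; all names are illustrative, not a real OS API):

    ```python
    from collections import deque

    def schedule(processes, slice_steps=5, max_total=20):
        """Toy cooperative round-robin scheduler. Each process is a generator
        function; a process gets `slice_steps` steps per turn, and any
        process exceeding `max_total` steps overall is evicted - the
        'maximum time' rule that keeps a runaway (non-halting) process
        from hanging the machine."""
        queue = deque((name, gen(), 0) for name, gen in processes)
        finished, evicted = [], []
        while queue:
            name, it, used = queue.popleft()
            still_running = True
            for _ in range(slice_steps):
                try:
                    next(it)                 # give the process one step
                except StopIteration:        # process halted on its own
                    finished.append(name)
                    still_running = False
                    break
                used += 1
                if used >= max_total:        # over budget: evict it
                    evicted.append(name)
                    still_running = False
                    break
            if still_running:
                queue.append((name, it, used))   # back of the queue
        return finished, evicted

    def short():                 # a process that halts after 3 steps
        for _ in range(3):
            yield

    def endless():               # a process stuck in an infinite loop
        while True:
            yield

    finished, evicted = schedule([("short", short), ("endless", endless)])
    ```

    The well-behaved process finishes; the infinite loop is evicted when it exhausts its budget, and the scheduler itself always terminates.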

    In terms of a physics simulator, infinite, or at least very long, loops are very much expected. For instance, suppose you want to simulate, at a basic level, the gas in a container, to compute the pressure on the walls (or whatever). The obvious way is to loop over every molecule. Update the position by a tiny time step (a picosecond or so) and see if it hits the wall. Add the resulting impact's contribution to your accumulating sum. Get to the end of the loop, and that sum is your calculated momentum transfer (pressure). How long will the loop take? We could easily construct an example where, on the super-est supercomputer available today, it would take a billion years. That's too long, even if it's not infinite. So our physics simulator is going to require a clever way to approximate the final result, which is called "mathematics". E.g. the Maxwell-Boltzmann distribution, etc.
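    The molecule-by-molecule loop described above might look like this crude 1-D sketch (all parameters are arbitrary toy choices; the kinetic-theory line at the end is the "mathematics" shortcut that replaces the loop):

    ```python
    import random

    # N point molecules in a 1-D box of length L; each right-wall hit
    # transfers momentum 2*m*|v|. "Pressure" (force on the wall in 1-D)
    # is the accumulated impulse divided by the elapsed time.
    random.seed(0)
    N, L, m = 200, 1.0, 1.0
    dt, steps = 1e-3, 4000

    x = [random.uniform(0, L) for _ in range(N)]
    v = [random.gauss(0, 1.0) for _ in range(N)]
    impulse = 0.0

    for _ in range(steps):             # the "too-long loop": O(N * steps)
        for i in range(N):
            x[i] += v[i] * dt
            if x[i] >= L:              # right wall: reflect, record impulse
                x[i] = 2 * L - x[i]
                impulse += 2 * m * abs(v[i])
                v[i] = -v[i]
            elif x[i] <= 0:            # left wall: reflect only
                x[i] = -x[i]
                v[i] = -v[i]

    pressure = impulse / (steps * dt)  # time-averaged force on the right wall
    # Kinetic theory predicts P = N*m*<v^2>/L - no loop required.
    expected = m * sum(vi * vi for vi in v) / L
    ```

    The brute-force average agrees with the kinetic-theory formula up to statistical noise, which is exactly the point: the closed-form result replaces an arbitrarily long loop.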

    Essentially this is the reason why math is "the language of physics" - because they didn't have computers back then. If they had, math would be nowhere near as important. You can view the whole point of math (especially integration) in physics as, precisely, the solution to loops that are too long to simulate.

    So my point is that any comprehensive physics simulator will have to constantly be able to detect, abort, and provide an approximate answer for, the inevitable too-long loops that will arise. This mechanism will automatically put a halt to the halting problem, as well.

    Note: you've never heard this idea before, so the natural reaction is to simply ignore it. This note is included to make you think twice.
  16. Nov 25, 2016 #15


    Science Advisor

    Is there some reference where Deutsch claims that? I would say that a quantum computer based on qubits (or any other countable basis) can do exactly the same things a classical computer can. Yes, the parallelism due to "many worlds" can speed up some calculations, but as long as the number of "worlds" is also countable, I don't see how quantum computers could transcend the countability barrier.
  17. Nov 25, 2016 #16
    I believe it's a fair statement of his view. I put "infinite" in quotes to soften it a bit. Here's one quote I found on Scott Aaronson's blog:

    Deutsch from 1997 book “The Fabric of Reality”:

    Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

    He was more explicit in "The Beginning of Infinity" where he argues specifically that infinity exists as such. I'm quite sure he says we'll be able to simulate physical processes exactly, but can't find a quote.

    That's not enough to justify my paraphrase of his views, I admit. I just spent 15 minutes wading through a lot of pop-sci about MWI, QC, and how bold a visionary genius DD is, looking for a good quote. Can't take any more; would rather be branded a mis-paraphraser, or water-boarded.
  18. Nov 25, 2016 #17


    Gold Member

    Anyone who has done numerical integration of symplectic processes has found the rather obvious rule that energy conservation only becomes exact in the limit as the time step ##\delta t \rightarrow 0##. This is not possible in a finite time. If there is a 'minimum time step' in the universe, then energy conservation is never exact.
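    For anyone who hasn't tried it, here is a minimal leapfrog (velocity Verlet) sketch for a unit harmonic oscillator showing the energy error shrinking as the step shrinks (the step sizes are arbitrary choices):

    ```python
    def leapfrog_energy_error(dt, t_end=10.0):
        """Integrate a unit harmonic oscillator (m = k = 1) with the
        symplectic leapfrog / velocity-Verlet scheme; return the worst
        relative energy error seen along the trajectory."""
        x, v = 1.0, 0.0
        e0 = 0.5 * (v * v + x * x)      # initial energy
        worst = 0.0
        for _ in range(int(t_end / dt)):
            v += 0.5 * dt * (-x)        # half kick (force = -x)
            x += dt * v                 # drift
            v += 0.5 * dt * (-x)        # half kick
            e = 0.5 * (v * v + x * x)
            worst = max(worst, abs(e - e0) / e0)
        return worst

    err_coarse = leapfrog_energy_error(0.1)    # larger step, larger error
    err_fine = leapfrog_energy_error(0.001)    # error shrinks like dt^2
    ```

    The error stays bounded (the symplectic property) but is never zero at finite ##\delta t##; it only vanishes in the limit.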

    The universe is more like an analogue computer with faster than light internal communication.
    Last edited: Nov 25, 2016
  19. Nov 25, 2016 #18



    I have a small question.

    If all the laws of physics can be computed, doesn't that presuppose that logic is at least as fundamental as physics, or that physics relies on the laws of logic?

    I always had a problem with understanding the relative standing of physics, mathematics, and logic, and which of them follows from the rest. I am a Platonist, for that matter, and subscribe to Tegmark's views on the multiverse.
  20. Nov 25, 2016 #19
    As far as anyone knows there is no minimum time step, as such, in the universe, but there is a minimum action step, Planck's constant. So if the energy of an object is constant, in a given scenario, its "minimum time step" would be t = h / E. That's naïve; there are a number of details to consider, but perhaps it can be made a rigorous statement. In a subatomic simulation, one could probably adjust the time step according to this idea so as to get exact answers without letting it go to 0. In fact, if it worked, a smaller time step would no doubt give a wrong answer.
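    Putting rough numbers into t = h / E, taking an electron's rest energy as an illustrative E (a heuristic back-of-envelope, not an established result):

    ```python
    # 'Natural time step' t = h / E for an electron at rest, E = m_e * c^2.
    # Constants are standard CODATA/SI values.
    h = 6.62607015e-34        # Planck constant, J*s (exact in SI since 2019)
    m_e = 9.1093837015e-31    # electron mass, kg
    c = 2.99792458e8          # speed of light, m/s (exact)

    E = m_e * c ** 2          # rest energy, about 8.19e-14 J
    t_step = h / E            # about 8.09e-21 s
    ```

    So on this heuristic the "step" is around 10^-20 s for an electron, far coarser than the Planck time, and it depends on the energy of the system rather than being universal.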

    From a practical, working-physicist point of view, logic comes first. That is, physics assumes logic and math as a necessary substrate.

    However from a philosophical point of view it's not that simple. The basic counter-argument goes: evolution provided us with logic/math ability; evolution is driven by the real world, i.e. physics; therefore physics comes first.

    The question is a can of worms, on this "deep" level. Fun to think about but no final answer is possible.
  21. Nov 25, 2016 #20
    That assumes that evolutionary theory somehow pre-empts or explains logic, philosophy and rational thought. Which means that all of these are then rationalised in terms of evolutionary advantage, like peacock's tails, or claws or tentacles. Which, in turn, is no more than an illustration of the fundamental inadequacy of 'biologism'. It's really not a 'philosophical point of view' at all, it is more the point of view that declares, with Dennett, philosophy is dissolved in 'Darwin's dangerous idea'.