## Impact of Gödel's incompleteness theorems on a TOE

 Quote by D H Fredrik, you are assuming the universe has to be computable.
I don't understand how you went from computable expectations (which is what I mean) to a computable universe. Can you explain?

What I admittedly assume - on the grounds that it's the most rational assumption I can see, and hence a justified working hypothesis - is that a generic observer, about to form an action, does so by a form of computation based on available evidence. Anything else simply makes my brain throw in the towel. That doesn't make it right, though. But I have good confidence in this view.

But "computable universe" - I'm not even sure exactly what you mean. Can you explain? Then I can see if I agree.

/Fredrik

 Quote by S.Daedalus Not sure if I understand this right -- I mean, basically, you'd want your theory to be computable because you want to compute things with it, no? If you start with a system in state x and your theory tells you that its evolution is given by f(x) where f is not a computable function, what are you gonna do? Ask some old hag with a crystal ball? Or did you mean something else?
Yes. I meant: why should one consider physical processes as the effect of computations of some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.

 Quote by D H Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
I'm not so sure. I mean, if we take the simple statement that if x is the configuration now, and f(x) is the configuration at some later time, then we can "compute" it simply by setting up the configuration x and then measuring it at the later time. In effect, the universe must have some means of "computing" the later configuration.

So it's obviously true that if we take "computability" to simply mean, "it is possible to construct a system that computes the result," then the universe must necessarily be computable in this sense. However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.

Now, this runs into obvious difficulties in that computability is generally defined in terms of a very specific sort of computer, such as a Turing machine, but I have a strong suspicion that things like the uncountability of the real numbers will mess up any definition of computability.
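Chalnoth's suspicion about the reals can be backed by a standard counting argument (my addition, not part of the original post): there are only countably many programs, but uncountably many real numbers.

```latex
% Programs are finite strings over a finite alphabet \Sigma,
% so the set of all programs is countable:
|\Sigma^{*}| = \aleph_{0}
% Each computable real is named by at least one program that
% approximates it to any desired precision, so the computable
% reals form a countable set, whereas Cantor's diagonal
% argument gives
|\mathbb{R}| = 2^{\aleph_{0}} > \aleph_{0}
% Hence all but countably many real numbers are not computable
% by any Turing machine.
```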

 Quote by suprised Yes. I meant: why should one consider physical processes as the effect of computations of some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.
However, I think he's right in stressing the notion of computational equivalence. Basically, anything I can describe by a computable theory -- or equivalently, anything I can construct an arbitrarily exact computer simulation of -- is equivalent to whatever one might have in mind in a narrower sense when one speaks about a 'computer'. This is the same equivalence as between Turing machines, lambda calculus, partial recursive functions etc., at least in spirit: whatever one can compute, the other can, too.
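As a toy illustration of this equivalence in spirit -- entirely my own example, not from the thread -- here is the same arithmetic expressed once in an imperative, Turing-machine-like style and once in lambda-calculus style via Church numerals:

```python
# 1. Ordinary imperative style: compute m + n step by step.
def add_iterative(m, n):
    total = m
    for _ in range(n):
        total += 1
    return total

# 2. Church encoding (lambda-calculus style): the numeral n is the
#    higher-order function that applies f to x exactly n times.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def church_add(m, n):
    # m + n: apply f "n times", then "m more times".
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(c):
    # Decode a Church numeral back to an ordinary integer.
    return c(lambda k: k + 1)(0)
```

The point is only that the two formalisms compute the same function; which one you pick is a matter of convenience, not power.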

Or, perhaps from another perspective, if we can build computers in reality, and can in turn model reality on computers, then in that sense reality (or the universe or whatever enveloping term one might prefer) is equivalent to -- or simply is, for short -- a computer.

This is conjectural and unprovable in the same sense as the Church-Turing thesis: there might be a framework of computation strictly more powerful than Turing machines et al, and similarly, it might be the case that reality can't be completely encompassed by some von Neumann architecture computer -- but I'd argue that it's the more parsimonious assumption to expect this not to be the case, and after all, I have a hard time seeing how one would describe a non-computable universe in an intelligible way; I strongly suspect our brains, in the end, to be describable in terms not too different from classical computation, and certainly, everything we can write down falls well within that paradigm -- so if nothing else, should the universe turn out not to be computable, we'd have to rethink the notion of communicating ideas via papers in journals.

 Quote by Chalnoth However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.
I agree with this. I don't think that Turing's computability model is necessarily the one that is most useful to us here (from the point of view of physicists trying to understand interactions and evolution in terms of some kind of computation).

The two most obvious objections are the notion of unbounded tapes and possibly infinite processing time. (*)

 Quote by Surprised Yes. I meant: why should one consider physical processes as the effect of computations of some kind of computer, reduce everything to information processing.
What I have in mind is that the observer is itself the "computer", and that computing is IMO mainly analogous to "rational action". So to understand how a system evolves is pretty much the same as understanding how and why it makes a given computation. The question is how to describe this computation. I never suggested that a Turing machine right out of the book is the right abstraction. I doubt it.

As to why this is the right path, I suppose this boils down to confidence built up from subjective experience and intuition. I feel pretty convinced, but as someone said in an earlier thread, this is all poetry until someone can make it fly. And I agree. But this applies to any other research program too, like strings.

I also see parallels to the entropic research programs since, the way I see it, entropic gradients are what drive each computer, but I don't think the computations are deductive; they are inductive and still contain some randomness.

(*) I see an analogy between unitary evolution as a SELF-computation, where the computer somehow predicts its own output and evolves, and each measurement "resets" the input and possibly, in some cases, also evolves the computer hardware. And here the computing is moot, as the computer hardware and input are possibly perturbed; therefore the result is what's in the register at that time, so what is relevant is how much processing is done until the next input arrives. So I think we see evolution of algorithms and computers so as to adapt to a balance between speed and accuracy. Sometimes a quick approximate prediction may be far more valuable to a player than a more accurate answer that simply arrives too late. Problems like this are what guide me in the search for the new "computability model" that might work well for physics abstractions. So probably new computability models may need to be developed as well. I see no reason we need to settle for Turing machines.

/Fredrik

 Quote by D H There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
The difference becomes smaller if we do not have a computer with sufficient memory and computing power that can actually solve that equation without just getting a mess out of rounding errors, which often happens given sufficient complexity.

Then, it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.

I think we agree that what we discuss here - theories - have the purpose of producing predictions. If we have a nifty gigantic equation, where there is no way to compute the predictions in an accurate and unambiguous way with available resources, then we simply don't have a prediction.
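Fra's point about rounding errors can be sketched with a toy example (my own construction; the chaotic logistic map here just stands in for some complicated equation): iterate a simple deterministic rule on two "machines", one at full double precision and one that rounds each intermediate result, and the answers soon bear no resemblance to each other.

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map, a stand-in for a hard equation."""
    return r * x * (1.0 - x)

def iterate(x0, steps, digits=None):
    """Iterate the map, optionally rounding to `digits` decimal places
    after every step to mimic a computer with limited precision."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
        if digits is not None:
            x = round(x, digits)
    return x

exact = iterate(0.3, 50)             # full double precision
coarse = iterate(0.3, 50, digits=6)  # a 'cheaper' machine
# After 50 steps the tiny per-step rounding has been amplified so much
# that the two trajectories disagree completely.
```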

This is why I'd even claim that the TOE will be computer dependent. There is no TOE that is compatible with and executable on an arbitrary computer. Here I mean computable in a timely manner! No one has use for a computer that makes the computation in infinite time, as the player is dead before he has even produced a reaction.

/Fredrik
 Some responses so far have focused on people possibly being wrong about what Turing computable means, but I'm not talking about Turing machines. I'm LOOKING for the right computability abstraction, and I just describe, from understanding and personal intuition, what I think the traits must be - IF we accept this computing-thinking. I know Turing machines have unbounded tapes etc.; this is why I'm not talking about Turing machines. /Fredrik

 Quote by D H Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
This is what I tried to express. The problem arises in complex dynamical systems, e.g. while the dynamics might be governed by simple laws, say non-linear differential equations, there is often no way to predict what the result will look like, because initial conditions can never be known to arbitrary precision, while infinitesimal variations can influence the result macroscopically. Intrinsic thermal or quantum mechanical fluctuations alone can flip a system into another phase, etc.

E.g., one can imagine building a trigger for an atomic bomb which goes off or not depending on the properties (time, direction) of the radioactive decay products of a given single atom. Is there a way to compute with any sort of machine whether the city still exists tomorrow or not? I don't think so.

Similarly, many things we see in nature are just frozen accidents of history, and there is no particular reason for them to be like that rather than a bit (or very) different. I guess the concept of computability makes no real sense here.
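The point about initial conditions can be illustrated with a standard toy example (my own, not from the post): under a chaotic rule, two starting values that agree to ten decimal places quickly end up macroscopically different, so no finite measurement of the initial state pins down the outcome.

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the chaotic logistic map from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = trajectory(0.2, 40)          # reference initial condition
b = trajectory(0.2 + 1e-10, 40)  # same to ten decimal places
# Early on the two trajectories are indistinguishable; by step 40 the
# 1e-10 disagreement has been amplified to a macroscopic difference.
```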

 Quote by Fra Then, it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.
This is precisely what I doubt for the reasons explained above.
 Thanks Surprised, we're getting closer. You use almost my arguments for a different conclusion :) I think I see now: the difference is that we simply have quite different ideas of what a theory is and what the purpose of a theory is, which is the core of my argument. About chaotic dynamical systems you're right, I fully agree, but your conclusion is different from mine. This, however, is merely an argument against reductionism. I'll type more later and try to explain what I mean. /Fredrik
 Mentor I personally don't put much stock in digital physics. Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle? Saying that the universe is computable (a la computable functions) isn't parsimony or reductionism. It is a deus ex machina.

 Quote by D H Where is the computer that computes the interaction between a photon and an electron or that determines when a gold-198 nucleus emits a beta particle?
That's a relevant question, I agree. But to defend the idea without having a definite answer, I'd say the answer depends on the observer: it's the observer that is the computer (abstractly), and the computer itself evolves.

The most interesting perspective is when you consider the observer to be part of the interactions, and here I think that matter, and the microstructure of matter, IS the computer, if you draw the analogy like this:

input -> computation -> output
perturbation -> internal reaction -> reaction

But both the computation algorithm and the perturbed system evolve and learn. So the computer and its algorithm "improve" the more they interact. Equilibrium conditions simply mean that input is consistent with output, and that the algorithm is stable.
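A deliberately toy sketch (entirely my own construction, not Fra's actual model) of the perturbation -> internal reaction -> reaction loop: an "observer" whose internal estimate updates on each perturbation, and whose outward reaction dies away as its algorithm stabilises, which is the equilibrium case described above.

```python
class Observer:
    """Toy observer: its internal estimate is its 'algorithm'."""

    def __init__(self):
        self.estimate = 0.0  # internal state / expectation of the input
        self.rate = 0.5      # how strongly a perturbation updates it

    def interact(self, perturbation):
        error = perturbation - self.estimate  # input vs expectation
        self.estimate += self.rate * error    # internal reaction
        return error                          # outward reaction

obs = Observer()
reactions = [abs(obs.interact(1.0)) for _ in range(20)]
# Under a steady input, the reaction shrinks toward zero: input becomes
# consistent with output and the internal algorithm stops changing.
```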

I think that if we can classify such algorithms corresponding to steady states, we may (this is my conjecture) find a one-to-one mapping between the actions of such "computing players" and the actions of abundant systems in nature, such as elementary particles, atoms etc.

So the logic is, I think, clear. The question is: why would anyone have confidence in this? Are there good reasons to think that this will be fruitful? I think so, but that's just me.

/Fredrik

 Quote by Fra perturbation -> internal reaction -> reaction
The predictive value of this idea is that I expect that respecting computational and complexity bounds will produce constraints on the set of possible action functions. And this will have interesting scaling properties with complexity (memory size), which I associate closely with the energy scale, converging to something possibly unique as complexity -> zero.

/Fredrik
 Mentor Moderation comment: I had an OCD moment and got sick of seeing "Godel's incompletenss theorem" misspelled in the title and hence in every response. I changed the title of the thread to make the spelling correct and to distinguish this thread from the other ongoing thread on Gödel's theorems from a purely mathematical perspective. However, I did not have a CDO moment. I feel no compulsion to change the "Re: Godel's incompletenss theorem" in the title bars of every post in this thread. (CDO = OCD taken to the extent that the letters have to be put in proper alphabetical order.) I hope the change in the title isn't too confusing here.

 Quote by Hurkyl There is a class Preline which is a subclass of Point x Point of elements satisfying:

Preline(x,y) = x != y

and the class Line is the quotient of Preline by the relation:

If (x,y) and (u,v) are in the class Preline, $(x,y) =_{Line} (u,v)$ iff Collinear(x,y,u) and Collinear(x,y,v)

and the incidence relation "lies on" on Point x Line given by:

If (x,y) is in the class Line and z is in the class Point, z lies on (x,y) iff Collinear(x,y,z)

(and this is a well-defined relation, because it respects $=_{Line}$). No set theory involved, just abstraction.
Tarski's system doesn't contain quantification over classes or types, doesn't contain quantification over the cartesian product of classes or types, and doesn't contain an abstraction operator, and the existence of the things you mention in your definition does not follow as a matter of first-order logic from Tarski's axioms.

The existence of lines, planes and various regions is not a first-order consequence of Tarski's axioms. When you add enough further existence axioms (first order, if you wish, in terms of schemas) to give such regions, number theory *is* embeddable in the system and the Gödel construction goes through.
