Impact of Gödel's incompleteness theorems on a TOE

  • Thread starter: PhysDrew
  • Tags: Impact, TOE
  • #151


bcrowell said:
Well, I don't want to be unnecessarily argumentative, but it's very clear from your #99 that you did equate them.
Please believe me that I don't!

bcrowell said:
Any theory of physics can be expressed in a formalism that is equiconsistent to real analysis, ...

... the Schrodinger equation can be written using complex analysis, the formalism you need for the Schrodinger equation is equiconsistent with real analysis. That means that to a logician, there is no difference between the mathematical foundations needed for quantum mechanics and Newtonian mechanics.

...

This is why my objection to tom.stoer's conflation of physical theories with axiomatic systems is not just a quibble. It's a crucial point.
Sorry; I didn't state what has to be done to convert a physical theory into an axiomatic system. There are two problems:
- the "mathematical expressions" of physics are partially ill-defined (QFT)
- a physical theory is more than a mathematical system (*)

(*) In addition you need
1) rules for how to ask questions
2) rules for how to apply the mathematics
3) rules for how to interpret the results!


(2) should be clear; let's look at (3). In QM / QFT we interpret certain solutions (eigenstates, plane waves, ...) as particles. If your formal system "produces" something like f(x) = exp(ikx) written on a tape, you still do not know what this means. That is step (3), and without step (3) the whole idea is useless: you simply don't know what your system has calculated for you. You don't understand it. In order to achieve that you need step (3), and of course the same applies to (1).
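To make step (3) concrete with a standard textbook example: the string exp(ikx) only becomes physics once we add the reading that it is an eigenfunction of the momentum operator,

$$\hat{p}\, e^{ikx} = -i\hbar\,\partial_x\, e^{ikx} = \hbar k\, e^{ikx},$$

i.e. "a system in this state is a particle with definite momentum p = ℏk". That interpretive sentence is nowhere on the tape; it is exactly the extra ingredient (3) that has to be added on top of the formal system.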
 
Last edited:
  • #152


I think it is a bit interesting to see everybody's associations with this topic, which was after all what the OP asked for.

Since the real numbers came up again, and it's true that they are the number system of choice in physics, I'll just throw out again that from the inference perspective, the place where real numbers typically enter the picture (if you make a full reconstruction) is either
- as the value space of degrees of plausibility (in the Jaynes/Cox style reconstruction of classical probability as a kind of extension of logic)
- as quantification functions on partially ordered sets (as some people work on; Knuth's papers are the last ones I read)
- or, in the measure-theoretic axiomatic perspective, where they are just axiomatically introduced.

However, as I see it, the choice of uncountable number systems is not obvious. As I see it, if you consider Jaynes' reconstruction, but add a slightly more computing-related approach that doesn't assume infinite information sinks and infinite processing times, then it seems to me that the KEY value of the number system is to "count evidence" in the inference model. Now, are real numbers really a natural choice here? It's very easy to be seduced, as I have been myself in the past, into thinking that real numbers are "more general" than integers, and that there therefore seems to be no rational objection to postulating that we assign a real number in [0,1] to each degree of plausibility, or "evidence rating", or whatever.

But the real problem starts once we realize that we then suddenly have uncountable and infinite sets of possibilities. And this makes computation problematic. It also makes COMBINING two sets of information in a rational way difficult, because how much information does a piece of the continuum really contain? Apparently the state space is uncountably infinite. So is this continuous index needed? It seems to cause problems that we don't need. How do you even make sure, unambiguously, that all possibilities add up to 1 when the set is uncountable? Sure, we can often manage by defining integration measures, but there are problems here, problems that get worse as the theory of computing expectations is developed further.
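Just to make the "counting evidence" point concrete, here is a toy sketch of my own (a coin-flipping example with made-up numbers, nothing deeper): with a finite hypothesis space the whole inference is exact, finite counting and summing; the moment the hypothesis space is the continuum [0,1], that finite sum becomes an integral over an uncountable set and the bookkeeping needs a measure.

Code:
# Toy sketch: plausibility updating over a FINITE hypothesis space by exact counting.
from fractions import Fraction

hypotheses = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]    # candidate coin biases
prior = {p: Fraction(1, len(hypotheses)) for p in hypotheses}    # uniform prior

def update(prior, heads, tails):
    # unnormalized posterior = prior * likelihood, all in exact rational arithmetic
    post = {p: prior[p] * p**heads * (1 - p)**tails for p in prior}
    total = sum(post.values())            # a finite sum -- this is the "counting"
    return {p: w / total for p, w in post.items()}

posterior = update(prior, heads=7, tails=3)
for p, w in posterior.items():
    print(f"bias {p}: plausibility {float(w):.3f}")

# With hypotheses p ranging over the whole interval [0,1] the finite sum above
# becomes an integral over an uncountable set: normalization needs a measure,
# and the "evidence counting" is no longer a finite, exact operation.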

This problem is the same for both the complex and the real numbers. So it has bearing on complex numbers, quaternions, or other higher multiples; it's about the physical basis, or lack of basis, of continuum counting.

S.Daedalus said:
If you start with a system in state x and your theory tells you that its evolution is given by f(x) where f is not a computable function, what are you going to do?

I was thinking the same.

/Fredrik
 
  • #153


Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
 
  • #154


D H said:
Fredrik, you are assuming the universe has to be computable.

I didn't understand how you went from computable expectations (which is what I mean) to a computable universe.

What I admittedly assume - on the grounds that it's the most rational assumption I can see, and hence a justified working hypothesis - is that a generic observer that is about to form an action does so by a form of computation based on the available evidence. Anything else simply makes my brain throw in the towel. That doesn't make it right, though. But I have good confidence in this view.

But I'm not even sure exactly what you mean by "computable universe". Can you explain? Then I can see if I agree.

/Fredrik
 
  • #155


S.Daedalus said:
Not sure if I understand this right -- I mean, basically, you'd want your theory to be computable because you want to compute things with it, no? If you start with a system in state x and your theory tells you that its evolution is given by f(x) where f is not a computable function, what are you going to do? Ask some old hag with a crystal ball?

Or did you mean something else?

Yes. I meant: why should one consider physical processes as the effect of computations by some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.
 
  • #156


D H said:
Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
I'm not so sure. I mean, if we take the simple statement that if x is the configuration now, and f(x) is the configuration at some later time, then we can "compute" it simply by setting up the configuration x and then measuring it at the later time. In effect, the universe must have some means of "computing" the later configuration.

So it's obviously true that if we take "computability" to simply mean, "it is possible to construct a system that computes the result," then the universe must necessarily be computable in this sense. However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.

Now, this runs into obvious difficulties in that computability is generally defined in terms of a very specific sort of computer, such as a Turing machine, but I have a strong suspicion that things like the uncountability of the real numbers will mess up any definition of computability.
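One standard observation in this direction: there are only countably many Turing machines (each is a finite table of rules), so only countably many real numbers can be computable in the Turing sense, while the reals themselves are uncountable,

$$\#\{\text{Turing machines}\} = \aleph_0 < 2^{\aleph_0} = \#\mathbb{R},$$

so almost every real number (all but a countable, measure-zero set) is not Turing computable.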
 
  • #157


suprised said:
Yes. I meant: why should one consider physical processes as the effect of computations by some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.
However, I think he's right in stressing the notion of computational equivalence. Basically, anything I can describe by a computable theory -- or equivalently, anything I can construct an arbitrarily exact computer simulation of -- is equivalent to whatever one might have in mind in a narrower sense when one speaks about a 'computer'. This is the same equivalence as between Turing machines, lambda calculus, partial recursive functions etc., at least in spirit: whatever one can compute, the other can, too.

Or, perhaps from another perspective, if we can build computers in reality, and can in turn model reality on computers, then in that sense reality (or the universe or whatever enveloping term one might prefer) is equivalent to -- or simply is, for short -- a computer.

This is conjectural and unprovable in the same sense as the Church-Turing thesis: there might be a framework of computation strictly more powerful than Turing machines et al, and similarly, it might be the case that reality can't be completely encompassed by some von Neumann architecture computer -- but I'd argue that it's the more parsimonious assumption to expect this not to be the case, and after all, I have a hard time seeing how one would describe a non-computable universe in an intelligible way; I strongly suspect our brains, in the end, to be describable in terms not too different from classical computation, and certainly, everything we can write down falls well within that paradigm -- so if nothing else, should the universe turn out not to be computable, we'd have to rethink the notion of communicating ideas via papers in journals. :wink:
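As a toy illustration of the "exact simulation = computation" point (just a sketch of my own, with an arbitrary example machine): a few lines suffice to simulate any single-tape Turing machine step by step, here one that increments a binary number; anything whose behaviour we can pin down this precisely is, in this sense, already "a computer".

Code:
# Minimal single-tape Turing machine simulator.
# rules: (state, symbol) -> (symbol_to_write, move 'L'/'R', next_state)

def run_tm(rules, tape_str, start="right", blank="_", max_steps=10_000):
    tape = {i: s for i, s in enumerate(tape_str)}   # sparse tape: position -> symbol
    pos, state = 0, start
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: binary increment. Scan right to the end of the number,
# then move left propagating the carry.
INC = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_tm(INC, "1011"))   # 11 -> prints "1100" (= 12)
print(run_tm(INC, "111"))    # 7  -> prints "1000" (= 8)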
 
  • #158


Chalnoth said:
However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.

I agree with this. I don't think that Turing's computability model is necessarily the one that is most useful to us here (from the point of view of physicists trying to understand interactions and evolution in terms of some kind of computation).

The two most obvious objections are the notions of an unbounded tape and possibly infinite processing time. (*)

Surprised said:
Yes. I meant: why should one consider physical processes as the effect of computations by some kind of computer, reduce everything to information processing.

What I have in mind is that the observer is itself "the computer", and that computing is IMO mainly analogous to "rational action". So to understand how a system evolves is pretty much the same as understanding how and why it makes a given computation. The question is how to describe this computation. I never suggested that a Turing machine right out of the book is the right abstraction. I doubt it.

As to why this is the right path, I suppose it boils down to confidence built up from subjective experience and intuition. I feel pretty convinced, but as someone said in an earlier thread, this is all poetry until someone can make it fly. And I agree. But that applies to any other research program too, like strings.

I also see parallels to the entropic research programs: the way I see it, entropic gradients are what drive each computer, but I don't think the computations are deductive; they are inductive and still contain some randomness.

(*) I see an analogy between unitary evolution and a SELF-computation, where the computer somehow predicts its own output and evolves, while each measurement "resets" the input and possibly, in some cases, also evolves the computer hardware. And here the computation is moot, as the computer hardware and the input may have been perturbed; the result is therefore whatever is in the register at that time, so what is relevant is how much processing gets done before the next input arrives. So I think we see algorithms and computers evolving so as to adapt to a balance between speed and accuracy. Sometimes a quick approximate prediction may be far more valuable to a player than a more accurate answer that simply arrives too late. Problems like this are what guide me in the search for the new "computability model" that might work well for physics abstractions. So new computability models may probably need to be developed as well. I see no reason we need to settle for Turing machines.

/Fredrik
 
  • #159


D H said:
There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.

The difference becomes smaller if we do not have a computer with sufficient memory and computing power that can actually solve that equation without just getting a mess of rounding errors, which often happens given sufficient complexity.

Then it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.
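A trivial example of what I mean by trading exactness for computability (just an illustration; dx/dt = -x is of course not a TOE): the exact solution is x(t) = x0·e^(-t), but a forward Euler scheme produces an approximation in a finite number of arithmetic steps, with an error controlled by the step size.

Code:
# Computable approximation (forward Euler) of dx/dt = -x, versus the exact solution.
import math

def euler(x0, t_end, n_steps):
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (-x)                       # one finite, explicit arithmetic step
    return x

exact = math.exp(-1.0)                       # exact x(1) for x0 = 1
for n in (10, 100, 1000):
    approx = euler(1.0, 1.0, n)
    print(n, approx, abs(approx - exact))    # the error shrinks roughly like 1/n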

I think we agree that what we discuss here, theories, has the purpose of producing predictions. If we have a nifty gigantic equation where there is no way to compute the predictions, with the available resources, in an accurate and unambiguous way, then we simply don't have a prediction.

This is why I'd even claim that the TOE will be computer dependent. There is no TOE that is compatible with, and executable on, an arbitrary computer. Here I mean computable in a timely manner! No one has use for a computer that performs the computation in infinite time, as the player is dead before he has even produced a reaction.

/Fredrik
 
  • #160


Some responses so far have focused on people possibly being wrong about what Turing computable means, but I'm not talking about Turing machines. I'm LOOKING for the right computability abstraction, and I just describe, from my own understanding and personal intuition, what I think its traits must be - IF we accept this computing-thinking.

I know Turing machines have unbounded tapes etc.; this is why I'm not talking about Turing machines.

/Fredrik
 
  • #161


D H said:
Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.

This is what I tried to express. The problem arises in complex dynamical systems, e.g.: while the dynamics might be governed by simple laws, say non-linear differential equations, there is often no way to predict what the result will look like, because initial conditions can never be known to arbitrary precision, while infinitesimal variations can influence the result macroscopically. Even intrinsic thermal or quantum mechanical fluctuations can flip a system into another phase, etc.
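A standard toy demonstration of this (the logistic map, not a realistic system, but it makes the point): two initial conditions that agree to ten decimal places become macroscopically different after a few dozen iterations, so no finite-precision knowledge of the initial state lets you compute the long-run outcome.

Code:
# Sensitive dependence on initial conditions: the logistic map x -> r*x*(1-x).
r = 4.0
x, y = 0.2, 0.2 + 1e-10           # two initial conditions, equal to 10 decimal places

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.3e}")
# After roughly 50 steps the two trajectories are completely decorrelated.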

E.g. one can imagine building a trigger for an atomic bomb which goes off or not depending on the properties (time, direction) of the radioactive decay products of a given single atom. Is there a way to compute, with any sort of machine, whether the city still exists tomorrow or not? I don't think so.

Similarly, many things we see in nature are just frozen accidents of history, and there is no particular reason for them to be like that rather than a bit (or very) different. I guess the concept of computability makes no real sense here.

Fra said:
Then it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.

This is precisely what I doubt for the reasons explained above.
 
  • #162


Thanks, suprised, we're getting closer. You use almost my own arguments for a different conclusion :) I think I see it now; the difference is that we simply have quite different ideas of what a theory is, and what the purpose of a theory is, which is the core of my argument.

About chaotic dynamical systems you're right, I fully agree, but your conclusion is different from mine. This is merely an argument against reductionism, though.

I'll type more later and try to explain what I mean.

/Fredrik
 
  • #163


I personally don't put much stock in digital physics. Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle? Saying that the universe is computable (a la computable functions) isn't parsimony or reductionism. It is a deus ex machina.
 
  • #164


D H said:
Where is the computer that computes the interaction between a photon and an electron or that determines when a gold-198 nucleus emits a beta particle?

That's a relevant question, I agree. But to defend the idea without having a definite answer, I'd say that the answer depends on the observer: it's the observer that is the computer (abstractly), and the computer itself evolves.

The most interesting perspective is when you consider the observer to be part of the interactions, and here I think that matter, and the microstructure of matter, IS the computer, if you draw the analogy like this:

input -> computation -> output
perturbation -> internal reaction -> reaction

But both the computation algorithm and the perturbed system evolve and learn. So the computer and its algorithm "improve" the more they interact. Equilibrium simply means that input is consistent with output, and that the algorithm is stable.

I think that if we can classify such algorithms corresponding to steady states, we may (this is my conjecture) find a one-to-one mapping of the actions of such "computing players" onto the actions of abundant systems in nature, such as elementary particles, atoms, etc.

So the logic is, I think, clear. The question is: why would anyone have confidence in this? Are there good reasons to think that this will be fruitful? I think so, but that's just me.

/Fredrik
 
  • #165


Fra said:
perturbation -> internal reaction -> reaction

The predictive value of this idea is that I expect that respecting computational and complexity bounds will produce constraints on the sets of possible action functions. And these will have interesting scaling properties with complexity (memory size), which I associate closely with the energy scale, and which converge to something possibly unique as complexity -> zero.

/Fredrik
 
  • #166
Moderation comment:

I had an OCD moment and got sick of seeing "Godel's incompletenss theorem" misspelled in the title and hence in every response. I changed the title of the thread to make the spelling correct and to distinguish this thread from the other ongoing thread on Gödel's theorems from a purely mathematical perspective.

However, I did not have a CDO moment. I feel no compulsion to change the "Re: Godel's incompletenss theorem" in the title bars in every post of this thread. (CDO = OCD taken to the extent that the letters have to put in proper alphabetic order.)

I hope the change in the title isn't too confusing here.
 
  • #167


Hurkyl said:
There is a class Preline which is a subclass of Point x Point of elements satisfying:
Preline(x,y) = x != y​
and the class Line is the quotient of Preline by the relation:
If (x,y) and (u,v) are in the class Preline, (x,y) =_{Line} (u,v) iff Collinear(x,y,u) and Collinear(x,y,v)​
and the incidence relation "lies on" on Point x Line given by
If (x,y) is in the class Line and z is in the class Point, z lies on (x,y) iff Collinear(x,y,z)​
(and this is a well-defined relation, because it respects =_{Line})

No set theory involved, just abstraction.

Tarski's system doesn't contain quantification over classes or types, doesn't contain quantification over the Cartesian product of classes or types, and doesn't contain an abstraction operator; and the existence of the things you mention in your definition does not follow, as a matter of first-order logic, from Tarski's axioms.

The existence of lines, planes and various regions is not a first-order consequence of Tarski's axioms. When you add enough further existence axioms (first-order, if you wish, in terms of schemas) to give such regions, number theory *is* embeddable in the system and the Gödel construction goes through.
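For reference, the standard result behind this: Tarski's first-order theory of elementary geometry is complete and decidable, because via coordinates it reduces to the first-order theory of real closed fields,

$$\mathrm{Th}(\mathbb{R};\, +, \cdot, \le, 0, 1)\ \text{is decidable (Tarski)},$$

so the Gödel construction cannot go through in that restricted system; it is exactly the extra existence axioms for regions, enough to interpret the natural numbers with + and ×, that re-import incompleteness.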
 
  • #168


D H said:
Where is the computer that computes the interaction between a photon and ...
Of course this only makes sense if the universe -according to MU - is itself the computer (or the Turing machine).
 
  • #169


tom.stoer said:
Of course this only makes sense if the universe -according to MU - is itself the computer (or the Turing machine).

Physical systems are not necessarily realisations of Turing machines. Turing machines are only realized by certain kinds of physical systems. The universe doesn't care whether its processes are Turing-computable.
 
  • #170
Do you know what I mean by MU (the Mathematical Universe)?
 
  • #171
tom.stoer said:
Do you know what I mean by MU (the Mathematical Universe)?

I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.
 
  • #172


D H said:
I personally don't put much stock in digital physics. Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle? Saying that the universe is computable (a la computable functions) isn't parsimony or reductionism. It is a deus ex machina.
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
 
  • #173
According to the MU hypothesis the universe is not represented by some axiomatic system / calculational machine / ..., it IS a calculational machine.
 
  • #174
tom.stoer said:
According to the MU hypothesis the universe is not represented by some axiomatic system / calculational machine / ..., it IS a calculational machine.

The is/represents distinction doesn't make any difference to the point I'm making.
 
  • #175
yossell said:
I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.
Well, as I argued earlier, whatever the universe actually does must be representable by some definition of computation. Whether or not that definition is the same as Turing computability is another question, but I strongly suspect it is.
 
  • #176
S.Daedalus said:
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
That is not how science works. The burden is not upon science to disprove some radical concept. The burden is upon you as the proponent of some concept to offer proof of the validity of that concept.

That said, there have been a growing number of papers in the last ten years or so that question the applicability of the Church-Turing thesis to physical reality and that discuss the concept of hypercomputation with respect to the physical universe. Here are just a couple; I'll dig some more up over the weekend.

Hajnal Andréka, István Németi and Péter Németi, "General relativistic hypercomputing and foundation of mathematics", Natural Computing, 8:3 499-516 (2009)
http://www.renyi.hu/pub/algebraic-logic/uc08.pdf

Oron Shagrir and Itamar Pitowsky, "Physical Hypercomputation and the Church–Turing Thesis", Minds and Machines, 13:1 87-101 (2003)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.6817&rep=rep1&type=pdf
 
  • #177
yossell said:
I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.

As a matter of fact, it is known that Monte Carlo / random walk methods are used to solve PDEs (especially the tough ones) by arriving at the solution via statistical methods. You can look at it the other way around: nature works via the statistical route, and PDEs are nothing but an approximation to how the universe actually works.
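A minimal sketch of the random-walk idea (the standard "walk on a grid" for the discrete Laplace equation, with made-up grid size and boundary data): the harmonic function at an interior point equals the expected boundary value at the exit point of a simple random walk, so averaging many walks yields the PDE solution without ever assembling the differential operator.

Code:
# Monte Carlo solution of the discrete Laplace equation on an N x N grid:
# u(interior point) = E[ boundary value at the point where a random walk exits ].
import random

N = 20                                 # grid coordinates run from 0 to N

def boundary_value(i, j):
    return 1.0 if j == N else 0.0      # u = 1 on the top edge, 0 on the other three

def walk(i, j):
    while 0 < i < N and 0 < j < N:     # wander until the walk hits the boundary
        di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        i, j = i + di, j + dj
    return boundary_value(i, j)

def estimate(i, j, n_walks=20_000):
    return sum(walk(i, j) for _ in range(n_walks)) / n_walks

# At the centre of the square, symmetry gives the exact answer 0.25.
print(estimate(N // 2, N // 2))        # typically prints something close to 0.25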
 
  • #178
Chalnoth said:
Well, as I argued earlier, whatever the universe actually does must be representable by some definition of computation. Whether or not that definition is the same as Turing computability is another question, but I strongly suspect it is.

If you just define "computable" as whatever the universe does, then it's trivial that what the universe does is computable. But that is not the standard definition of computable. And there are well-defined, mathematically representable physical processes that do not evolve in a Turing-computable way.
 
  • #179


S.Daedalus said:
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
I don't think so. Zero-knowledge proofs show that we can test the claim of someone who pretends to have more computing power than we have. Similarly, there must be some way to check whether someone has truly found a busy beaver - which would demonstrate that he has access to hypercomputation.
 
  • #180
qsa said:
As a matter of fact it is known that monte carlo/random walk methods are used to solve PDE (especially the tough ones) by arriving at the solution via statistical methods. You can look at it as the other way around, nature works via the statistical route, and PDE's are nothing but approximation to how the universe actually works.

What has this to do with my post?
 
