Impact of Gödel's incompleteness theorems on a TOE

  • Thread starter: PhysDrew
  • Tags: Impact, TOE
  • #151


bcrowell said:
Well, I don't want to be unnecessarily argumentative, but it's very clear from your #99 that you did equate them.
Please believe me that I don't!

bcrowell said:
Any theory of physics can be expressed in a formalism that is equiconsistent with real analysis, ...

... the Schrödinger equation can be written using complex analysis, so the formalism you need for the Schrödinger equation is equiconsistent with real analysis. That means that to a logician, there is no difference between the mathematical foundations needed for quantum mechanics and Newtonian mechanics.

...

This is why my objection to tom.stoer's conflation of physical theories with axiomatic systems is not just a quibble. It's a crucial point.
Sorry; I didn't state what has to be done to convert a physical theory into an axiomatic system. There are two problems:
- the "mathematical expressions" of physics are partially ill-defined (QFT)
- a physical theory is more than a mathematical system (*)

(*) in addition you need
1) rules for how to ask questions
2) rules for how to apply the mathematics
3) and rules for how to interpret the results!


(2) should be clear; let's look at (3). In QM / QFT we interpret certain solutions (eigenstates, plane waves, ...) as particles. If your formal system "produces" something like f(x) = exp(ikx) written on a tape, you still do not know what this means. That is step (3) - and without step (3) the whole idea is useless: you simply don't know what your system has calculated for you. You don't understand it. In order to achieve that you need step (3) - and of course the same applies to (1).
 
  • #152


I think it is a bit interesting to see everybody's associations with this topic, which was after all what the OP asked for.

Since the real numbers came up again, and it's true that they are the number system of choice in physics, I'll just throw out again that from the inference perspective, the place where real numbers typically enter the picture (if you make a full reconstruction) is either
- as the value space of degrees of plausibility (in the Jaynes/Cox-style reconstruction of classical probability as an extension of logic),
- as quantification functions on partially ordered sets (as some people work on; Knuth's papers are the last ones I read),
- or, in the measure-theoretic axiomatic perspective, simply introduced axiomatically.

However, as I see it, the choice of uncountable number systems is not obvious. If you consider Jaynes' reconstruction, but add a slightly more computation-oriented approach that doesn't allow infinite information sinks and infinite processing times, then it seems to me that the KEY role of the number system is to "count evidence" in the inference model. Now, are the real numbers really a natural choice here? It's very easy to be seduced - as I have been myself in the past - by the idea that the real numbers are "more general" than the integers, so that there seems to be no rational objection to postulating that we assign a real number in [0,1] to each degree of plausibility, or "evidence rating", or whatever.

But the real problem starts once we realize that we then suddenly have uncountable, infinite sets of possibilities. And this makes computation problematic. It also makes COMBINING two sets of information in a rational way difficult, because how much information does a piece of the continuum really contain? Apparently the state space is uncountably infinite. So is this continuous index needed? It seems to cause problems that we don't need. How do you even make sure, unambiguously, that all possibilities add up to 1 when the set is uncountable? Sure, we can often manage by defining integration measures, but there are problems here - problems that get worse as the theory of computing expectations is developed further.
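To make the normalization worry concrete, here is the worked comparison I have in mind (the measure µ below is the assumed extra ingredient, not something the inference rules hand you):

[tex]
\text{countable case: } \sum_i p_i = 1 \qquad\qquad \text{uncountable case: } \int p(x)\,d\mu(x) = 1
[/tex]

In the countable case the normalization is a plain sum of evidence counts; in the uncountable case you must first choose the measure µ, and that choice is extra structure with no obvious inferential justification.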

This problem is the same for the complex and the real numbers. So it has bearing on complex numbers, quaternions, or other higher multiples; it's about the physical basis, or lack thereof, of the continuum.

S.Daedalus said:
If you start with a system in state x and your theory tells you that its evolution is given by f(x) where f is not a computable function, what are you going to do?

I was thinking the same.

/Fredrik
 
  • #153


Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
 
  • #154


D H said:
Fredrik, you are assuming the universe has to be computable.

I don't understand how you went from computable expectations (which is what I mean) to a computable universe.

What I admittedly assume - on the grounds that it's the most rational assumption I can see, and hence a justified working hypothesis - is that a generic observer that is about to form an action does so by a form of computation based on available evidence. Anything else simply makes my brain throw in the towel. That doesn't make it right, though. But I have good confidence in this view.

But "computable universe" - I'm not even sure exactly what you mean. Can you explain? Then I can see whether I agree.

/Fredrik
 
  • #155


S.Daedalus said:
Not sure if I understand this right -- I mean, basically, you'd want your theory to be computable because you want to compute things with it, no? If you start with a system in state x and your theory tells you that its evolution is given by f(x) where f is not a computable function, what are you going to do? Ask some old hag with a crystal ball?

Or did you mean something else?

Yes. I meant: why should one consider physical processes as the effect of computations of some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.
 
  • #156


D H said:
Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.
I'm not so sure. I mean, take the simple statement that if x is the configuration now, then f(x) is the configuration at some later time; we can "compute" f(x) simply by setting up the configuration x and then measuring it at the later time. In effect, the universe must have some means of "computing" the later configuration.

So it's obviously true that if we take "computability" to simply mean, "it is possible to construct a system that computes the result," then the universe must necessarily be computable in this sense. However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.

Now, this runs into obvious difficulties in that computability is generally defined in terms of a very specific sort of computer, such as a Turing machine, but I have a strong suspicion that things like the uncountability of the real numbers will mess up any definition of computability.
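For concreteness, the textbook example of that mismatch (a standard fact, nothing specific to physics): the halting function

[tex]
h(e,x) = \begin{cases} 1 & \text{if program } e \text{ halts on input } x \\ 0 & \text{otherwise} \end{cases}
[/tex]

is a perfectly well-defined total function, yet Turing's diagonal argument shows that no Turing machine computes it. So "well defined" and "Turing computable" already come apart before the uncountability of the reals even enters.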
 
  • #157


suprised said:
Yes. I meant: why should one consider physical processes as the effect of computations of some kind of computer, reduce everything to information processing, bring in Turing machines, etc.? There seems to be no clear reason for that. That's also what Stephen Wolfram tried to argue a few years ago, in his way, but to no avail; he took his cellular automata a bit too literally.
However, I think he's right in stressing the notion of computational equivalence. Basically, anything I can describe by a computable theory -- or equivalently, anything I can construct an arbitrarily exact computer simulation of -- is equivalent to whatever one might have in mind in a narrower sense when one speaks about a 'computer'. This is the same equivalence as between Turing machines, lambda calculus, partial recursive functions etc., at least in spirit: whatever one can compute, the other can, too.

Or, perhaps from another perspective, if we can build computers in reality, and can in turn model reality on computers, then in that sense reality (or the universe or whatever enveloping term one might prefer) is equivalent to -- or simply is, for short -- a computer.

This is conjectural and unprovable in the same sense as the Church-Turing thesis: there might be a framework of computation strictly more powerful than Turing machines et al, and similarly, it might be the case that reality can't be completely encompassed by some von Neumann architecture computer -- but I'd argue that it's the more parsimonious assumption to expect this not to be the case, and after all, I have a hard time seeing how one would describe a non-computable universe in an intelligible way; I strongly suspect our brains, in the end, to be describable in terms not too different from classical computation, and certainly, everything we can write down falls well within that paradigm -- so if nothing else, should the universe turn out not to be computable, we'd have to rethink the notion of communicating ideas via papers in journals. :wink:
 
  • #158


Chalnoth said:
However, the question then arises as to whether or not this definition of computability meshes with, say, the definition of computability in the sense of a Turing machine.

I agree with this. I don't think Turing's computability model is necessarily the one that is most useful to us here (from the point of view of physicists trying to understand interactions and evolution in terms of some kind of computation).

The two most obvious objections are the notion of unbounded tapes and possibly infinite processing time. (*)

suprised said:
Yes. I meant why should one consider physical processes as the effect of computations of some kind of computer.. reduce everything to information processing.

What I have in mind is that the observer is itself "the computer", and that computing is IMO mainly analogous to "rational action". So to understand how a system evolves is pretty much the same as understanding how and why it makes a given computation. The question is how to describe this computation. I never suggested that a Turing machine straight out of the book is the right abstraction. I doubt it.

As to why this is the right path, I suppose this boils down to confidence built up from subjective experience and intuition. I feel pretty convinced, but as someone said in an earlier thread, this is all poetry until someone can make it fly. And I agree. But this applies to any other research program too, like strings.

I also see parallels to the entropic research programs: the way I see it, entropic gradients are what drive each computer, but I don't think the computations are deductive; they are inductive and still contain some randomness.

(*) I see an analogy between unitary evolution and a SELF-computation where the computer somehow predicts its own output and evolves, while each measurement "resets" the input and possibly, in some cases, also evolves the computer hardware. And here the computing is moot, as the computer hardware and input are possibly perturbed; the result is therefore whatever is in the register at that time, so what is relevant is how much processing gets done before the next input arrives. So I think we see evolution of algorithms and computers so as to strike a balance between speed and accuracy. Sometimes a quick approximate prediction may be far more valuable to a player than a more accurate answer that simply arrives too late. Problems like this are what guide me in the search for the new "computability model" that might work well for physics abstractions. So new computability models may need to be developed as well. I see no reason we need to settle for Turing machines.

/Fredrik
 
  • #159


D H said:
There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.

The difference becomes smaller if we do not have a computer with sufficient memory and computing power that can actually solve that equation without just getting a mess of rounding errors, which often happens given sufficient complexity.

Then it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.
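As a minimal sketch of what I mean by "less accurate but at least computable" (forward Euler here is just an illustrative stand-in; nothing about it is specific to a TOE):

[code]
# Hypothetical sketch: forward Euler as a crude but computable
# replacement for the exact flow of dx/dt = f(x).
def euler(f, x0, dt, steps):
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)   # one explicit Euler step
    return x

# dx/dt = x has the exact solution e^t; at t = 1:
print(euler(lambda x: x, 1.0, 0.001, 1000))  # ~2.7169, vs e = 2.71828...
[/code]

The point is only that the approximation terminates with finite memory and finite time, which is exactly what the exact solution may not offer.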

I think we agree that what we discuss here - theories - has the purpose of producing predictions. If we have a nifty gigantic equation for which there is no way, with available resources, to compute the predictions in an accurate and unambiguous way, then we simply don't have a prediction.

This is why I'd even claim that the TOE will be computer dependent: there is no TOE that is compatible with, and executable on, an arbitrary computer. Here I mean computable in a timely manner! No one has use for a computer that makes the computation in infinite time, as the player is dead before he has even produced a reaction.

/Fredrik
 
  • #160


Some responses so far have focused on people possibly being wrong about what Turing computable means, but I'm not talking about Turing machines. I'm LOOKING for the right computability abstraction, and I just describe, from understanding and personal intuition, what I think its traits must be - IF we accept this computing-thinking.

I know Turing machines have unbounded tapes etc.; this is why I'm not talking about Turing machines.

/Fredrik
 
  • #161


D H said:
Fredrik, you are assuming the universe has to be computable. There is a big difference between saying that the universe obeys a set of differential equations versus saying it is computable.

This is what I tried to express. The problems arise in complex dynamical systems, for example: while the dynamics might be governed by simple laws, say nonlinear differential equations, there is often no way to predict what the result will look like, because initial conditions can never be known to arbitrary precision, while infinitesimal variations can influence the result macroscopically. Just intrinsic thermal or quantum mechanical fluctuations can flip a system into another phase, etc.
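To make that sensitivity concrete, a minimal sketch (the logistic map at r = 4 is just a stand-in for any chaotic law, and the 1e-12 offset plays the role of the unknowable part of the initial condition):

[code]
# Two trajectories of the chaotic logistic map x -> 4x(1-x),
# started a mere 1e-12 apart, separate to order one within ~45 steps.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
[/code]

No finite-precision knowledge of the initial state postpones this divergence indefinitely; each extra digit of precision only delays it by a few steps.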

E.g. one can imagine building a trigger for an atomic bomb which goes off or not depending on the properties (time, direction) of the radioactive decay products of a given single atom. Is there a way to compute, with any sort of machine, whether the city still exists tomorrow or not? I don't think so.

Similarly, many things we see in nature are just frozen accidents of history, and there is no particular reason for them to be like that rather than a bit (or very) different. I guess the concept of computability makes no real sense here.

Fra said:
Then it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.

This is precisely what I doubt for the reasons explained above.
 
  • #162


Thanks suprised, we're getting closer. You use almost my own arguments for a different conclusion :) I think I see now: the difference is that we simply have quite different ideas of what a theory is and what the purpose of a theory is, which is the core of my argument.

About chaotic dynamical systems you're right, I fully agree, but your conclusion is different from mine. This merely is an argument against reductionism.

I'll type more later and try to explain what I mean.

/Fredrik
 
  • #163


I personally don't put much stock in digital physics. Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle? Saying that the universe is computable (à la computable functions) isn't parsimony or reductionism. It is a deus ex machina.
 
  • #164


D H said:
Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle?

That's a relevant question, I agree. But to defend the idea without having a definite answer, I'd say that the answer depends on the observer: it's the observer that is the computer (abstractly), and the computer itself evolves.

The most interesting perspective is when you consider the observer to be part of the interactions, and here I think that matter, and the microstructure of matter, IS the computer, if you draw the analogy like

input -> computation -> output
perturbation -> internal reaction -> reaction

But both the computation algorithm and the perturbed system evolve and learn. So the computer and its algorithm "improve" the more it interacts. Equilibrium conditions simply mean that input is consistent with output and that the algorithm is stable.

I think that if we can classify such algorithms corresponding to steady states, we may (this is my conjecture) find a one-to-one mapping between the actions of such "computing players" and the actions of abundant systems in nature, such as elementary particles, atoms, etc.

So the logic is, I think, clear. The question is: why would anyone have confidence in this? Are there good reasons to think that this will be fruitful? I think so, but that's just me.

/Fredrik
 
  • #165


Fra said:
perturbation -> internal reaction -> reaction

The predictive value of this idea is that I expect respecting computational and complexity bounds to produce constraints on the set of possible action functions. And this will have interesting scaling properties with complexity (memory size), which I associate closely with the energy scale, converging to something possibly unique as complexity -> zero.

/Fredrik
 
  • #166
Moderation comment:

I had an OCD moment and got sick of seeing "Godel's incompletenss theorem" misspelled in the title and hence in every response. I changed the title of the thread to make the spelling correct and to distinguish this thread from the other ongoing thread on Gödel's theorems from a purely mathematical perspective.

However, I did not have a CDO moment. I feel no compulsion to change the "Re: Godel's incompletenss theorem" in the title bars in every post of this thread. (CDO = OCD taken to the extent that the letters have to put in proper alphabetic order.)

I hope the change in the title isn't too confusing here.
 
  • #167


Hurkyl said:
There is a class Preline which is a subclass of Point x Point of elements satisfying:
Preline(x,y) iff x != y
and the class Line is the quotient of Preline by the relation:
if (x,y) and (u,v) are in the class Preline, then (x,y) =_Line (u,v) iff Collinear(x,y,u) and Collinear(x,y,v)
and the incidence relation "lies on" on Point x Line is given by:
if (x,y) is in the class Line and z is in the class Point, then z lies on (x,y) iff Collinear(x,y,z)
(and this is a well-defined relation, because it respects =_Line)

No set theory involved, just abstraction.

Tarski's system doesn't contain quantification over classes or types, doesn't contain quantification over the cartesian product of classes or types, and doesn't contain an abstraction operator, and the existence of the things you mention in your definition does not follow as a matter of first-order logic from Tarski's axioms.

The existence of lines, planes and various regions is not a first-order consequence of Tarski's axioms. When you add enough further existence axioms (first-order, if you wish, in terms of schemas) to give such regions, number theory *is* embeddable in the system and the Gödel construction goes through.
 
  • #168


D H said:
Where is the computer that computes the interaction between a photon and ...
Of course this only makes sense if the universe - according to MU - is itself the computer (or the Turing machine).
 
  • #169


tom.stoer said:
Of course this only makes sense if the universe - according to MU - is itself the computer (or the Turing machine).

Physical systems are not necessarily realizations of Turing machines. Turing machines are only realized by certain kinds of physical systems. The universe doesn't care whether its processes are Turing-computable.
 
  • #170
Do you know what I mean by MU (the Mathematical Universe)?
 
  • #171
tom.stoer said:
Do you know what I mean by MU (the Mathematical Universe)?

I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.
 
  • #172


D H said:
I personally don't put much stock in digital physics. Where is the computer that computes the interaction between a photon and an electron, or that determines when a gold-198 nucleus emits a beta particle? Saying that the universe is computable (à la computable functions) isn't parsimony or reductionism. It is a deus ex machina.
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
 
  • #173
According to the MU hypothesis the universe is not represented by some axiomatic system / calculational machine / ..., it IS a calculational machine.
 
  • #174
tom.stoer said:
According to the MU hypothesis the universe is not represented by some axiomatic system / calculational machine / ..., it IS a calculational machine.

The is/represents distinction doesn't make any difference to the point I'm making.
 
  • #175
yossell said:
I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.
Well, as I argued earlier, whatever the universe actually does must be representable by some definition of computation. Whether or not that definition is the same as Turing computability is another question, but I strongly suspect it is.
 
  • #176
S.Daedalus said:
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
That is not how science works. The burden is not upon science to disprove some radical concept. The burden is upon you as the proponent of some concept to offer proof of the validity of that concept.

That said, there have been a growing number of papers in the last ten years or so that question the applicability of the Church-Turing thesis to physical reality and that discuss the concept of hypercomputation with respect to the physical universe. Here are just a couple; I'll dig some more up over the weekend.

Hajnal Andréka, István Németi and Péter Németi, "General relativistic hypercomputing and foundation of mathematics", Natural Computing, 8:3 499-516 (2009)
http://www.renyi.hu/pub/algebraic-logic/uc08.pdf

Oron Shagrir and Itamar Pitowsky, "Physical Hypercomputation and the Church–Turing Thesis", Minds and Machines, 13:1 87-101 (2003)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.6817&rep=rep1&type=pdf
 
  • #177
yossell said:
I had understood it as a mathematical representation of the kinds of processes going on in the physical universe possibly involving continuous quantities themselves varying continuously according to partial differential equations. Though demonstrably representable mathematically, such systems do not necessarily correspond to Turing machines.

Apologies if I got you wrong.

As a matter of fact, it is known that Monte Carlo / random walk methods are used to solve PDEs (especially the tough ones), arriving at the solution via statistical methods. You can look at it the other way around: nature works via the statistical route, and PDEs are nothing but approximations to how the universe actually works.
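A minimal sketch of the random-walk route for one such PDE (the grid size, boundary values and trial count are illustrative choices): for Laplace's equation, the value of the solution at an interior point equals the expected boundary value seen by a random walk started there.

[code]
import random

# Toy Dirichlet problem: Laplace's equation on an n x n grid with
# u = 1 on the top edge and u = 0 on the other three edges.
# A simple random walk from (i, j) runs until it hits the boundary;
# the fraction of walks exiting through the top edge estimates u(i, j).
def walk_estimate(i, j, n=20, trials=20000):
    hits = 0
    for _ in range(trials):
        x, y = i, j
        while 0 < x < n and 0 < y < n:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        if y == n:  # exited through the top edge, where u = 1
            hits += 1
    return hits / trials

print(walk_estimate(10, 10))  # centre of the grid; by symmetry the exact value is 0.25
[/code]

No grid-wide linear solve is ever performed; the statistics alone carry the answer.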
 
  • #178
Chalnoth said:
Well, as I argued earlier, whatever the universe actually does must be representable by some definition of computation. Whether or not that definition is the same as Turing computability is another question, but I strongly suspect it is.

If you just define computable as what the universe does, then it's trivial that what the universe does is computable. But that is not the standard definition of computable. And there are well-defined, mathematically representable physical processes that do not evolve in a Turing-computable way.
 
  • #179


S.Daedalus said:
In the absence of evidence for a hypercomputable entity, it's indeed more parsimonious to not assume its existence; and frankly, I don't think it's possible to produce such evidence -- as I said, you have to be an oracle to recognize an oracle.
I don't think so. Zero-knowledge proofs show that we can test the claim of someone who pretends to have more computing power than we have. Similarly, there must be some way to check whether someone has truly found a busy beaver value - which would demonstrate he has access to hypercomputing.
 
  • #180
qsa said:
As a matter of fact, it is known that Monte Carlo / random walk methods are used to solve PDEs (especially the tough ones), arriving at the solution via statistical methods. You can look at it the other way around: nature works via the statistical route, and PDEs are nothing but approximations to how the universe actually works.

what has this to do with my post?
 
  • #181


D H said:
That is not how science works. The burden is not upon science to disprove some radical concept.
I'm not sure where you think I said it was.

The burden is upon you as the proponent of some concept to offer proof of the validity of that concept.
I can readily demonstrate the existence of computers; I'd be surprised if anybody could say the same about hypercomputers.

The thing is, you're proposing the existence of an entity that we have no evidence for, and that is not necessary in any way to formulate a theory in accord with all observations to date. So since it can be done without, it should be done without.

Of course, if someone manages to demonstrate actual hypercomputation occurring in nature, all of this is falsified. But still, my question stands: If somebody puts a device on my desk and claims it to be a hypercomputer -- how do I test this claim?
 
  • #182


Lievo said:
I don't think so. Zero-knowledge proofs show that we can test the claim of someone who pretends to have more computing power than we have. Similarly, there must be some way to check whether someone has truly found a busy beaver value - which would demonstrate he has access to hypercomputing.
Of course, you can convince another on a statistical basis that you possess a hypercomputation-capable device, but never in any absolute sense; in any given scenario, you need only more conventional computational power than your opponent has access to in order to convincingly fake having a hypercomputer.

One might argue that this is similar to never being able to tell whether a theory is true in an 'absolute sense', but there is in some sense an infinite difference between a device being actually a hypercomputer and merely pretending to be, while the difference between a strictly false, but approximately right theory and the true behaviour of some system is small, and can be gauged by experiment. In other words, the statistics tell you 'how right' or 'how wrong' your theory can at most be, while they don't tell you 'how hypercomputational' some gadget is.
 
  • #183
yossell said:
If you just define computable as what the universe does, then it's trivial that what the universe does is computable.
Well, as I stated earlier, if the universe does something, then you can build a "computer" that calculates what the universe does simply by setting up the system and measuring it later.
 
  • #184
yossell said:
what has this to do with my post?


I was not criticizing you. I was just clarifying the concept of the use of differential equations (which you mentioned as a possibility through your rough understanding of the MUH) in describing nature. In my opinion nature is made of math (more like random numbers with logic), but the system is inherently statistical, so we use all sorts of math to approximate it, especially differential equations (equivalently, path integrals).
 
  • #185
Chalnoth said:
Well, as I stated earlier, if the universe does something, then you can build a "computer" that calculates what the universe does simply by setting up the system and measuring it later.

And this is not what is meant by computable in computability theory, and certainly doesn't guarantee that whatever it is you are calculating is Turing-computable.
 
  • #186
Chalnoth said:
Well, as I stated earlier, if the universe does something, then you can build a "computer" that calculates what the universe does simply by setting up the system and measuring it later.
No, you can't (not in all cases). Let me explain what I have in mind (it's not yet formal).

Suppose the universe is an axiomatic system that produces a theorem (according to Gödelization), or that produces a "number" on a tape (where the tape = the number = the universe) according to some algorithm. Now one can measure the complexity of the algorithm; and b/c the universe is a nice guy, it decided to use the shortest and most efficient algorithm to do that. So unfortunately there is no shortcut to verify the calculation; there is no "simulation" of what the universe is doing.

Think about one specific algorithm A and about the set of all possible Turing machines {T}. There is certainly one Turing machine TA that does the calculation for A in the most efficient way (with the minimum number of steps). If this pair (A, TA) is our universe, you will never be able to prove this within our universe. You can't be faster than the universe itself with its calculation.

In addition (as we mentioned earlier) it is not clear whether the universe is (equivalent to) a Turing machine, as the latter has (by definition) found a solution as soon as it stops (the solution is what has been written on the tape). But the universe may not stop - nevertheless it produces some output, namely its own time evolution. So I am still not sure whether one has to generalize the concept of Turing machines slightly.
 
  • #187


S.Daedalus said:
you need only more conventional computational power than your opponent has access to in order to convincingly fake having a hypercomputer.
Well, in fact, no. ZK proofs allow you to check that your opponent has access to PSPACE power, even if there are many logical steps between you and her. But I recognize I don't know a ZK proof proving hypercomputational power.

S.Daedalus said:
If somebody puts a device on my desk and claims it to be a hypercomputer -- how do I test this claim?
Look at Bennett's writings on Maxwell's demon. What prevents the demon from giving you free enthalpy is the fact that he needs to clear his memory. So if you have a hypercomputation device, let's talk to your favorite demon:
- Give me free energy.
- I can't, I don't have enough memory space.
- Just compress the information you have on your tape to get some free space.
- That wouldn't work, my dear; compression is not computable.
- Here's a device that performs hypercomputation.
- Cool! Then I can create free energy! And I'll keep it for myself, because I'm a demon AHAHAHA!
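The "compression is not computable" line is the standard fact about Kolmogorov complexity (stated here for a fixed universal machine U, with |p| the length of program p):

[tex]
K(x) = \min\{\, |p| \;:\; U(p) = x \,\}
[/tex]

is not a computable function of x, so a demon restricted to algorithmic means cannot in general find the shortest description of its tape contents.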
 
  • #188


S.Daedalus said:
If somebody puts a device on my desk and claims it to be a hypercomputer -- how do I test this claim?
One other way to say the same: give her a random series of boolean vectors of size n. If she's able to answer with a series of TMs of size n' < n that output your boolean vectors, you can be exponentially confident that she has access to hypercomputing. This is not a ZK proof, but it works.
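The exponential confidence comes from a simple counting argument (a back-of-the-envelope bound, identifying machines with binary descriptions): there are fewer than 2^(n'+1) machines of description length at most n', but 2^n boolean vectors of length n, so the chance that a uniformly random vector is the output of any such machine is at most

[tex]
\frac{2^{n'+1}}{2^{n}} \;=\; 2^{\,n'+1-n},
[/tex]

which shrinks exponentially as n grows past n'.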
 
  • #189


Lievo said:
Well, in fact, no. ZK proofs allow you to check that your opponent has access to PSPACE power, even if there are many logical steps between you and her. But I recognize I don't know a ZK proof proving hypercomputational power.
Eh, I was writing up a long reply to you, but the computer ate it; it's probably for the best, since it's taken us too far off topic, and I've become a bit over-argumentative on the issue (sorry if I stepped on anybody's toes, I sometimes get a bit too excited).

As a last comment, note that I don't deny the possibility of a probabilistic proof of hypercomputation, which your examples amount to, but claim that one never can be absolutely certain of the hypercomputational capacities of some device.
 
  • #190


friend said:
Gödel's incompleteness theorem only applies to systems that include math. But math is an abstract construction developed for our convenience, and I consider that it might not be applicable to physical things. For example, when counting sheep, you can start counting with any of them. There isn't any one particular sheep that must be labeled "one", etc. And so it would seem that you cannot assign an axiom to any one particular physical thing. So you can't say this thing is represented by an axiom that is already included in your theory, or is represented by an axiom that is not yet proven by your theory.

If the laws of physics can be represented in a coordinate-independent fashion and reduced to algebraic expressions that do not depend on any units of measure, then physics doesn't seem to depend on counting anything - distances or particular units - when measuring. And if particular units are not required, can Gödel's incompleteness theorem still be applied? You still have addition, subtraction, multiplication, and division as in math, but the particular numbers don't seem to be physically meaningful. And if you cannot assign a definite number value to some measurement (since you can always change the units of measure being used), then is incompleteness still a possibility? Thanks.
 
  • #191


S.Daedalus said:
As a last comment, note that I don't deny the possibility of a probabilistic proof of hypercomputation, which your examples amount to, but claim that one never can be absolutely certain of the hypercomputational capacities of some device.
Well, I won't argue with that. But if you follow the protocol above, it will quickly become far more likely that you had a heart attack, took a meteorite to the head, experienced a Richter 12 earthquake, and fell into a black hole formed from a spontaneous vacuum fluctuation, all at the same time, than that your opponent was able to answer by chance. But yes, this would still be just a probabilistic proof.

To be honest, if I really saw someone (statistically) demonstrating hypercomputation, I wouldn't interpret it as hypercomputation at first sight. I'd interpret it as a clear indication that my understanding of the math involved sucks. ;-)

friend said:
If the laws of physics (...) then is incompleteness still a possibility? Thanks.
In my view, if the TOE really depends on real numbers, then it's not computable, not consistent, and then Gödel's incompleteness of course won't apply.
 
  • #192


Lievo said:
In my view, if the TOE really depends on real numbers, then it's not computable, not consistent, and then Gödel's incompleteness of course won't apply.
Not a very good view. An inconsistent theory is utterly worthless. There's nothing wrong per se with an incomplete theory.
 
  • #193


D H said:
Lievo said:
then Gödel's incompleteness of course won't apply
There's nothing wrong per se with an incomplete theory.
With all due respect, D H, you haven't been reading me. From the beginning, in fact.
 
  • #194


suprised said:
Fra said:
Then it would be much better to have an approximation of the differential equation that may be less accurate but at least computable.
This is precisely what I doubt for the reasons explained above.

Mmm.. This thread is getting about as stirred up as one could expect. I had to rethink what exactly we are discussing here.

About the chaotic dynamical systems I agree, I have nothing to add there.

For me, in the inference perspective I've chosen, a theory is an interaction tool = an inference model, not an ontological statement about reality. So a theory in this sense is valued by how well it serves its purpose as an interaction tool.

I consider the theory to be part of the hidden prior information. Theory is like "condensed" information. This doesn't mean it can't melt down and be revised.

A theory that doesn't allow computation of expectations with reasonable efficiency is simply useless.

An inference system based on a complex dynamical system sensitive to initial conditions is simply useless and lacks predictive value unless a solution can be computed. Encoding such a model is then a waste of resources. The more fit inference would probably be based on statistical models and on finding the stable macroscopic variables.

So what's important is not just computability, but computing efficiency as well.

Clearly, for a computer with limited time and memory, the set of computable algorithms is smaller. This is why computability in the physical sense must depend on the observer. There is, IMO, no objective meaning of what's computable and what's not; that's what I mean by treating a theory as an interaction tool for a given observer.

The idea (IMHO) of this relative computability is that a given observer is simply indifferent to any non-computable causations, and this idea may also explain unification, in the sense that the set of possible interactions is bound to shrink as we scale down the computational complexity. Also, some "interaction types" are simply invisible to a sufficiently "simple" observer, since it can never compute, and hence never infer, their existence.

When I insist on this computability, I don't mean that the entire universe is computable in the sense of a gigantic cellular automaton. That is what some other people think, but it's not what I'm talking about. I see it as interacting systems where EXPECTATIONS follow cellular automata, but where some things are simply not computable.

OK, maybe this is the confusion: by computable, I mean that EXPECTATIONS are computable. I do not mean that ACTUAL evolution is computable. The actual future is always, in principle, non-computable from the point of view of a given observer. But the trick is that I think the actions of any system respond only to its expectations of the future, not to the actual future.

I feel like I just get deeper into the mud of mutual confusion here, so maybe I should stop. Maybe we can consider this thread a collective painting.

/Fredrik
 
  • #195


Fra said:
OK, maybe this is the confusion: by computable, I mean that EXPECTATIONS are computable. I do not mean that ACTUAL evolution is computable. The actual future is always, in principle, non-computable from the point of view of a given observer.

I'm not sure if this helps, but note the similarity between this and the unitary evolution IN BETWEEN measurements (i.e. the EXPECTED evolution) versus the possibly non-unitary actual evolution (plenty of measurements included).

In my picture this similarity isn't a coincidence.

/Fredrik
 
  • #196


yossell said:
Tarski's system doesn't contain quantification over classes or types, doesn't contain quantification over the cartesian product of classes or types, and doesn't contain an abstraction operator,
*sigh* Last time: these things* are part of first-order logic itself, and have absolutely nothing to do with the formal language and axioms Tarski chose to present a theory of Euclidean geometry.

*: except I don't know what you mean by "abstraction operator". When I said abstraction, I was using it as the natural-language term describing a quality of how mathematicians develop mathematics.

While formulations of first-order logic vary, these features are always present. Some forms of type theory, or an approach to logic based on category theory, would have all of these explicitly. Even untyped logic has them, in the sense that any statement involving these features can be algorithmically converted into a formula in the untyped logic's language.

And since you are perpetually worried about whether or not all of these bells and whistles change things relative to a stripped-down version of logic, allow me to ease your worry by stating the following theorem:

Let T be a formal theory of untyped logic. Let C be its syntactic category, and T' be the theory described by C (which has product types, subtypes defined by predicates, quotient types, and all of those extra features I've been talking about). Let S be a statement in the language of T. Then S is a theorem of T if and only if its analog in the language of T' is a theorem of T'.

Furthermore, there is* a one-to-one correspondence between models of T and models of T'.

*: Generally, definitions are usually more relaxed - so that the correspondence is not strictly one-to-one, but still remains one-to-one in every relevant way. However, strict definitions are possible so that this truly is one-to-one.
 
  • #197


Hurkyl said:
*sigh* Last time: these things* are part of first-order logic itself, and have absolutely nothing to do with the formal language and axioms Tarski chose to present a theory of Euclidean geometry.

Sigh - for the *last* time: the existence of types is not a first-order consequence of Tarski's theory. Sigh - a first-order theory which says there are exactly three objects has models which contain only three objects; the models don't contain further types (I say this because I sometimes try to explain your persistent misreadings of my posts by your thinking that I don't think there can be first-order theories of types or sets). Sigh - you can add first-order axioms saying that certain types, or type-like things, or sets, or sums exist. Sigh - but this is an extension of the theory.

From your last paragraph it looks as though you're saying that this extra machinery is added in such a way that it is conservative over the language of the original theory. It can be so added - but remember that, in a first-order language, we're typically replacing second-order axioms with induction schemas, and these schemas have more instances if the language of the theory is expanded. If it is expanded, the new theory is not necessarily conservative over the original theory. And, indeed, as Shapiro showed, you can embed the numbers into geometry.

But I too am tired of repeating myself, and I can live without your sighs, so goodnight.
 
  • #198
I think Hawking was reasoning by analogy, not necessarily proposing that Gödel's theorem literally applies to the universe. Instead, that the previously unexpected fact that a large class of finite systems of axioms must be incomplete makes it plausible that the universe cannot be completely described with a finite set of laws. Adding to this plausibility is that, so far, all probings into further distances, times and energy regimes have uncovered new laws. So far, there is no sign of an end to this.

Turning the question around, I asked myself: Have I ever seen even the slightest actual evidence that the universe can be described by a finite set of laws (let alone laws motivated by encompassing just known phenomenology, e.g. the 4 forces)? My answer is that I am not aware of *any* evidence *at all* that this should be expected. Thus I conclude that the only basis for this expectation is hubris and wishful thinking.
 
  • #199
PAllen said:
Have I ever seen even the slightest actual evidence that the universe can be described by a finite set of laws (let alone laws motivated by encompassing just known phenomenology, e.g. the 4 forces)?
If Gödel incompleteness applies and you assume that the universe is consistent, then no finite set of laws/axioms in your TOE can give a complete description.

PAllen said:
Thus I conclude that the only basis for this expectation is hubris and wishful thinking.
Does this change your conclusion?
 
  • #200
Lievo said:
If Gödel incompleteness applies and you assume that the universe is consistent, then no finite set of laws/axioms in your TOE can give a complete description.


Does this change your conclusion?

No, it supports it.
 
