Impact of Gödel's incompleteness theorems on a TOE

  • #251
Chalnoth said:
Nope. All possible physical events would still be consequences of a true TOE. It's just that we would be doomed to never know all of the consequences of the theory, in that whatever list of proven-true or proven-false statements we manage to come up with, it is guaranteed that there are still more true or false statements out there that we have yet to prove.

I think this is true even of deductive logic: you cannot in practice write out every possible statement, even though any statement that is written out can be proven true or false. And we know deductive logic is complete.

As I understand it, incomplete means that there are true statements that are inherently unprovable from the listed axioms of the system. So if a system of physical law is incomplete, then there are events that do occur but are not describable/reducible/provable with that list of physical laws. So when I say, "does NOT describe ALL possible physical events", I mean is not provable from the axiomatized list of physical laws. So I still stand by my prior statement.
 
  • #252
friend said:
I think this is true even of deductive logic: you cannot in practice write out every possible statement, even though any statement that is written out can be proven true or false. And we know deductive logic is complete.
That's not quite the same thing. Deductive logic is complete in the sense that it is possible to write down every possible abstract form that constitutes a true statement within the theory. Obviously there are an infinite number of ways of applying a particular abstract form, but there are only a finite number of abstract forms that are true. The finite number of abstract forms within deductive logic is a consequence of its completeness.

friend said:
As I understand it, incomplete means that there are true statements that are inherently unprovable from the listed axioms of the system. So if a system of physical law is incomplete, then there are events that do occur but are not describable/reducible/provable with that list of physical laws. So when I say, "does NOT describe ALL possible physical events", I mean is not provable from the axiomatized list of physical laws. So I still stand by my prior statement.
I suppose that's correct.
 
  • #253
Chalnoth said:
Let me put it this way: if it is possible to describe reality as a set of distinct but interrelated physical systems, then it is also possible to describe reality as one physical system. If, in one description of reality, some physical law changes with time, then in another description the physical laws remain unchanged while the apparent change is explained by the dynamics of the unchanging theory.

Basically, if there is a way that reality behaves, then there is a way to accurately describe that behavior. Because of this, it must be possible to narrow it all down to one single self-consistent structure (though that structure may be extremely complex).

I understand what you say, but I actually still disagree.

My point is that not all changes are decidable. You assume that all changes are predictable in the deductive sense, and thus can be expected. I argue that the physical limits of encoding and computing expectations make this impossible.

Chalnoth said:
Let me put it this way: if it is possible to describe reality as a set of distinct but interrelated physical systems, then it is also possible to describe reality as one physical system.

This is true, but I'm trying to explicitly acknowledge that any inference and expectation is encoded by a physical system (observer), which means that any expectation only contains statements about its own observable neighbourhood, and moreover only a PART of it, as all information about the environment cannot possibly be encoded by a finite observer.

Chalnoth said:
If, in one description of reality, some physical law changes with time, then in another description the physical laws remain unchanged while the apparent change is explained by the dynamics of the unchanging theory.

Again, I partially agree with this. What you describe is part of what happens also in my view, but you assume that there can be a localized expectation of ALL changes of the future. I don't think so. What you say only makes perfect sense when we study small subsystems where the experiment can be repeated over and over again, and where we have the capacity to store all the data.

What you say is effectively true for particle physics, because there the subsystem condition applies. But it fails for cosmological models, and it would also fail for an inside view of particle physics where one tries to "scale" the theory down to, say, a proton. IMO this then also becomes related to the lack of unification.

Some parts of my arguments also appear in these talks:

- http://pirsa.org/08100049/ "On the reality of time and the evolution of laws" by Smolin, except I think Smolin is not radical enough

In it, Smolin talks about EVOLVING law in the Darwinian sense, and a guy in the audience, thinking just like you, asks: OK, if the law evolves, then obviously isn't there a meta-law that describes how? Smolin answers that he doesn't know, but I think the answer must be no. And that's because such a law would not be decidable in general.

But it's still true, in a constrained sense, that what is undecidable to one observer can be decidable to another (usually more complex) observer. This is how it works in particle physics. The observer is essentially the entire lab frame, and it's extremely complex and effectively "monitors" the entire environment of the volume where things happen.

So I think your suggestion is partly right, but it can never be complete. And I think this is an important point.

- http://pirsa.org/10050053/, "Laws and time in cosmology", by Unger

These guys talk about cosmological laws, but if you combine this with the search for a theory of how laws scale (like a replacement of RG), then this has implications also for particle physics. But there the implication isn't that laws evolve from our perspective (they don't, at least not effectively so); the evolution is relative to the particles, and understanding this might help the unification program. (Or so I think, but it's just my opinion of course.)

Chalnoth said:
Because of this, it must be possible to narrow it all down to one single self-consistent structure (though that structure may be extremely complex).

Ok, this is a good point. It's actually because it's so extremely complex that it, at the end of the day, in fact ISN'T possible for a finite observer. ALSO, what you suggest seems to only work in retrospect. I.e. "record history" if it fits into your memory, and call the recorded pattern a law. If the future violates that pattern, record the further future and "extend the law". I think it should be clear why such an approach is bound to be sterile.

/Fredrik
 
  • #254
Chalnoth said:
But no, finding a theory of everything would not be a discovery that we are in equilibrium. The two are completely and utterly different things. I can make neither heads nor tails of what you mean by equilibrium in your post, but it clearly has nothing whatsoever to do with the thermodynamic meaning.

We can drop that discussion as it brings too many focuses into one thread, but what I mean is equilibration between interacting systems whose actions are ruled by expectations following from expected laws. When these two systems have different expectations there is a conflict.

Unger, who gives the other talk, works in social theory, and there the analogies are clear. Social laws are negotiated laws. You can break them, but at a price. Also, the laws are always evolving, but not in a way that is predictable to any player. Whether someone "a god or so" outside the game could "in principle" predict it is in fact irrelevant to the game itself.

/Fredrik
 
  • #255
Fra said:
My point is that not all changes are decidable. You assume that all changes are predictable in the deductive sense, and thus can be expected. I argue that the physical limits of encoding and computing expectations make this impossible.
Possibly. As I argued previously, I strongly suspect that whatever fundamental theory there is, that fundamental theory is likely to be computable. If the Church-Turing thesis is correct, then setting up a system and later measuring the result is a form of computation identical to Turing computation, which would mean that the fundamental theory must be computable in the Turing sense. From this, if we had the fundamental theory, and we had a complete description of the system and everything it interacts with, then, given sufficient computer power, we could compute how the system changes in time.

In this sense, all changes would be perfectly predictable and decidable. However, in practice we could never determine the initial state of the system in question perfectly, so there would always be room for error.
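A toy illustration of that last point (a minimal Python sketch; the logistic map here is only a hypothetical stand-in for a computable law, not actual physics): the rule itself is computable step by step, yet an imperfectly measured initial state still ruins the long-run prediction.

def evolve(x, steps, r=3.9):
    # iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 3.9
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

exact = evolve(0.200000, 50)
perturbed = evolve(0.200001, 50)  # tiny error in the measured initial condition
print(exact, perturbed)  # after 50 steps the two trajectories have long since diverged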
 
  • #256
Chalnoth said:
In this sense, all changes would be perfectly predictable and decidable. However, in practice we could never determine the initial state of the system in question perfectly, so there would always be room for error.

You describe here the current scheme of physics, which Smolin in that talk referred to as "the Newtonian scheme". That doesn't mean it's classical, because even QM and GR adhere to this scheme.

The scheme is: initial or boundary conditions + deductive system => predictions. ALL uncertainty is relegated to the initial conditions. What I suggest is that we generally have:

initial conditions + inference system (which is not deductive but inductive) -> expectation. And the inference system itself also evolves, just like a learning algorithm. Therefore there is uncertainty not only in the premises, but in the inferences themselves (the deductive system).

So I do not accept, or even find it plausible, that the physical processes can be abstracted as perfect deductions, where all uncertainty is relegated to smearing into a predetermined large state space. This SCHEME is exactly what I think is not right, and it's this scheme that Smolin also attacks in his talk.

I also think the Turing definition of computable isn't the best for physics. I somehow like the computational analogy too, but computational efficiency is important as well. What is computable given infinite time and infinite resources seems like a not very useful classification.

I see a few areas where more work is needed...

1. To try to see what physical constraints on the set of possible actions we can infer from considering only algorithms that are computable with given efficiency and resources, and to consider how these constraints SCALE with the same.

2. To try to see how the encoding structure of an algorithm is revised in the light of evidence that suggests its expectations are off. Sometimes a state change isn't possible; sometimes the state space itself needs to deform as well, if there is no consistent revision possible within the given "axiomatic system". But to just always keep expanding it, like a Gödel expansion, also doesn't work, because this entire process is bound to be constrained by current resources. So adding complexity requires us to remove some complexity elsewhere, unless we manage to increase the total complexity. I think this relates to the generation of mass.

3. Essentially we are looking for how to abstract the optimal learning algorithm. But of course, by the same logic, no such thing is static, as it also keeps evolving. Here minor inconsistencies are just potential for improvement and development, and somehow I expect that the inconsistencies even direct the deformation required, so as to define some arrow of time. The arrow of time, or computation, is possibly such as to always decrease and resolve inconsistencies. But this is a dynamical thing; at any time, there are bound to exist inconsistencies.

/Fredrik
 
  • #257
That frame of mind is useful for nearly all of physics (nearly all of science, actually, because all we have today are effective theories). It is not, however, useful when considering a theory of everything. If we can narrow down possible theories of everything through demanding self-consistency (and perhaps computability) to the point that we can definitively determine which theory applies to our reality, then we can genuinely consider what you call the Newtonian scheme.
 
  • #258
Chalnoth said:
It is not, however, useful when considering a theory of everything

I think it remains to be seen which scheme scores :)

Chalnoth said:
to the point that we can definitively determine which theory applies to our reality

It's just that I do not see that this will ever happen. I don't think it's possible even in principle. At best, we can find an EXPECTATION that we THINK is reality, and as long as our expectations are consistently met, it's an effective model and corresponds to a kind of equilibrium as I see it.

But anyway, I don't think the major quest is to characterize the utopia here; it's to try to find a rational way forward. At least my understanding is that applying the Newtonian scheme to a "TOE" (I think we all agree what the TOE is here: unification of all KNOWN interactions) consistently leads to absurdly large landscapes of various kinds. In an evolving model, the state space (even the theory landscape) is evolving, and is never larger than necessary for flexibility. Too much flexibility leads to detrimental responsiveness and too much complexity.

/Fredrik
 
  • #259
Did you check Smolin's and Unger's arguments against this "Newtonian scheme"?

I personally don't think their arguments are the best of their kind, but at least the talks contain some good grains, they're accessible online, and they're worth listening to.

Like I mentioned before, I see two main objections to this scheme:

1. It makes sense only if you have unlimited computational resources and computing time (something that clearly is NOT a sensible premise IMHO; it may do for philosophical or logic papers, but not for physics).

2. Even given infinite computation time, the result will be infinitely complex, and there would be no way to physically encode this scheme. So the scheme is bound to be truncated. This means that the "optimal algorithm" itself gets truncated, and then it's no longer necessarily optimal! Since the optimization now has the constraints of finite complexity of the result and a certain efficiency of computation - this is exactly why we need to understand how "optimal inference" needs to be "scaled" between different observers. This will not be a simple deterministic scaling, since parts of it contain negotiations and a time dimension. Due to inertia of opinions, negotiations also take time (processing time). I think we need both decidable expectations and Darwin-style evolution components to understand this.

There is one thing I think is important that Smolin does not even mention. Smolin mostly refers to the obvious fact that physical law as known by HUMAN SCIENCE has evolved. This is true, but it is quite obvious. I think the interesting perspective is when you consider how interactions evolve from the point of view of a general system. This has impacts on unification, emergence, and the breaking of symmetry in physics. This is where Smolin's arguments are the weakest... but then this is new thinking... I think there is a lot more development to expect here.

All this is what I associated with your saying that the wave function collapse is nonsense. I think it's because your analysis seems to work in the Newtonian mode. I still insist that there are other quite promising (of course I think they are far more promising :) ways to view that.

/Fredrik
 
  • #260
Fra said:
1. It makes sense only if you have unlimited computational resources and computing time (something that clearly is NOT a sensible premise IMHO; it may do for philosophical or logic papers, but not for physics).
Nah. It just means that the fundamental theory (whatever it may be) is unlikely to be useful for doing most calculations. Just to name an example, we still routinely use Newtonian physics even though we know it's wrong.

A more important consequence of discovering a fundamental theory would be deriving more general results from it, such as an effective theory of quantum gravity, or an effective theory of quantum electrodynamics that doesn't have the infinities of the current theory.

Fra said:
2. Even given infinite computation time, the result will be infinitely complex, and there would be no way to physically encode this scheme.
Very complex, possibly. Infinitely complex, certainly not. Unless you are actually talking about trying to do exact calculations with the wavefunction of the entire universe. But that is a fool's errand.

Fra said:
All this is what I associated with your saying that the wave function collapse is nonsense. I think it's because your analysis seems to work in the Newtonian mode. I still insist that there are other quite promising (of course I think they are far more promising :) ways to view that.
Huh? If your framework doesn't count a failure to describe a certain physical behavior as a strike against a physical theory when we have a competing theory that fully explains the physical behavior in question, then your framework is worth about as much as used toilet paper.
 
  • #261
Chalnoth said:
Huh? If your framework doesn't count a failure to describe a certain physical behavior as a strike against a physical theory when we have a competing theory that fully explains the physical behavior in question, then your framework is worth about as much as used toilet paper.

The collapse that is removed by decoherence-style approaches is simply that you consider a new LARGER system (and a DIFFERENT, larger observer) that consists of the original system + observer. I know of Zurek's papers etc. Zurek has some very good views, and that perspective is PART of the truth. But it is not enough.

The expected evolution of that system has no collapse, sure. This does not contradict that there are collapses in other views.

So if this is what you refer to as the solution, it's not a solution to the original problem. OTOH, I don't think the original "problem" IS a problem.

In particular, this prescription of considering a new larger system that incorporates the observer is subject to the issues we just discussed. It indeed does work! But only as long as we constrain ourselves to relatively small subsystems (where small means low complexity relative to the observing system).

/Fredrik
 
  • #262
Fra said:
The collapse that is removed by decoherence-style approaches is simply that you consider a new LARGER system (and a DIFFERENT, larger observer) that consists of the original system + observer. I know of Zurek's papers etc. Zurek has some very good views, and that perspective is PART of the truth. But it is not enough.

The expected evolution of that system has no collapse, sure. This does not contradict that there are collapses in other views.

So if this is what you refer to as the solution, it's not a solution to the original problem. OTOH, I don't think the original "problem" IS a problem.

In particular, this prescription of considering a new larger system that incorporates the observer is subject to the issues we just discussed. It indeed does work! But only as long as we constrain ourselves to relatively small subsystems (where small means low complexity relative to the observing system).

/Fredrik
I have no idea what you're trying to say.
 
  • #263
I think what we should exploit here is the fact that, from the inside perspective, there ARE collapses, and this has observable consequences for the behaviour of matter. I.e. it has observable consequences for other observers; this understanding will increase the predictive power, not decrease it. We're not giving anything up, as I see it, by considering the collapse, just acknowledging how things probably work, and also acknowledging that we learn ongoingly.

/Fredrik
 
  • #264
Chalnoth said:
I have no idea what you're trying to say.

Hmm... ok, maybe I jumped to conclusions. I was basing my response on what I thought you would say.

So maybe we take a step back. What did you refer to with

"when we have a competing theory that fully explains the physical behavior in question"

I based my response on what I thought you meant, but maybe I was mistaken.

/Fredrik
 
  • #265
Have we talked about whether completeness equates to determinism? Is the TOE deterministic?
 
  • #266
We don't have a TOE, so who knows? Quantum mechanics isn't deterministic. Radioactive decay is, as far as we know, purely random. Even Newtonian mechanics isn't deterministic.
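To illustrate what "purely random" means here, a minimal sketch (the decay probability is illustrative, not for any particular isotope): each nucleus decays with a fixed probability per time step, independent of its history, so the statistics are predictable while any individual decay is not.

import random

def surviving_nuclei(n, p_decay, steps):
    # memoryless decay: every nucleus, every step, decays with probability p_decay
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if random.random() < p_decay)
    return n

print(surviving_nuclei(100000, 0.01, 100))  # close to 100000 * 0.99**100 on average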
 
  • #267
D H said:
We don't have a TOE, so who knows? Quantum mechanics isn't deterministic. Radioactive decay is, as far as we know, purely random. Even Newtonian mechanics isn't deterministic.

So the question is whether indeterminism is proof of incompleteness.
 
  • #268
friend said:
So the question is whether indeterminism is proof of incompleteness.
Not at all, for many reasons. Since we don't have a TOE, it is a bit silly to ask whether a TOE is deterministic. Who knows -- it might come up with a deterministic (in the sense of quantum determinism) explanation for radioactive decay.

Secondly, lack of determinism does not mean "incomplete". They are completely separate concepts.

Thirdly, physicists do not care whether a TOE is deterministic or complete (complete in the sense of Gödel's incompleteness theorems). You continue to misrepresent what a TOE would be. A TOE would describe all interactions. Period. Nobody claims it will describe all outcomes.
 
  • #269
D H said:
Not at all, for many reasons. Since we don't have a TOE, it is a bit silly to ask whether a TOE is deterministic. Who knows -- it might come up with a deterministic (in the sense of quantum determinism) explanation for radioactive decay.
It's not silly to ask if a TOE is deterministic. It may be that this is one of the defining characteristics of the TOE so that this is how we know when we have achieved it.

D H said:
Secondly, lack of determinism does not mean "incomplete". They are completely separate concepts.
Do you expect me to take your word for it? Or do you have some reasoning for this statement? At this point I am not at all sure that determinism does not equate to completeness.

D H said:
Thirdly, physicists do not care whether a TOE is deterministic or complete (complete in the sense of Gödel's incompleteness theorems). You continue to misrepresent what a TOE would be. A TOE would describe all interactions. Period. Nobody claims it will describe all outcomes.
At this point I'm not representing anything. I'm only asking questions. And just exactly how would we know that we have described "ALL" interactions?
 
  • #270
Well, I think determinism is more likely linked to computability than anything else.
 
  • #271
Fra said:
Hmm... ok, maybe I jumped to conclusions. I was basing my response on what I thought you would say.

So maybe we take a step back. What did you refer to with

"when we have a competing theory that fully explains the physical behavior in question"

I based my response on what I thought you meant, but maybe I was mistaken.

/Fredrik
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.
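The "reduces to the Copenhagen interpretation in the limit of complete decoherence" claim can be shown schematically with a two-state density matrix (a sketch only; gamma is an illustrative damping parameter, not derived from any actual dynamics): as the off-diagonal interference terms are suppressed, what remains is exactly the classical mixture with the Born-rule weights.

import numpy as np

def decohered_rho(alpha, beta, gamma):
    # density matrix of the state alpha|0> + beta|1>, with the off-diagonal
    # (interference) terms damped by a factor (1 - gamma), gamma in [0, 1]
    rho = np.array([[abs(alpha)**2, alpha * np.conj(beta)],
                    [np.conj(alpha) * beta, abs(beta)**2]])
    rho[0, 1] *= (1 - gamma)
    rho[1, 0] *= (1 - gamma)
    return rho

a = b = 1 / np.sqrt(2)
print(decohered_rho(a, b, 0.0))  # no decoherence: full interference terms present
print(decohered_rho(a, b, 1.0))  # complete decoherence: diagonal mixture with weights |a|^2, |b|^2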
 
  • #272
Chalnoth said:
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.

Ok, that was exactly what I thought you meant.

So clearly we disagree in our views on this. I certainly understand decoherence, and it is partly right; I mean there is nothing more wrong about decoherence than anything else, but it does not answer the same question to which the collapse is the answer. This is what I tried to say above.

Maybe I might try to explain again, but OTOH, I am not sure if it helps if we simply disagree.

Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse. But that construction doesn't help if the actual observer is part of the system, which it is. So I am convinced that those who are satisfied with decoherence really do not see the same problem as I do.

I'm not saying decoherence is baloney; it's obviously not. The decoherence mechanism as such is correct, but it is posing a different question, yet pretends to answer the original one, which it doesn't.

/Fredrik
 
  • #273
Fra said:
Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse.
The environment is certainly not an infinite information sink. But it is enough of one that it might as well be infinite for the majority of situations, as even for moderately-sized interacting systems interference times rapidly grow beyond the age of the universe.

Fra said:
But that construction doesn't help if the actual observer is part of the system,
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.
 
  • #274
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Exactly. But when you allow yourself to put the observer in, then you behave exactly like anyone using the Copenhagen interpretation. In other words, decoherence better hides the problem you saw with CI, but does not solve it.
 
  • #275
Lievo said:
Exactly. But when you allow yourself to put the observer in, then you behave exactly like anyone using the Copenhagen interpretation. In other words, decoherence better hides the problem you saw with CI, but does not solve it.
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.
 
  • #276
Chalnoth said:
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.

Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
 
  • #277
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Chalnoth said:
In the many worlds interpretation, the observer is completely irrelevant.
*cough* *cough*

Chalnoth said:
In the Copenhagen interpretation, the result is ambiguous, because (...)
In the many-worlds interpretation, the result is ambiguous as well (see http://www-physics.lbl.gov/~stapp/bp.PDF)... and basically this is the very same problem (although now better disguised, I have to admit).

Anyway, my favorite (to date) is Rovelli's interpretation, so I'll stop arguing this.
 
  • #278
PAllen said:
Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
See here:
http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.2718v1.pdf
 
  • #279
The reason I say the observer is irrelevant in the MWI is because decoherence occurs as a result of arbitrary interactions, not just observer interactions. This affects what we observe because, in order to perform an observation, we have to physically interact with the system we are observing.
 
  • #280
Chalnoth said:
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.

I think the problem is that you do not take the encoding of the theory as seriously as I do. Your explanation requires more complexity than the original observer has control of. So your answer, or new theory, does not live in the original observer's domain. Therefore it does not address the question.

I hear what you say about decoherence. I don't argue with what decoherence does; I'm trying to say that I think you are missing the point I'm trying to make. Or that you simply don't see the point in my point, so to speak, but it's the same thing.

When you consider observer+system, then the environment, or a big part of it, IS the observer, as it monitors O+S. So the knowledge about O+S is ENCODED in the environment. Then of course, with respect to this environment, or to other observers that somehow have arbitrary access to the entire environment's information, the observer-system interaction can be described without collapses. But you have more than one observer. Clearly there is nothing unique about subsystems. Any subsystem is an observer, but whenever you compute an expectation and encode a theory, a single observer is used. Questions posed by this observer cannot be answered by a different observer. But yes, the different observer can "explain" why the first observer asks this question and how it perceives the answer.

The expectations observer B has of observer A interacting with system X are obviously different from observer A's intrinsic expectations. All I am saying is that the expectations of observer B (corresponding to your decoherence view) do not influence the action of observer A unless B is interacting with A; and then again you have a DIFFERENT collapse, not the original one.

/Fredrik
 
  • #281
Fra said:
I think the problem is that you do not take the encoding of the theory as seriously as I do. Your explanation requires more complexity than the original observer has control of. So your answer, or new theory, does not live in the original observer's domain. Therefore it does not address the question.
In what sense? Nobody tries to consider a set of initial conditions in the MWI that includes the full wavefunction. But as long as the interactions between our world and the rest of the wavefunction are negligible, which they have to be to conform with observation, it won't affect the results anyway.

Fra said:
but whenever you compute an expectation and encode a theory, a single observer is used. Questions posed by this observer cannot be answered by a different observer. But yes, the different observer can "explain" why the first observer asks this question and how it perceives the answer.
I'm really not understanding your objection. This is precisely why the appearance of collapse forces us to only consider the probability distribution of results, as decoherence ensures that no single observer has access to the entire wave function.

Fra said:
The expectations observer B has of observer A interacting with system X are obviously different from observer A's intrinsic expectations.
What? That's silly. The MWI reduces to the CI in the limit of complex observers. It can't predict different expectations for different observers, because CI doesn't.
 
  • #284
Chalnoth, the discussion has become confusing. Before we go on, I'd like to just restate that this detour started in post #232, where I mainly objected to your suggestion that we look for a model/theory that unravels the true nature of reality in an observer-independent way. I personally think that may be a doubtful guide, because, by construction, models are always inherently observer dependent. So I think the mental image that we can ever get an external objective picture of some reality is wrong. And using this goal as a constraint may be misguiding.

This was my main point.

My main point wasn't to debate CI vs. MWI, because my personal view on this is that the problem is not just about interpretations; it's deeper. I think we need a new reconstruction of measurement theory. So pure interpretation of the current formalism is a moot discussion for me.

But I defend some traits of CI, as I think the notion of information updates, the existence of both decidable and undecidable changes, and the logic of forming intrinsic expectations as a basis for action are essential - and will remain key points even in a reworked quantum theory.

The obvious points where CI is bad are where QM itself is bad. No other interpretation cures that either. So I'm not discussing interpretations; I'm discussing which view is the "best" in order to improve things. Here I think MWI is trying to find an external view of the observer, in a way that explains it away - in a way that violates what I consider to be the principles of intrinsic inference.

I see QM as an "extrinsic information theory", where extrinsic refers either to a "classical observer" or an "infinitely complex QM observer". This is why it makes sense only for subsystems.

What I see is a reformulation where the theory is intrinsically formulated. I.e. a theory where all elements of the theory are in principle inferable by a real, finite observer. Some overall predictions of such a programme would be that the notion of theory is an evolving one (there IS no eternal objective realist theory) and that the interaction of physical systems is even invariant with respect to such a "true theory". The systems' actions are implied by the effective theories.

There are a lot of open questions in this, and I don't have the answers. But I feel quite confident about the direction.

/Fredrik
 
  • #285
Chalnoth said:
In what sense? Nobody tries to consider a set of initial conditions in the MWI that includes the full wavefunction. But as long as the interactions between our world and the rest of the wavefunction are negligible, which they have to be to conform with observation, it won't affect the results anyway.

I'm really not understanding your objection. This is precisely why the appearance of collapse forces us to only consider the probability distribution of results, as decoherence ensures that no single observer has access to the entire wave function.

The perspective I have is that there is one natural decomposition: the observer itself, which is defined by what the observer knows, AND the remainder of the universe. But the remainder of the universe does NOT mean the entire universe as we know it in the realist sense. It means the remainder of the encodable part of the observable universe. Which means that the remainder of the universe for a proton is probably very small! How small I cannot say at this point, but probably the expected action of a proton system at any instant of time is invariant with respect to anything happening outside the laboratory frame. So there is indeed a built-in cutoff here; the cutoff is due to the fact that it's impossible for a proton to encode information about the entire universe.

So what you admit is not possible, and seem to resolve by common sense and what's "neglectable" etc., I think should be taken seriously and be accounted for in OUR human theory.

Ie. humans "theory" of say particle physics, are an external one, relative to atomic world, this is WHY the current framework did work so well, but there are missing pieces and I think the next revolution may require that we try to understand what "scaling" the theory down to subatomic observers actually does? Most certainly we will see that the interactions scale out in a way that automatically gives us unification.

But the reverse perspective is what I think is more fruitful: to start with a basal low-complexity observer, try to understand how the inference system grows as we add complexity, and see how the unified original interactions split off into the known ones.

In order to do this, we cannot STICK to the external perspective (i.e. a classical observer, infinitely complex observers, or just infinite-horizon scattering-matrix descriptions of clean in/out states); we need to get into the domain where the setup times are so long that expectations based on uncertain theories need to be used. This is a more chaotic domain, and the expectations are interrupted before the output is collected.

Chalnoth said:
What? That's silly. The MWI reduces to the CI in the limit of complex observers. It can't predict different expectations for different observers, because CI doesn't.

CI and standard QM are not my measuring stick here. I think the problem is QM, and my only point was that the notion of collapse, as being an "information update", is an essential ingredient in any theory of inference. There is no way to explain this away. Also, I simply fail to understand what the problem is with this.

An information update is not a problem, it just means that the expectation is updated.

The problem I have is that the action forms are not the result of inference in the current models; they are pulled from quantizing classical models. This is itself very non-intrinsic. I think the information update, and actions based on expectations, are key blocks for constructing full expectations of actions from pure interaction histories.

Edit: Merry Xmas to everyone! :)

/Fredrik
 
  • #286
To clarify what I mean, as this is a key point for me.

Fra said:
Which means that the remainder of the universe for a proton is probably very small! How small I cannot say at this point, but probably the expected action of a proton system at any instant of time is invariant with respect to anything happening outside the laboratory frame.

I do not mean this in the obvious approximate sense, because that is obvious to everyone.

I mean that I think that the complexity of a proton (one problem is how to relate complexity to energy and mass, but certainly I imply here that high confined mass ~ high complexity) is what defines the PHYSICAL cutoff. This number would have to enter somewhere in the expectation computation.

The common methods of cutoffs are purely ad hoc or arbitrary. I think there is a physical motivation for this cutoff that we can understand once we take seriously the encoding of the theory and information in matter.

So I think this cutoff is exact; it's not just a FAPP-type cutoff.

/Fredrik
 
  • #287
George Jones said:
I don't know if anyone in this thread mentioned

http://arxiv.org/abs/physics/0612253.
I think the conclusion from Godel's theorem is pretty simple and not at all what most people imagine it to be. I don't think Hilbert's program is destroyed but only that classical logic fails. For a physicist this should come as no surprise because we already know this to be the case from quantum mechanics. So, Godel's theorem is relevant for a TOE in the sense that the latter has to be defined in a non-classical logic.

Careful
 
  • #288
George Jones said:
I don't know if anyone in this thread mentioned

http://arxiv.org/abs/physics/0612253.

This article reads in part as follows:

"The symbols are 0, 'zero', S, 'successor of', +, X, and =. Hence, the number two is the successor of the successor of zero, written as the term SS0, and two and plus two equals four is expressed as SS0 + SS0 = SSSS0."

These are the symbols used in the proof of Godel's Incompleteness Theorem (GIT). My question is: does GIT work when a continuum is involved? At first glance it would seem not, because then any number (other than zero) is constructed with an infinite number of "successor" steps, each of infinitesimal difference. Thus every number is expressed with an infinite number of S's, so that you cannot tell one number apart from another.
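The quoted notation is easy to mechanize, which makes the discreteness of the successor steps vivid (a minimal sketch; the plain integers here just count applications of S):

def numeral(n):
    # render n in successor notation, e.g. 2 -> 'SS0'
    return "S" * n + "0"

def add(m, n):
    # Peano addition: m + 0 = m, and m + S(k) = S(m + k)
    return m if n == 0 else add(m, n - 1) + 1

print(numeral(2), "+", numeral(2), "=", numeral(add(2, 2)))  # SS0 + SS0 = SSSS0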
 
  • #289
friend said:
These are the symbols used in the proof of Godel's Incompleteness Theorem (GIT). My question is: does GIT work when a continuum is involved? At first glance it would seem not, because then any two numbers have an infinite number of "successor" steps between them, where each step is an infinitesimal difference. Thus every number is expressed with an infinite number of S's, so that you cannot tell one number apart from another.
Godel's theorem is merely a mathematical masturbation of the liar's paradox which is captured by a statement of self reference like ''this statement is false'' (and likewise ''this statement is true'' leads to problems). The mathematical generalization to Turing machines and so on is just that but the deeper underlying message is that you can construct sentences for which it is impossible to determine whether they are false or true. This does not depend upon the kind of technicalities which you are suggesting.

Careful
 
  • #290
To get back to the poster's original question: another point, this time from the viewpoint of the practice of pure mathematics.

Gödel's incompleteness theorems (two of them, note) show that a sufficiently strong consistent system is incomplete and cannot prove its own consistency. But one would not get very far if one always moaned that arithmetic has this limitation, so in the practice of mathematics one proves that one's theory is equi-consistent with Peano Arithmetic (PA), and since PA has served us well, one simply continues on one's way with the assumption that PA, and hence anything equi-consistent with it, is consistent. Problems with a theory are expected to come not from PA, which hasn't yielded any contradictions so far, but from the extensions of PA, so this is where mathematicians concentrate their efforts. Also note that, whereas PA cannot prove itself consistent, it can be proven consistent by another theory, call it algebra. True, one then has the problem of algebra not being able to prove itself consistent, but then this can be proven consistent with another system, and so forth. Whereas one can never prove it absolutely, the higher up one goes, the more confidence one has that, if a contradiction is there somewhere, it is pretty remote.
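In the usual notation, the compact statement being relied on here is:

$$\text{if } T \supseteq \mathrm{PA} \text{ is consistent and recursively axiomatizable, then } T \nvdash \mathrm{Con}(T), \quad \text{while, e.g., } \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}).$$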

Also, note that Hawking's statement was referring to incompleteness (First Incompleteness Theorem) rather than provable consistency (Second Incompleteness Theorem). That a system is self-consistent is of course a concern for a theory, but any inconsistency that is not easily spotted in the formalism will usually pop out eventually in experiment. Nature is often much better at spotting inconsistencies than we are. As far as completeness: so the system is not complete? So what? More's the fun. After all, that was the original claim of the EPR thought experiment: that the quantum description of reality is not complete.

Finally, Gödel's theorems, although they are about PA, have been extended to systems in which a continuum exists. For example, ZFC, which can produce a statement about the existence of the power set of the natural numbers, and so forth.
 
  • #291
Careful said:
Godel's theorem is merely a mathematical masturbation of the liar's paradox which is captured by a statement of self reference like ''this statement is false'' (and likewise ''this statement is true'' leads to problems). The mathematical generalization to Turing machines and so on is just that but the deeper underlying message is that you can construct sentences for which it is impossible to determine whether they are false or true. This does not depend upon the kind of technicalities which you are suggesting.

Careful

Just an observation that many mathematicians disagree with this minimizing of Godel. Further, there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteness: the Goldbach conjecture, and P=?NP.
 
  • #292
Careful said:
Godel's theorem is merely a mathematical masturbation
It will soon be 2011, so let me wish you that your career will include at least one thing as important as any of Godel's theorems. In the meantime, happy new year! :smile:
 
  • #293
...there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteness: the Goldbach conjecture, and P=?NP.

The Goldbach conjecture remains a tough egg, but many mathematicians are waiting to see if the latest attempt at proving P<>NP, by Vinay Deolalikar of HP Research Labs in Palo Alto, pans out: http://www.scribd.com/doc/35539144/pnp12pt. There are always attempts, but this one looks promising. (Unless I have missed something in the ongoing peer review.)

And, as a couple of recent posts emphasized, Gödel's theorems are extremely important. But there are some who, hearing of their importance, try to apply them where they should not be applied. A misinterpretation that many people fall into is the Lucas-Penrose fallacy. (Penrose has sold a lot of books based on his misinterpretation, even though it was torn apart by the logician Solomon Feferman.) As well, some people get ontological uncertainty (as in the Heisenberg Uncertainty Principle) and epistemological uncertainty (as in Gödel's Incompleteness Theorems) mixed up.
 
  • #294
PAllen said:
Just an observation that many mathematicians disagree with this minimizing of Godel. Further, there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteness: the Goldbach conjecture, and P=?NP.
I don't minimize Godel. If I recall my history right, then Godel was mainly interested in showing the inadequacy of logic and only wanted to show that something as mundane as the liar's paradox has severe consequences for the foundations of mathematics. Godel also was firmly convinced that human beings were not computers but that things like creativity, intuition and so on were grounded in a higher kind of ''logic''. It is Turing who took the opposite side of the debate and who mainly stressed the computational aspect of Godel's work and he firmly believed that humans were simply sophisticated computers (which we clearly are not).

So, it might be that I take Godel more seriously than you do; I actually look beyond classical logic and search for a more general kind of proof method. Mostly what people do is take the relativist attitude and regard the axiomatic approach as fundamentally incomplete, but prove consistency of one system relative to a bigger one. I think this is the wrong approach for a TOE, since it is clearly so that logic is not only relational but also relies upon self-reference.

Careful
 
  • #295
I don't think that the facts of reality are isomorphic to mathematical axioms. Which particle is "number one" and which particle is "number two", etc? You can always renumber them differently without affecting their existence. Which axiom applies to one event but not another? And if you can't map numbers or axioms to particles or phenomena, then Gödel's Incompleteness Theorem cannot be applied, right?
 
  • #296
nomadreid said:
As well, some people get ontological uncertainty (as in the Heisenberg Uncertainty Principle) and epistemological uncertainty (as in Gödel's Incompleteness Theorems) mixed up.
Well, I would not say that some people mix it up. I hate using expensive words but ontology basically means how things are, what their ''reality'' is and epistemology is what we can know about them. Godel's theorem indeed says that we cannot know some things to be true even if they are true; but what I am saying is that this gap between ontology and epistemology is an artificial human construct due to a too limited definition of what knowledge is supposed to be. As a physicist, it is clear that these limitations in knowledge are induced by the way our perceptions impose a natural macroscopic logic upon us; that is, we have too limited access to ontology in order to have a complete epistemology. You could also turn it the other way and say that we use the wrong ontology, that for example the concept of a set with a definite number of elements somehow does not ''exist'', that an absolute empty set does not ''exist''. It is not because we name something in a particular way that it really ''exists'' in a deeper sense; actually, that is what quantum physics teaches us. Anyway, this is getting philosophical...

Careful
 
  • #297
friend said:
I don't think that the facts of reality are isomorphic to mathematical axioms. Which particle is "number one" and which particle is "number two", etc? You can always renumber them differently without affecting their existence. Which axiom applies to one event but not another? And if you can't map numbers or axioms to particles or phenomena, then Gödel's Incompleteness Theorem cannot be applied, right?
I have mixed feelings about this; one day I get up and tell myself that a fundamental theory of everything, if it exists, is one which resists rigorous definition within a fixed mathematical context, and that we need at least a new kind of ''mathematics'' to proceed. That is, a ''mathematics'' of genuine creation, a theory of understanding; but I severely doubt whether such a thing exists and whether it will ever be in reach of human activity. On other days, I am more optimistic, but certainly these considerations do not apply to something as modest as a theory of quantum gravity.

Careful
 
  • #298
Careful said:
Godel also was firmly convinced that human beings were not computers but that things like creativity, intuition and so on were grounded in a higher kind of ''logic''. It is Turing who took the opposite side of the debate and who mainly stressed the computational aspect of Godel's work and he firmly believed that humans were simply sophisticated computers (which we clearly are not).
Some people would disagree with your parenthetical, but that's a digression...


No matter how creative a human is, or how strong his intuition, a human will never create a proof that a Turing machine is incapable of discovering.

No matter how creative a human is, or how strong his intuition, a human will never create a list of postulates from which he can prove interesting things that a Turing machine is incapable of discovering.

So, it might be that I take Godel more seriously than you do; I actually look beyond classical logic and search for a more general kind of proof method.
Whatever "proof method" you consider, if you can validate a purported proof, then a Turing machine is capable of coming up with it.



The computational aspect is a rather obvious thing -- if there is a theorem that a human can discover a proof for through his intuition and cleverness, then a Turing machine is also capable of finding it by doing nothing more intelligent than brute force exhausting through all possible combinations of symbols, and checking each one to see if it's a proof of the theorem or not.
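As a sketch of that brute-force procedure (the is_proof_of checker below is a hypothetical placeholder for the formal system's mechanical proof-verification step, which is the one part that must be supplied):

from itertools import count, product

ALPHABET = "01"  # any finite alphabet of the formal system will do

def is_proof_of(candidate: str, theorem: str) -> bool:
    # placeholder: a decidable check that `candidate` encodes a valid proof of `theorem`
    raise NotImplementedError("supply the formal system's proof checker")

def search_proof(theorem: str) -> str:
    # enumerate every finite symbol string in order of length and test each one;
    # this halts exactly when some proof of `theorem` exists (semi-decidability)
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if is_proof_of(candidate, theorem):
                return candidate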


But TBH, aside from the sort of silly vague ideas that people like to philosophize about that are only loosely related to Gödel's incompleteness theorems (if at all), I've mostly seen them applied as impossibility proofs in real mathematics and computer science. The first time I was really introduced to the subject was in a theory of computation class, in a proof that there does not exist an algorithm to enumerate the true sentences in any model of integer arithmetic. Of course, the whole notion was old hat to me at the time, since we had already spent time on simpler situations, such as the halting problem.
 
  • #299
Hurkyl said:
No matter how creative a human is, or how strong his intuition, a human will never create a proof that a Turing machine is incapable of discovering.

No matter how creative a human is, or how strong his intuition, a human will never create a list of postulates from which he can prove interesting things that a Turing machine is incapable of discovering.
That's not the point; I have just commented on that on my personal page. I do not feel any compelling need to ''disprove'' strong AI (whatever that means in a generalized logic), nor do I need to ''prove'' my position. All I am saying is that my strategy is more plausible than yours: a machine will never discover anything unless the seeds for this discovery are already ingrained in its algorithm. So you will have to systematically add new elements to your algorithm for the latter to be reasonably capable of doing what you already know exists. The point is that I conjecture that machines made by men will never ever create something which gets even close to what human imagination can achieve; that is sufficiently good for me not to adhere to your position.

Hurkyl said:
Whatever "proof method" you consider, if you can validate a purported proof, then a Turing machine is capable of coming up with it.
I don't know; can you prove this? Penrose gives plausible arguments for why this could be doubted. Moreover, and this is the point, the machine will never ever cook up the proof method by itself.

Hurkyl said:
The computational aspect is a rather obvious thing -- if there is a theorem that a human can discover a proof for through his intuition and cleverness, then a Turing machine is also capable of finding it by doing nothing more intelligent than brute force exhausting through all possible combinations of symbols, and checking each one to see if it's a proof of the theorem or not.
Again, this may be false, there is no proof of that. See my previous comment.

Hurkyl said:
But TBH, aside from the sort of silly vague ideas that people like to philosophize about that are only loosely related to Gödel's incompleteness theorems (if at all), I've mostly seen them applied as impossibility proofs in real mathematics and computer science. The first time I was really introduced to the subject was in a theory of computation class, in a proof that there does not exist an algorithm to enumerate the true sentences in any model of integer arithmetic.
This is not how Godel thought about it, surely you do not want to imply that he was silly. BTW: I also learned the Turing version first and read the real history about it only many years later.
 
  • #300
Careful said:
This is not how Godel thought about it, surely you do not want to imply that he was silly. BTW: I also learned the Turing version first and read the real history about it only many years later.
Actually, I do -- as I recall my history, Gödel had some rather... odd ideas.

But, in any case, history is as history does. Just because Gödel, Einstein, or anyone else is a prominent historical figure in their field does not mean their opinions are right, and that one should dismiss decades of progress simply because the subsequent work (appears to) disagree with the historical figure's point of view.


Moreover, and this is the point, the machine will never ever cook up the proof method by itself.
You sure?

The Turing machine can certainly enumerate all machines -- in particular, it will eventually cook up any machine that implements said proof method.

And a human who is considering proof methods has to have a way to decide which ones are good. If there is any algorithm for making the decision on whether or not a particular proof method is viable, then the aforementioned Turing machine will not only find it, but say "hey, this is a good one!"


I think you underestimate just how much force is available to brute force when there aren't practical constraints. :smile:
 
