Impact of Gödel's incompleteness theorems on a TOE

  • Thread starter: PhysDrew
  • Tags: Impact, TOE
  • #271
Fra said:
Hmm... ok, maybe I jumped to conclusions. I was basing my response on what I thought you would say.

So maybe we take a step back. What did you refer to with

"when we have a competing theory that fully explains the physical behavior in question"

I based my response on what I thought you meant, but maybe I was mistaken.

/Fredrik
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.
 
  • #272
Chalnoth said:
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.

Ok, that was exactly what I thought you meant.

So clearly we disagree in our views on this. I certainly understand decoherence, and it is partly right; I mean there is nothing more wrong about decoherence than anything else, but it does not answer the same question to which the collapse is the answer. This is what I tried to say above.

Maybe I might try to explain again, but OTOH, I am not sure if it helps if we simply disagree.

Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse, but that construction doesn't help if the actual observer is part of the system, which it is. So I am convinced that those who are satisfied with decoherence really do not see the same problem as I do.

I'm not saying decoherence is baloney; it's obviously not. The decoherence mechanism as such is correct, but it poses a different question, yet pretends to answer the original one, which it does not.

/Fredrik
 
  • #273
Fra said:
Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse,
The environment is certainly not an infinite information sink. But it is enough of one that it might as well be infinite for the majority of situations, as even for moderately-sized interacting systems interference times rapidly grow beyond the age of the universe.

Fra said:
but that construction doesn't help, if the actual observer is part of the system,
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.
 
  • #274
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Exactly. But when you allow yourself to put the observer in, then you behave exactly as anyone using the Copenhagen interpretation. In other words, decoherence hides the problem you saw with CI better, but does not solve it.
 
  • #275
Lievo said:
Exactly. But when you allow yourself to put the observer in, then you behave exactly as anyone using the Copenhagen interpretation. In other words, decoherence hides the problem you saw with CI better, but does not solve it.
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.
 
  • #276
Chalnoth said:
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.

Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
 
  • #277
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Chalnoth said:
In the many worlds interpretation, the observer is completely irrelevant.
*cough* *cough*

Chalnoth said:
In the Copenhagen interpretation, the result is ambiguous, because (...)
In the many-worlds interpretation, the result is ambiguous as well (see http://www-physics.lbl.gov/~stapp/bp.PDF), and basically this is the very same problem (although now better disguised, I have to admit).

Anyway, my favorite (to date) is Rovelli's interpretation, so I'll stop arguing this.
 
  • #278
PAllen said:
Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
See here:
http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.2718v1.pdf
 
  • #279
The reason I say the observer is irrelevant in the MWI is because decoherence occurs as a result of arbitrary interactions, not just observer interactions. This affects what we observe because in order to perform an observation, we have to physically interact with the system we are observing.
 
  • #280
Chalnoth said:
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.

I think the problem is that you do not take the encoding of the theory as seriously as I do. Your explanation requires more complexity than the original observer has control of. So your answer, or new theory, does not live in the original observer's domain. Therefore it does not address the question.

I hear what you say about decoherence. I don't argue with what decoherence does; I'm trying to say that I think you are missing the point I'm trying to make. Or that you simply don't see the point in my point, so to speak, but it's the same thing.

When you consider observer+system, then the environment, or a big part of it, IS the observer, as it monitors O+S. So the knowledge about O+S is ENCODED in the environment. Then of course, with respect to this environment, or other observers that somehow have arbitrary access to the entire environment's information, the observer-system interaction can be described without collapses. But you have more than one observer. Clearly there is nothing unique about subsystems. Any subsystem is an observer, but whenever you compute an expectation and encode a theory, a single observer is used. Questions posed by this observer cannot be answered by a different observer. But yes, the different observer can "explain" why the first observer asks this question and how it perceives that answer.

The expectations observer B has on observer A interacting with system X are obviously different from observer A's intrinsic expectations. All I am saying is that the expectations of observer B (corresponding to your decoherence view) do not influence the action of observer A unless B is interacting with A; and then again you have a DIFFERENT collapse, not the original one.

/Fredrik
 
  • #281
Fra said:
I think the problem is that you do not take the encoding of the theory as seriously as I do. Your explanation requires more complexity than the original observer has control of. So your answer, or new theory, does not live in the original observer's domain. Therefore it does not address the question.
In what sense? Nobody tries to consider a set of initial conditions in the MWI that includes the full wavefunction. But as long as the interactions between our world and the rest of the wavefunction are negligible, which they have to be to conform with observation, it won't affect the results anyway.

Fra said:
but whenever you compute an expectation and encode a theory, a single observer is used. Questions posed by this observer cannot be answered by a different observer. But yes, the different observer can "explain" why the first observer asks this question and how it perceives that answer.
I'm really not understanding your objection. This is precisely why the appearance of collapse forces us to only consider the probability distribution of results, as decoherence ensures that no single observer has access to the entire wave function.

Fra said:
The expectations observer B has on observer A interacting with system X are obviously different from observer A's intrinsic expectations.
What? That's silly. The MWI reduces to the CI in the limit of complex observers. It can't predict different expectations for different observers, because CI doesn't.
 
  • #284
Chalnoth, the discussion has become confusing. Before we go on, I'd like to just restate that this detour started in post #232, where I mainly objected to your suggestion that we look for a model/theory that unravels the true nature of reality in an observer-independent way. I personally think that may be a doubtful guide, because by construction, models are always inherently observer-dependent. So I think the mental image that we can ever get an external objective picture of some reality is wrong. And using this goal as a constraint may be misleading.

This was my main point.

My main point wasn't to debate CI vs. MWI, because my personal view on this is that the problem is not just about interpretations; it's deeper. I think we need a new reconstruction of measurement theory. So pure interpretations of the current formalism are a moot discussion for me.

But I defend some traits of CI, as I think the points of information updates, the existence of both decidable and undecidable changes, and the logic of forming an intrinsic expectation as a basis for action are essential - and will remain key points even in a reworked quantum theory.

The obvious points where CI is bad are because QM itself is bad. No other interpretation cures it either. So I'm not discussing interpretations; I'm discussing which view is the "best" in order to improve things. Here I think MWI is trying to find an external view of the observer, in a way that explains it away - in a way that is in violation of what I consider to be the principles of intrinsic inference.

I see QM as an "extrinsic information theory", where extrinsic refers either to a "classical observer" or an "infinitely complex QM observer". This is why it makes sense only for subsystems.

What I envision is a reformulation where the theory is intrinsically formulated. I.e., a theory where all elements of the theory are in principle inferable by a real, realistic, finite observer. Some overall predictions of such a programme would be that the notion of theory is an evolving one (there IS no eternal objective realist theory) and that the interaction of physical systems is invariant with respect to such a "true theory". The systems' actions are implied by the effective theories.

There are a lot of open questions in this, and I don't have the answers. But I feel quite confident about the direction.

/Fredrik
 
  • #285
Chalnoth said:
In what sense? Nobody tries to consider a set of initial conditions in the MWI that includes the full wavefunction. But as long as the interactions between our world and the rest of the wavefunction are negligible, which they have to be to conform with observation, it won't affect the results anyway. I'm really not understanding your objection. This is precisely why the appearance of collapse forces us to only consider the probability distribution of results, as decoherence ensures that no single observer has access to the entire wave function.

The perspective I have is that there is one natural decomposition: the observer itself, which is defined by what the observer knows, AND the remainder of the universe. But the remainder of the universe does NOT mean the entire universe as we know it in the realist sense. It means the remainder of the encodable part of the observable universe. Which means that the remainder of the universe for a proton is probably very small! How small I cannot say at this point, but probably the expected action of a proton system at any instant of time is invariant with respect to anything happening outside the laboratory frame. So there is indeed a built-in cutoff here; the cutoff is due to the fact that it's impossible for a proton to encode information about the entire universe.

So what you admit is not possible, and seem to resolve by common sense and what's "neglectable" etc., I think should be taken seriously and be accounted for in OUR human theory.

I.e., humans' "theory" of, say, particle physics is an external one relative to the atomic world; this is WHY the current framework has worked so well. But there are missing pieces, and I think the next revolution may require that we try to understand what "scaling" the theory down to subatomic observers actually does. Most certainly we will see that the interactions scale out in a way that automatically gives us unification.

But the reverse perspective is what I think is more fruitful: to start with a basal low-complexity observer, try to understand how the inference system grows as we add complexity, and see how the unified original interactions split off into the known ones.

In order to do this, we cannot STICK to the external perspective (i.e. a classical observer, infinitely complex observers, or just infinite-horizon scattering-matrix descriptions of clean in/out); we need to get into the domain where the setup times are so long that expectations based on uncertain theories need to be used. This is a more chaotic domain, and the expectations are interrupted before the output is collected.

Chalnoth said:
What? That's silly. The MWI reduces to the CI in the limit of complex observers. It can't predict different expectations for different observers, because CI doesn't.

CI and standard QM are not my measuring stick here. I think the problem is QM, and my only point was that the notion of collapse, as being an "information update", is an essential ingredient in any theory of inference. There is no way to explain this away. Also, I simply fail to understand what the problem is with this.

An information update is not a problem, it just means that the expectation is updated.

The problem I have is that the action forms are not the result of inference in the current models; they are pulled from quantizing classical models. This is in itself very non-intrinsic. I think the information update, and actions based on expectations, are key blocks to construct full expectations of actions from pure interaction histories.

Edit: Merry Xmas to everyone! :)

/Fredrik
 
  • #286
To clarify what I mean, as this is a key point for me.

Fra said:
Which means that the remainder of the universe for a proton is probably very small! How small I cannot say at this point, but probably the expected action of a proton system at any instant of time is invariant with respect to anything happening outside the laboratory frame.

I do not mean this in the obvious approximate sense. Because this is obvious to everyone.

I mean that I think that the complexity of a proton (one problem is how to relate complexity to energy and mass, but certainly I imply here that high confined mass ~ high complexity) is what defines the PHYSICAL cutoff. This number would have to enter somewhere in the expectation computation.

The common methods of cutoffs are purely ad hoc, or arbitrary. I think there is a physical motivation for this cutoff that we can understand once we take seriously the encoding of the theory and information in matter.

So I think this cutoff is exact; it's not just a FAPP-type cutoff.

/Fredrik
 
  • #287
George Jones said:
I don't know if anyone in this thread mentioned

http://arxiv.org/abs/physics/0612253.
I think the conclusion from Godel's theorem is pretty simple and not at all what most people imagine it to be. I don't think Hilbert's program is destroyed but only that classical logic fails. For a physicist this should come as no surprise because we already know this to be the case from quantum mechanics. So, Godel's theorem is relevant for a TOE in the sense that the latter has to be defined in a non-classical logic.

Careful
 
  • #288
George Jones said:
I don't know if anyone in this thread mentioned

http://arxiv.org/abs/physics/0612253.

This article reads in part as follows:

"The symbols are 0, 'zero', S, 'successor of', +, ×, and =. Hence, the number two is the successor of the successor of zero, written as the term SS0, and two plus two equals four is expressed as SS0 + SS0 = SSSS0."

These are the symbols used in the proof of Godel's Incompleteness Theorem (GIT). My question is: does GIT work when a continuum is involved? At first glance it would seem not, because then any number (other than zero) is constructed with an infinite number of "successor" steps of an infinitesimal difference. Thus every number is expressed with an infinite number of S's, so that you cannot tell one number apart from another.
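To make the quoted notation concrete, here is a toy sketch of the successor encoding (my own illustration, not from the article; the helper names `numeral` and `value` are hypothetical):

```python
# Toy model of numerals in the language {0, S, +, x, =}: the natural
# number n is written as n copies of the successor symbol S before 0.

def numeral(n):
    """Render the natural number n as a successor term, e.g. 2 -> 'SS0'."""
    return "S" * n + "0"

def value(term):
    """Decode a successor term back to a Python int."""
    assert term.endswith("0") and set(term[:-1]) <= {"S"}
    return len(term) - 1

# 'Two plus two equals four' becomes SS0 + SS0 = SSSS0:
total = value(numeral(2)) + value(numeral(2))
print(numeral(2), "+", numeral(2), "=", numeral(total))  # SS0 + SS0 = SSSS0
```

The point of the question above is that this encoding assumes each numeral is reached from 0 in finitely many S-steps, which is exactly what a continuum of infinitesimal steps would break.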
 
  • #289
friend said:
These are the symbols used in the proof of Godel's Incompleteness Theorem (GIT). My question is: does GIT work when a continuum is involved? At first glance it would seem not, because then any two numbers have an infinite number of "successor" steps between them, where each step is an infinitesimal difference. Thus every number is expressed with an infinite number of S's, so that you cannot tell one number apart from another.
Godel's theorem is merely a mathematical masturbation of the liar's paradox, which is captured by a statement of self-reference like ''this statement is false'' (and likewise ''this statement is true'' leads to problems). The mathematical generalization to Turing machines and so on is just that, but the deeper underlying message is that you can construct sentences for which it is impossible to determine whether they are false or true. This does not depend upon the kind of technicalities which you are suggesting.

Careful
 
  • #290
To get back to the poster's original question: another point, this time from the viewpoint of the practice of pure mathematics.

Gödel's incompleteness theorems (two of them, note) show that a consistent formal system containing enough arithmetic is incomplete (the First) and cannot prove its own consistency (the Second). But one would not get very far if one always moaned that arithmetic has this limitation, so in the practice of mathematics one proves that one's theory is equi-consistent with Peano Arithmetic (PA), and since PA has served us well, one simply continues on one's way with the assumption that PA, and hence anything equi-consistent with it, is consistent. Problems with a theory are expected to come not from PA, which hasn't yielded any contradictions so far, but from the extensions of PA, so this is where mathematicians concentrate their efforts. Also note that, whereas PA cannot prove itself consistent, it can be proven consistent by another theory, call it algebra. True, one then has the problem of algebra not being able to prove itself consistent, but this can in turn be proven by yet another system, and so forth. Whereas one never can prove it absolutely, the higher up one goes, the more confidence one has that, if a contradiction is there somewhere, it is pretty remote.

Also, note that Hawking's statement was referring to incompleteness (First Incompleteness Theorem) rather than provable consistency (Second Incompleteness Theorem). That a system be self-consistent is of course a concern for a theory, but any inconsistency that is not easily spotted in the formalism will usually pop out eventually in experiment. Nature is often much better at spotting inconsistencies than we are. As far as completeness: so the system is not complete? So what? More's the fun. After all, that was the original claim of the EPR thought experiment: that quantum mechanics is not complete.

Finally, Gödel's theorems, although they are about PA, have been extended to systems in which a continuum exists - for example, ZFC, which can produce statements about the existence of the power set of the natural numbers, and so forth.
 
  • #291
Careful said:
Godel's theorem is merely a mathematical masturbation of the liar's paradox which is captured by a statement of self reference like ''this statement is false'' (and likewise ''this statement is true'' leads to problems). The mathematical generalization to Turing machines and so on is just that but the deeper underlying message is that you can construct sentences for which it is impossible to determine whether they are false or true. This does not depend upon the kind of technicalities which you are suggesting.

Careful

Just an observation that many mathematicians disagree with this minimizing of Godel. Further, there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteness: the Goldbach conjecture, and P=?NP.
 
  • #292
Careful said:
Godel's theorem is merely a mathematical masturbation
It will soon be 2011, so let me wish you that your career will include at least one thing as important as any of the Godel's theorems. In the mean time, happy new year! :smile:
 
  • #293
...there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteness: the Goldbach conjecture, and P=?NP.

The Goldbach conjecture remains a tough egg, but many mathematicians are waiting to see if the latest attempt at proving P<>NP, by Vinay Deolalikar of HP Research Labs in Palo Alto, pans out: http://www.scribd.com/doc/35539144/pnp12pt. There are always attempts, but this one looks promising. (Unless I have missed something in the ongoing peer review.)

And, as a couple of recent posts emphasized, Gödel's theorems are extremely important. But there are some who, hearing of their importance, try to apply them where they should not be applied. A misinterpretation that many people fall into is the Lucas-Penrose fallacy. (Penrose has sold a lot of books based on his misinterpretation, even though it was torn apart by the logician Solomon Feferman.) As well, some people get ontological uncertainty (as in the Heisenberg Uncertainty Principle) and epistemological uncertainty (as in Gödel's Incompleteness Theorems) mixed up.
 
  • #294
PAllen said:
Just an observation that many mathematician's disagree with this minimizing of Godel. Further, there are a fair number of mathematicians who think two important unsolved problems are likely examples of Godel incompleteteness: the Golbach conjecture, and P=?NP.
I don't minimize Godel. If I recall my history right, Godel was mainly interested in showing the inadequacy of logic and only wanted to show that something as mundane as the liar's paradox has severe consequences for the foundations of mathematics. Godel was also firmly convinced that human beings are not computers, but that things like creativity, intuition and so on are grounded in a higher kind of ''logic''. It is Turing who took the opposite side of the debate and who mainly stressed the computational aspect of Godel's work; he firmly believed that humans are simply sophisticated computers (which we clearly are not).

So, it might be that I take Godel more seriously than you do; I actually look beyond classical logic and search for a more general kind of proof method. Mostly what people do is take the relativist attitude and regard the axiomatic approach as fundamentally incomplete, but prove consistency of one system relative to a bigger one. I think this is the wrong approach for a TOE, since it is clearly so that logic is not only relational but also relies upon self-reference.

Careful
 
  • #295
I don't think that the facts of reality are isomorphic to mathematical axioms. Which particle is "number one" and which particle is "number two", etc.? You can always renumber them differently without affecting their existence. Which axiom applies to one event but not another? And if you can't map numbers or axioms to particles or phenomena, then Gödel's Incompleteness Theorem cannot be applied, right?
 
  • #296
nomadreid said:
As well, some people get ontological uncertainty (as in the Heisenberg Uncertainty Principle) and epistemological uncertainty (as in Gödel's Incompleteness Theorems) mixed up.
Well, I would not say that some people mix it up. I hate using expensive words but ontology basically means how things are, what their ''reality'' is and epistemology is what we can know about them. Godel's theorem indeed says that we cannot know some things to be true even if they are true; but what I am saying is that this gap between ontology and epistemology is an artificial human construct due to a too limited definition of what knowledge is supposed to be. As a physicist, it is clear that these limitations in knowledge are induced by the way our perceptions impose a natural macroscopic logic upon us; that is, we have too limited access to ontology in order to have a complete epistemology. You could also turn it the other way and say that we use the wrong ontology, that for example the concept of a set with a definite number of elements somehow does not ''exist'', that an absolute empty set does not ''exist''. It is not because we name something in a particular way that it really ''exists'' in a deeper sense; actually, that is what quantum physics teaches us. Anyway, this is getting philosophical...

Careful
 
  • #297
friend said:
I don't think that the facts of reality are isomorphic to mathematical axioms. Which particle is "number one" and which particle is "number two", etc.? You can always renumber them differently without affecting their existence. Which axiom applies to one event but not another? And if you can't map numbers or axioms to particles or phenomena, then Gödel's Incompleteness Theorem cannot be applied, right?
I have mixed feelings about this; one day I get up and tell myself that a fundamental theory of everything, if it exists, is one which resists rigorous definition within a fixed mathematical context, and that we need at least a new kind of ''mathematics'' to proceed. That is, a ''mathematics'' of genuine creation, a theory of understanding; but I severely doubt whether such a thing exists and whether it will ever be within reach of human activity. On other days I am more optimistic, but certainly these considerations do not apply to something as modest as a theory of quantum gravity.

Careful
 
  • #298
Careful said:
Godel was also firmly convinced that human beings are not computers, but that things like creativity, intuition and so on are grounded in a higher kind of ''logic''. It is Turing who took the opposite side of the debate and who mainly stressed the computational aspect of Godel's work; he firmly believed that humans are simply sophisticated computers (which we clearly are not).
Some people would disagree with your parenthetical, but that's a digression...


No matter how creative a human is, or how strong his intuition, a human will never create a proof that a Turing machine is incapable of discovering.

No matter how creative a human is, or how strong his intuition, a human will never create a list of postulates from which he can prove interesting things that a Turing machine is incapable of discovering.

So, it might be that I take Godel more seriously than you do; I actually look beyond classical logic and search for a more general kind of proof method.
Whatever "proof method" you consider, if you can validate a purported proof, then a Turing machine is capable of coming up with it.



The computational aspect is a rather obvious thing -- if there is a theorem that a human can discover a proof for through his intuition and cleverness, then a Turing machine is also capable of finding it by doing nothing more intelligent than brute-force searching through all possible combinations of symbols, and checking each one to see if it's a proof of the theorem or not.
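That brute-force idea can be sketched in a few lines (my own toy illustration, not anyone's actual proof system): enumerate every finite string over an alphabet, shortest first, and run each through a checker. The `checker` below is a hypothetical stand-in for a mechanical proof verifier, which is the only intelligent part a real system would need.

```python
# Exhaustive search over all finite symbol strings, shortest first.
from itertools import count, product

def enumerate_strings(alphabet):
    """Yield every finite string over the alphabet in length order."""
    for length in count(0):
        for combo in product(alphabet, repeat=length):
            yield "".join(combo)

def first_accepted(alphabet, checker):
    """Return the first enumerated string the checker accepts.
    Halts if and only if some accepted string exists -- the analogue
    of proof search finding a proof whenever one exists."""
    for s in enumerate_strings(alphabet):
        if checker(s):
            return s

# Toy checker: accept palindromes that use both symbols; a real
# verifier would instead check that the string encodes a valid proof.
result = first_accepted("ab", lambda s: len(s) > 1 and s == s[::-1] and set(s) == {"a", "b"})
print(result)  # aba
```

Note the asymmetry this makes vivid: the search halts when a proof exists, but if none exists it runs forever, which is exactly where incompleteness and the halting problem enter.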


But TBH, aside from the sort of silly vague ideas that people like to philosophize about that are only loosely related to Gödel's incompleteness theorems (if at all), I've mostly seen them applied as impossibility proofs in real mathematics and computer science. The first time I was really introduced to the subject was in a theory of computation class, in a proof that there does not exist an algorithm to enumerate the true sentences in any model of integer arithmetic. Of course, the whole notion was old hat to me at the time, since we had already spent time on simpler situations, such as the halting problem.
 
  • #299
Hurkyl said:
No matter how creative a human is, or how strong his intuition, a human will never create a proof that a Turing machine is incapable of discovering.

No matter how creative a human is, or how strong his intuition, a human will never create a list of postulates from which he can prove interesting things that a Turing machine is incapable of discovering.
That's not the point; I have just commented on that on my personal page. I do not feel any compelling need to ''disprove'' strong AI (whatever that means in a generalized logic), nor do I need to ''prove'' my position. All I am saying is that my strategy is more plausible than yours: a machine will never discover anything unless the seeds for this discovery are already ingrained in its algorithm. So you will have to systematically add new elements to your algorithm for the latter to be reasonably capable of doing what you already know exists. The point is that I conjecture that machines made by men will never ever create something which gets even close to what human imagination can achieve; that is sufficiently good for me not to adhere to your position.

Hurkyl said:
Whatever "proof method" you consider, if you can validate a purported proof, then a Turing machine is capable of coming up with it.
I don't know - can you prove this? Penrose gives plausible arguments why this could be doubted. Moreover, and this is the point, the machine will never ever cook up the proof method by itself.

Hurkyl said:
The computational aspect is a rather obvious thing -- if there is a theorem that a human can discover a proof for through his intuition and cleverness, then a Turing machine is also capable of finding it by doing nothing more intelligent than brute-force searching through all possible combinations of symbols, and checking each one to see if it's a proof of the theorem or not.
Again, this may be false, there is no proof of that. See my previous comment.

Hurkyl said:
But TBH, aside from the sort of silly vague ideas that people like to philosophize about that are only loosely related to Gödel's incompleteness theorems (if at all), I've mostly seen them applied as impossibility proofs in real mathematics and computer science. The first time I was really introduced to the subject was in a theory of computation class, in a proof that there does not exist an algorithm to enumerate the true sentences in any model of integer arithmetic.
This is not how Godel thought about it, surely you do not want to imply that he was silly. BTW: I also learned the Turing version first and read the real history about it only many years later.
 
  • #300
Careful said:
This is not how Godel thought about it, surely you do not want to imply that he was silly. BTW: I also learned the Turing version first and read the real history about it only many years later.
Actually, I do -- as I recall my history, Gödel had some rather... odd ideas.

But, in any case, history is as history does. Just because Gödel, Einstein, or anyone else is a prominent historical figure in their field does not mean their opinions are right, and that one should dismiss decades of progress simply because the subsequent work (appears to) disagree with the historical figure's point of view.


Moreover, and this is the point, the machine will never ever cook up the proof method by itself.
You sure?

The Turing machine can certainly enumerate all machines -- in particular, it will eventually cook up any machine that implements said proof method.

And a human who is considering proof methods has to have a way to decide which ones are good. If there is any algorithm for making the decision on whether or not a particular proof method is viable, then the aforementioned Turing machine will not only find it, but say "hey, this is a good one!"


I think you underestimate just how much force is available to brute force when there aren't practical constraints. :smile:
 
