
A twist in the Godel sentence

  1. Apr 19, 2017 #1

    Demystifier

    Science Advisor

    Consider the following sentence:
    "Prof. Godel cannot prove this sentence."
Is this sentence true? If it is true, can Prof. Godel prove it? If he can't, does it tell us anything about Prof. Godel? More generally, does it tell us anything about creative human mathematicians (who are supposed to be more than formal machines)?

    A further (obvious) twist is the sentence:
    "Nobody can prove this sentence."
    Is this sentence true? If it is, how do you know?

    The questions are, of course, philosophical, but Godel himself was thinking a lot about philosophical consequences of his theorems, so we can too.
     
  3. Apr 19, 2017 #2

    mfb

    2017 Award

    Staff: Mentor

This sentence is true: Gödel cannot prove it because he is dead.
    I'm not sure if we can make that statement mathematically well-defined.
     
  4. Apr 19, 2017 #3

    Demystifier

    Science Advisor

    You are missing the point. Replace "Prof. Godel" with "The best living logician".
     
  5. Apr 19, 2017 #4

    Demystifier

    Science Advisor

    Or even better, suppose that a great logician says:
    "I cannot convince myself that this sentence is true."
     
  6. Apr 19, 2017 #5
    Me neither. I think to really investigate the nature of this paradox (if it is something new or if it reduces to one that we already know about) we'd need a more precise way to formulate it.
     
  7. Apr 20, 2017 #6

    Demystifier

    Science Advisor

    Last edited by a moderator: May 8, 2017
  8. Apr 20, 2017 #7
    oh, I have no problem with that. It's just that I find the question interesting and my curiosity leads me to seek a formal investigation. I think your avatar would agree! :)
     
  9. Apr 20, 2017 #8

    Demystifier

    Science Advisor

He certainly would; he is famous for his work in both formal logic and plain-English philosophy. :smile:
     
  10. Apr 20, 2017 #9

    Demystifier

    Science Advisor

The standard Godel sentence can be expressed in plain English as:
    "This sentence cannot be proved within a certain closed system of axioms".
This sentence is true and can be proved, but only if one steps outside this closed system of axioms.

    It seems to me that my sentences in the posts above are not like this sentence. Instead, they seem to be more like inconsistent sentences such as:
    "This sentence is not true."

    If I am right, then this seems to resolve the paradox.
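The liar-like character of those sentences can be checked by brute force. A minimal Python sketch (purely illustrative; encoding a sentence's truth value as a Boolean is an assumption, and provability is not modeled at all):

```python
# The Liar sentence "This sentence is not true" asserts its own falsity,
# so a candidate truth value v is self-consistent only if v == (not v).
# No Boolean satisfies that, so the Liar has no consistent truth value.
liar_models = [v for v in (True, False) if v == (not v)]
print(liar_models)  # -> []
```

By contrast, the Gödel sentence "this sentence cannot be proved within system X" is not constrained this way: its truth and its unprovability-in-X can hold together without contradiction.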
     
  11. Apr 27, 2017 #10

    stevendaryl

    Staff Emeritus
    Science Advisor

    I think you should use Penrose instead of Godel, because he's alive, and because he seems to think that Godelian arguments can be used to prove that human mathematical abilities are beyond those of machines.

    Let's use "unassailably true", because I think that's the phrase Penrose uses in "The Emperor's New Mind". Something is unassailably true if it is true, and it's possible for a competent mathematician to convince himself that it is true, and that there is no doubt possible. For example: "2+2=4". Then define the Penrose sentence to be:

    "Penrose will never accept this sentence to be unassailably true"

    I do not agree that this is an antinomy such as the Liar paradox---"This sentence is not true"---because it is possible for it to be true, and it's also possible for it to be false (while the Liar paradox cannot consistently be either). It can be true, and yet Penrose never accepts it (there are true statements that he never accepts as unassailably true). Or it can be false (there might be false statements that he mistakenly believes are unassailable truths).

    I guess what you could say about it is that Penrose would never accept it to be unassailably true, since it is an obvious mistake to do so. But suppose we add some "closure" properties to his set of unassailable truths: Suppose we assume the following:

If Penrose considers statements [itex]S_1, S_2, ...[/itex] to all be unassailable, and [itex]C[/itex] follows logically from those statements, then he will eventually come to believe [itex]C[/itex] to be unassailably true as well. If his unassailable beliefs don't have this closure property, then they might be logically inconsistent but still paraconsistent. (A paraconsistent theory is one that is still useful even if it's logically inconsistent; the inconsistency doesn't make the whole theory useless. I actually think that humans are paraconsistent, but not consistent.)

If we assume that Penrose's unassailable beliefs are logically closed, then he might not realize immediately that other unassailable beliefs of his logically imply the Penrose sentence.
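The claim that the Penrose sentence, unlike the Liar, can consistently be either true or false can also be checked mechanically. A minimal Python sketch (the two-Boolean encoding of "truth" and "acceptance" is an illustrative assumption):

```python
# Penrose sentence S: "Penrose will never accept S as unassailably true."
# Model a world as (s_true, accepted). S is true exactly when it is
# never accepted, so a world is consistent iff s_true == (not accepted).
worlds = [(s_true, accepted)
          for s_true in (True, False)
          for accepted in (True, False)
          if s_true == (not accepted)]
print(worlds)  # -> [(True, False), (False, True)]
```

Both survivors match the cases in the post: S true but never accepted, or S false but mistakenly accepted. The Liar paradox, run through the same enumeration, would leave no consistent worlds at all.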
     
  12. Apr 27, 2017 #11

    stevendaryl

    Staff Emeritus
    Science Advisor

    Here's another take on what @Demystifier is saying.

    Suppose that we strap Roger Penrose into a chair and stick probes into his brain, and suppose that we have a theory of brain functioning that is complete enough that we can monitor his thinking. Then we do the following experiment:
    1. We project various statements onto a screen and ask Penrose to read each statement and decide whether he agrees with the statement or not.
    2. If he agrees, then our brain monitor will know it, and will turn on a green indicator light.
    3. If he disagrees, then our brain monitor will turn on a red indicator light.
    4. Initially, both indicator lights are off.
    Now, suppose that the statement that is projected onto the screen is:
    • The green indicator light will not turn on.
In contrast with the Liar paradox, the statement that Penrose is being asked about has a completely clear semantics; there are no philosophical terms such as "truth".

    But if the machinery works the way I've described, and Penrose knows that, then will the green indicator light come on, or not?
     
  13. Apr 27, 2017 #12

    stevendaryl

    Staff Emeritus
    Science Advisor

    He might suspect that it is true, even though he can't give an airtight convincing argument.
     
  14. Apr 29, 2017 #13
    Can Penrose fake what he believes (vis a vis the machine)?
     
  15. Apr 29, 2017 #14

    stevendaryl

    Staff Emeritus
    Science Advisor

    Well, the assumption is that the machine is accurate. You can certainly avoid paradox by saying that any such machine must be (at times) inaccurate.
     
  16. May 1, 2017 #15
    What I was getting at is e.g.: Penrose knows that, but when the machine is turned on Penrose has managed to believe the green light will come on iff Goldbach's conjecture is true. What controls do we have over Penrose's beliefs?
     
  17. May 1, 2017 #16

    stevendaryl

    Staff Emeritus
    Science Advisor

    With the usual Godel proof, the conclusion is:

    "If the system is sound (never proves false statements) then the Godel sentence is a true statement that the system fails to prove."

    The corresponding case for Penrose would be:

    "If his beliefs are sound, then this is a true statement that he can never know to be true."

    So his beliefs being unsound is a loophole. In the case of formal theories, you can strengthen the theorem to "If the system is consistent (never proves a contradiction) then the Godel sentence is neither provable nor refutable". I don't see off the bat how to get a corresponding Penrose statement.
     
  18. May 1, 2017 #17
I have a problem with what constitutes a sound belief. My guess is that a sound believer would be defined as one who doesn't believe anything false.
    So if he believes that the green light will come on iff Goldbach's conjecture is true, then is that a sound belief?
     
  19. May 1, 2017 #18

    stevendaryl

    Staff Emeritus
    Science Advisor

    I would have to know whether Goldbach's conjecture is true to know whether that belief is sound.
     
  20. May 1, 2017 #19
    I think I'm losing soundness of mind.
    Let's assume B.c. is decidable.
    So if you knew B.c. is true you would know that belief is sound.
    If you knew B.c. was false would you know that belief is sound?
If so, then you would know that belief is sound whether or not B.c. is true.
     
  21. May 1, 2017 #20

    stevendaryl

    Staff Emeritus
    Science Advisor

    By "B.C." do you mean "Goldbach's conjecture"? If so, that's a strange acronym for it.

    By "that belief", which belief do you mean?

    Let's label the various claims:
    • C1 = Goldbach's conjecture
    • C2 = The green light will come on (indicating Penrose believes whatever statement is printed on the screen).
    • C3 = C2 [itex]\Leftrightarrow[/itex] C1
    • C4 = whatever statement is shown on the screen
    So you were asking what if Penrose believes C3. By definition, his beliefs are sound if and only if he never believes anything false. So if his beliefs are sound, then either:
    1. C1 is true and C2 is true and Penrose believes C4 (whatever that is)
    2. C1 is false and C2 is false and Penrose does not believe C4.
    His beliefs are not sound if either:
    1. C1 is true and C2 is false and Penrose does not believe C4
    2. C1 is false and C2 is true and Penrose believes C4
    Since I personally don't know whether C1 or C2 is true, I can't say whether Penrose's beliefs are sound or not.
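The case split above can be checked by enumerating the possible worlds. A small Python sketch (assuming, as in the experimental setup, that the green light C2 is on exactly when Penrose believes C4):

```python
# Enumerate worlds (C1, C2, believes_C4), with C2 = believes_C4 fixed by
# the setup, and classify each by whether believing C3 = (C2 iff C1)
# leaves Penrose's beliefs sound (i.e. C3 is actually true).
sound, unsound = [], []
for c1 in (True, False):
    for believes_c4 in (True, False):
        c2 = believes_c4          # the light tracks his belief in C4
        c3_true = (c2 == c1)      # is the believed statement C3 true?
        (sound if c3_true else unsound).append((c1, c2, believes_c4))

print(sound)    # -> [(True, True, True), (False, False, False)]
print(unsound)  # -> [(True, False, False), (False, True, True)]
```

The output reproduces the two sound and two unsound cases listed in the post.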
     
  22. May 1, 2017 #21
    Regarding Penrose's claim that mathematical understanding is non-algorithmic, it seems to me that he has a too-narrow view of what constitutes an algorithm. What he refers to as 'mathematical understanding' is a mix of pattern recognition and visual reasoning. Such cognitive abilities can't be boiled down to a few simple rules in the same way as Peano arithmetic. Instead, one would have to code either a very complex artificial neural network or a full simulation of an entire human brain.
     
  23. May 1, 2017 #22

    stevendaryl

    Staff Emeritus
    Science Advisor

The problem is that any program whatsoever that generates mathematical statements, whether it uses heuristic reasoning or pattern recognition or whatever, is equivalent to a formal theory such as Peano arithmetic, provided that it's deductively closed (that is, if [itex]A \rightarrow B[/itex] is generated and [itex]A[/itex] is generated, then eventually [itex]B[/itex] will be generated).
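The deductive-closure condition can be sketched concretely. A toy Python belief generator closed under modus ponens (the tuple encoding of implications is an illustrative assumption, not a serious proof system):

```python
# Close a set of beliefs under modus ponens: whenever A and ("->", A, B)
# are both present, add B; repeat until nothing new appears.
def closure(axioms):
    beliefs = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(beliefs):
            if (isinstance(s, tuple) and s[0] == "->"
                    and s[1] in beliefs and s[2] not in beliefs):
                beliefs.add(s[2])
                changed = True
    return beliefs

print(closure({"A", ("->", "A", "B"), ("->", "B", "C")}) >= {"A", "B", "C"})
# -> True: B, and then C, are eventually generated
```

Any such deductively closed generator, however its axioms were produced, behaves like a formal theory in the sense relevant to Gödel's argument.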
     
  24. May 1, 2017 #23
    Goodstein's theorem, one of Penrose's favorite examples, isn't provable in Peano arithmetic, but it is provable in second-order arithmetic. I guess the question then becomes whether there exists a formal system that can encompass all mathematical statements that would be of interest to humans, and whether such a system can be executed by a Turing machine.
     
  25. May 1, 2017 #24

    stevendaryl

    Staff Emeritus
    Science Advisor

    Well, it can be proved in ZFC. Any so-called "recursively enumerable" theory can be generated by a Turing machine.
     
  26. May 1, 2017 #25
Not all self-referential statements produce paradoxes: e.g. the statement "this statement comprises five words" is true. That statement is not paradoxical, in that it refers to a word count which is first-order verifiable. Statements (or systems of statements) that are self-referential or inter-referential, and that predicate regarding their own truth-values, are paradoxical. E.g. "this statement is false" is paradoxical in that if taken as true it renders its falsity inferable, and if taken as false it renders its verity inferable; and "this statement is true" cannot be taken as meaningfully true, while taking it as false is contradictory.

These are second-order statements, which by their form must, to be legitimate, refer to first-order statements: e.g. the statement "statement A is false" must refer to a statement A that is a first-order statement. If statement A is a first-order predication, then in classical logic it is either true or false, and the truth-values of statements about its truth or falsity can be determined accordingly. A second-order statement that refers to another second-order statement, or to itself, as if it were referring to a first-order statement is not well-formed.

    If we recognize such statements as making second-order predications, we can resolve the paradoxes by recognizing that the statements to which they refer are not first-order predications, so the statements, qua second-order predications, have non-legitimate referents.
     