
Kant, Gödel, Tarski - help

  1. May 7, 2006 #1

    I'm sorry for being lazy again and asking for help here instead of looking things up for myself, but I'm only being lazy because I have tons of other things to read.

    What I want to understand is:

    1) Under what conditions does Gödel's first incompleteness theorem hold? (That for theories defined in a certain way, statements exist which are true but unprovable.)

    2a) Why does geometry not obey these conditions?
    2b) What properties does geometry have that allowed Tarski to prove it complete?

    3a) Kant's point is that the way we experience the world shapes our theories a priori. His idea was that math (and logic) is fundamentally a priori: there are certain basic assumptions in math and logic which are given prior to any conscious theory. So math is not based on analysis of given things and experiences; rather, it starts with synthetic assumptions, something we add to the things that are given by experiencing them.

    3b) Gödel was very much influenced by Kant, and his incompleteness theorem was inspired by, or at least backed up by, the idea that math is a priori. It is because our theories of logic are incomplete that it becomes necessary to appeal to the way our thinking works in order to found that logic. This is the line of thought Gödel would use (I think). I'm not stating that he's right, only that this was his goal. So I'm wondering: what do Gödel's result and Tarski's nuance of it imply for our possible knowledge? Please, I encourage you to take this to a broader level than particular regions of mathematics or logic, and to consider for what kinds of knowledge in general Tarski's and Gödel's theorems hold.

    There's been a lot of discussion on this forum about Gödel and Tarski; many of these discussions have been totally messed up because people were divided into two camps: the laymen who saw all kinds of strong implications in Gödel etc., and the mathematicians who became increasingly frustrated by this (and understandably so).

    So I'd like this thread to be about stating what implications are actually present, and nothing more than that, so as to increase understanding. Thank you.
    Last edited: May 7, 2006
  3. May 7, 2006 #2



    In what follows, for a statement P, the notation ~P denotes "not P".

    Suppose we have a language L.

    In this language, we have chosen certain predicates N, P, and M. Intuitively, their meaning is:
    N(x) := x is a natural number.
    P(x, y, z) := x + y = z
    M(x, y, z) := x * y = z
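
    For instance (just as a sketch, not a full axiom list), commutativity of addition would be written purely in terms of these predicates as

    [itex]\forall x \, \forall y \, \forall z \; \bigl( N(x) \wedge N(y) \wedge N(z) \bigr) \rightarrow \bigl( P(x, y, z) \leftrightarrow P(y, x, z) \bigr)[/itex]

    i.e. for natural numbers x, y, z, the sum of x and y is z exactly when the sum of y and x is z.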

    Suppose we have a set S of axioms.
    Suppose further that S is "Turing-recognizable", a.k.a. "recursively enumerable".
    Any finite set is Turing-recognizable... but Gödel's theorem still works for Turing-recognizable infinite sets of axioms. Essentially, we simply require that there exists an algorithm that can write down elements of S one at a time, and every element of S will eventually appear in this list.
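
    To make "Turing-recognizable set of axioms" concrete, here is a toy sketch in Python (my own illustration, using an invented one-predicate schema rather than any standard axiom system): an enumerator that writes the axioms down one at a time, so that any particular axiom of the schema eventually appears.

    [code]
# Toy example of an infinite but recursively enumerable axiom schema:
# for every numeral n, the axiom "N(n)" ("n is a natural number").
from itertools import islice

def enumerate_axioms():
    n = 0
    while True:          # runs forever, but any fixed axiom of the
        yield f"N({n})"  # schema is produced after finitely many steps
        n += 1

# Print the first few axioms of the schema.
for axiom in islice(enumerate_axioms(), 5):
    print(axiom)
    [/code]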

    There is a set Th(S), the "theory of S". It consists of every statement that can be proven from S.
    Suppose that the axioms of natural number arithmetic (written in terms of N, P, and M) are elements of Th(S).
    Suppose further that S is consistent, meaning that Th(S) doesn't contain everything. (In particular, if P is in Th(S), then ~P is not in Th(S))

    Then Gödel's theorem says that Th(S) is not complete.
    In particular, this means that there exists a statement P in the language L (here P denotes a sentence, not the addition predicate above) with the properties:
    P is not in Th(S)
    ~P is not in Th(S)

    In other words, P can be neither proven nor disproven from S.

    If we have a model of S, then every statement in our language L is either true or false in this model. In particular, either P or ~P will be a true statement (in this model) that cannot be proven from the axioms.


    The reason Gödel's theorem is not applicable to Euclidean geometry is that it is impossible to formulate the predicate "x is a natural number" within it.

    I've only seen Tarski's proof of completeness in the algebraic setting.

    There's a set of axioms for a kind of thing called a "real closed field" -- these axioms are simply the first-order logic versions of the axioms of the real numbers.

    In the ordinary set-theoretic approach to Euclidean geometry (using Hilbert's axioms), we can construct the real numbers, and do all of the geometry algebraically.

    Tarski gave first-order logic versions of Hilbert's axioms to define a first-order logic version of Euclidean geometry. In this formulation, one can construct a real closed field, and then do all of the geometry algebraically in terms of that.

    The key thing you can do in real closed fields is "quantifier elimination". E.g., if you have the statement:

    "There exists an x such that f(x, y, z) = 0"

    then it is possible to rewrite this statement in terms of y and z alone. For example, the statement

    "There exists an x such that x²y + z = 0"

    is true if and only if "(y = 0 and z = 0) or ([itex]y \neq 0[/itex] and [itex]zy \leq 0[/itex])"
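
    As a quick sanity check on that example (this is not Tarski's elimination procedure, just a numerical spot check of my own), here is a small Python snippet that compares the quantified statement with its quantifier-free form on a few sample values of y and z:

    [code]
# Spot check: "there exists a real x with x^2*y + z = 0" should agree
# with the quantifier-free condition for every sampled (y, z).

def exists_x(y, z):
    # Solvable over the reals iff x^2 = -z/y has a solution (y != 0),
    # or the equation degenerates to z = 0 (y == 0).
    if y == 0:
        return z == 0
    return -z / y >= 0

def quantifier_free(y, z):
    return (y == 0 and z == 0) or (y != 0 and z * y <= 0)

samples = [(0, 0), (0, 2), (1, -4), (1, 3), (-2, 5), (-2, -5)]
for y, z in samples:
    assert exists_x(y, z) == quantifier_free(y, z)
print("quantifier-free form agrees on all samples")
    [/code]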
  4. May 7, 2006 #3
    Yes, if you look at some of my recent posts you will see some ideas I have reached on this. The fact that there is a starting point we automatically have in our minds before we even start thinking, one made up of logic, math, language, space and time, means that we will never be able to analyze these concepts past a certain point from within our own minds. We would have to get outside of our minds to study these basic assumptions, and I imagine this could be done in the future by directly modifying our neural networks: by direct physical manipulation of the brain, by connecting the brain to chips, and by trying all kinds of combinations of these manipulations. It is rather radical, but we are trying to see the outside of the box from within the box, hence we must get outside the box.

    It is like a computer that wants to understand its own register structure and gate-level organization simply by manipulating its own software. The only way the computer could see its basic assumptions, the hardware from which its software is then developed, would be by getting outside of itself and being like us humans, who can see and manipulate the hardware and see exactly how all the millions of transistors are connected. Hope this is helpful.
  5. May 7, 2006 #4

    Indeed, I find myself thinking about things very similar to what you are describing. I'd love to talk about this, so if you PM me the topics where you're discussing these matters, I'm bound to take a look.

    However, in this thread I'd like to focus on theories of Gödel and Tarski and other forms of metamathematics.


    OK, that helps, though there are still some things that are unclear to me:

    1) Are the conditions of L necessary? Or is it possible to define a more abstract language L for which the proof still holds? Am I right in saying that the set of natural numbers is Turing-recognizable? If so, can predicate N be generalized to say "x is an element of a set that's Turing-recognizable"?

    2) It's still pretty unclear to me how Tarski's proof works. You construct the real numbers - this in itself is a complete theory? If not, how can using an incomplete theory work as the foundation of a complete theory? If so, why is that?
    Last edited: May 7, 2006
  6. May 7, 2006 #5



    Yes. Gödel's proof depends heavily on natural number arithmetic, so you really do need some way of expressing the notions of "natural number", "addition of natural numbers", and "multiplication of natural numbers" in the language.

    It turns out that Th(N, +) is actually a complete theory.

    Actually, the question doesn't even make sense. :smile: You have to talk about some method of writing natural numbers as a sequence of symbols before you can ask about Turing-recognizability.

    A common way to write a natural number is by giving its binary expansion. The set of binary expansions of natural numbers is Turing-recognizable. It's even better: it's Turing-decidable.

    The difference is that for a Turing-recognizable language, we can write an algorithm that is capable of telling us "That string of symbols is in the language" whenever that's true.

    But for a Turing-decidable language, it's also capable of telling us "That string of symbols is not in the language" whenever that is true.

    (Don't forget that algorithms might run forever and not give you any output at all)
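
    Here is a toy Python sketch of the distinction, using the binary-numeral example above (my own illustration, not a real Turing machine): a decider always halts with a yes/no answer, while a recognizer only has to halt, and say "yes", on members of the set; on non-members it may search forever.

    [code]
# Decider: always halts, answering yes or no.
def is_binary_numeral(s):
    # Accept "0", or a string of 0s and 1s with no leading zero.
    if s == "0":
        return True
    return len(s) > 0 and s[0] == "1" and all(c in "01" for c in s)

# Recognizer: halts (with "yes") exactly on members of the set;
# on a non-member the search below never terminates.
def recognize_binary_numeral(s):
    n = 0
    while True:
        if format(n, "b") == s:   # binary expansion of n
            return True
        n += 1

print(is_binary_numeral("1011"))         # True
print(is_binary_numeral("a01"))          # False
print(recognize_binary_numeral("1011"))  # True; on "a01" it would loop forever
    [/code]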

    Yes -- the theory of real closed fields is complete.

    The way I first heard it described is that if P is any statement you can make involving the logical operations, and the symbols "0", "1", "+", "*", and "<", then if it's true in one real closed field, it's true for every real closed field.

    This fact, incidentally, is one of the reasons I became interested in formal logic! I thought it was a very pretty result.
  7. May 7, 2006 #6



    That's not true. It just has to be supplied with a file that contains all of that information in a format that it can read. Presumably, with sufficient tools, it could even determine that information itself.

    This is actually an important concept in computability theory: it's called the recursion theorem. If I use <M> to denote a description of the Turing machine M, then we have the following:

    Suppose the machine M takes as input:
    (1) The description <N> of some other Turing machine N.
    (2) Some other input w
    and computes the value of the function f(<N>, w).

    Then there exists a Turing machine T that, when given the input w, computes the value of the function f(<T>, w).

    Intuitively, it says that we can always assume that our Turing machine has at its disposal a complete description of itself.
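
    Here is a rough quine-style sketch of that in Python (my own illustration, not an actual Turing-machine construction): the program builds a string containing the source of its own core, then hands that description, together with an ordinary input w, to a function f.

    [code]
# Quine-style sketch of the recursion theorem: `program` is built so that
# it holds the exact text of the statements below these comment lines,
# i.e. a description of (essentially) the program that is running.
def f(description, w):
    return "input " + w + "; my description has " + str(len(description)) + " characters"

source = 'def f(description, w):\n    return "input " + w + "; my description has " + str(len(description)) + " characters"\n\nsource = %r\nprogram = source %% source\nprint(f(program, "hello"))\n'
program = source % source
print(f(program, "hello"))
    [/code]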
  8. May 8, 2006 #7
    You can look at:



    The computer example was just an idea. It's true that a computer could end up inventing and simulating all its registers and logic gates and figuring itself out. Then, if we invent all kinds of oddball systems to model the mind, we may end up with a correct one without knowing it. Invention then is also discovery, so invent all you want. Anyway, I am not being very precise, just expressing some random thoughts.

    Going back to the idea of physically modifying our own brains and neural circuits, the real problem is that to measure a modified mind and understand what is going on inside it, you still have to use your own basic logic-space-time-language. So how can you measure it and understand it without using these starting points? And what if the modified mind uses a completely different set of assumptions and states that it can decode our mind according to its logic? Then whose logic is correct? Which one is the standard and which one is measured? Does everything become relative? These are very odd, paradoxical problems that may arise in the future if scientists were really so bold as to try these things.
    Last edited: May 8, 2006