Kant, Gödel, Tarski - help

  • #1
Tsunami
Hi,

I'm sorry for being lazy again and asking for help here instead of looking things up for myself, but I'm only being lazy because I have tons of other things to read.

What I want to understand is:

1) Under what conditions does Gödel's first incompleteness theorem hold? (That for theories defined in a certain way, statements exist which are true but unprovable.)

2a) Why does geometry not obey these conditions?
2b) What properties does geometry have that allowed Tarski to prove it complete?

3a) Kant's point is that the way we experience the world shapes our theories a priori. His idea was that math (and logic) is fundamentally a priori: there are certain basic assumptions in math and logic which are given before every conscious theory. So math is not based on analysis of given things and experiences; rather, it starts with synthetic assumptions, something we add to the things that are given by experiencing them.

3b) Gödel was very much influenced by Kant, and his incompleteness theorem was inspired by, or at least backed up by, the idea that math is a priori. It is because our theories of logic are incomplete that it becomes necessary to say that the way our thinking works is what founds this logic. This is the line of thought Gödel would use (I think). I'm not claiming that he's right, only that this was his goal. So I'm wondering: what do Gödel's idea and Tarski's nuance imply for our possible knowledge? Please take this to a broader level than particular regions of mathematics or logic, and consider for what kinds of knowledge in general Tarski's and Gödel's results hold.

There's been a lot of discussion on this forum about Gödel and Tarski - many of those discussions were completely derailed because people were divided into two camps: the laymen who saw all kinds of strong implications in Gödel, and the mathematicians who became increasingly frustrated by this (and understandably so).

So I'd like this thread to be about stating which implications actually follow, and nothing more than that, so as to increase understanding. Thank you.
 
  • #2
In what follows, for a statement P, the notation ~P denotes "not P".


Suppose we have a language L.


In this language, we have chosen certain predicates N, P, and M. Intuitively, their meaning is:
N(x) := x is a natural number.
P(x, y, z) := x + y = z
M(x, y, z) := x * y = z
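For example (my own illustration, not part of the original post), the commutativity of addition would be expressed in this language as: for all x, y and z, if N(x), N(y), N(z) and P(x, y, z), then P(y, x, z).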


Suppose we have a set S of axioms.
Suppose further that S is "Turing-recognizable", a.k.a. "recursively enumerable".
Any finite set is Turing-recognizable... but Gödel's theorem still works for Turing-recognizable infinite sets of axioms. Essentially, we simply require that there exists an algorithm that can write down elements of S one at a time, and every element of S will eventually appear in this list.
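
To make "Turing-recognizable" concrete, here is a minimal Python sketch (my own, not from the thread; the axiom schema named in it is just a placeholder): a generator that writes down the elements of an infinite axiom set one at a time, so that any given axiom eventually appears.

[code]
# Toy enumerator for an infinite, recursively enumerable axiom set S.
# "induction instance for formula phi_n" stands in for the n-th instance
# of some axiom schema (e.g. the induction schema of Peano arithmetic).
from itertools import count

def enumerate_axioms():
    for n in count():  # n = 0, 1, 2, ... forever
        yield f"induction instance for formula phi_{n}"

# Print the first few axioms; any particular axiom shows up after finitely many steps.
axioms = enumerate_axioms()
for _ in range(3):
    print(next(axioms))
[/code]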


There is a set Th(S), the "theory of S". It consists of every statement that can be proven from S.
Suppose that the axioms of natural number arithmetic (written in terms of N, P, and M) are elements of Th(S).
Suppose further that S is consistent, meaning that Th(S) doesn't contain everything. (In particular, if P is in Th(S), then ~P is not in Th(S))


Then Gödel's theorem says that Th(S) is not complete.
In particular, this means that there exists a statement G in the language L with the properties:
G is not in Th(S)
~G is not in Th(S)

In other words, G can be neither proven nor disproven from S.


If we have a model of S, then every statement in our language L is either true or false in this model. In particular, either G or ~G will be a true statement (in this model) that cannot be proven from the axioms.

-------------------------------------------------------------

The reason Gödel's theorem is not applicable to Euclidean geometry is that, in the first-order language of geometry, it is impossible to formulate the predicate "x is a natural number".


I've only seen Tarski's proof of completeness in the algebraic setting.

There's a set of axioms for a kind of thing called a "real closed field" -- these axioms are simply the first-order logic versions of the axioms of the real numbers.

In the ordinary set-theoretic approach to Euclidean geometry (using Hilbert's axioms), we can construct the real numbers, and do all of the geometry algebraically.

Tarski gave first-order logic versions of Hilbert's axioms to define a first-order version of Euclidean geometry. In this formulation, one can construct a real closed field, and then do all of the geometry algebraically in terms of it.

The key thing you can do in real closed fields is "quantifier elimination". For example, if you have the statement:

"There exists an x such that f(x, y, z) = 0"

then it is possible to rewrite this statement in terms of y and z alone. For example, the statement

"There exists an x such that x²y + z = 0"

is true if and only if "(y = 0 and z = 0) or ([itex]y \neq 0[/itex] and [itex]zy \leq 0[/itex])"
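
As a sanity check (my own sketch, not part of the thread): if y = 0 the equation forces z = 0, and if [itex]y \neq 0[/itex] it has a real solution exactly when [itex]x^2 = -z/y \geq 0[/itex], i.e. when [itex]zy \leq 0[/itex]. The short Python script below compares the quantified statement with the quantifier-free form on a grid of integer sample points.

[code]
# Compare "there exists a real x with x**2 * y + z == 0" against the
# quantifier-free form "(y == 0 and z == 0) or (y != 0 and z*y <= 0)".
import itertools

def exists_real_root(y, z):
    # Solve for x directly: y == 0 forces z == 0; otherwise x**2 = -z/y needs -z/y >= 0.
    if y == 0:
        return z == 0
    return -z / y >= 0

def quantifier_free(y, z):
    return (y == 0 and z == 0) or (y != 0 and z * y <= 0)

for y, z in itertools.product(range(-5, 6), repeat=2):
    assert exists_real_root(y, z) == quantifier_free(y, z), (y, z)
print("the two forms agree on all sample points")
[/code]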
 
  • #3
Tsunami said:
3a) Kant's point is that the way we experience the world, shapes our theories a priori. His idea was that math (and logic) fundamentally is a priori: there are certain basic assumptions in math and logic which are given before every conscious theory. So, math is not based on analysis of given things and experiences: rather, it starts with synthetic assumptions, something we add to the things that are given, by experiencing them.

Yes, if you look at some of my recent posts you will see some of the ideas I have reached on this. The fact that there is a starting point we automatically have in our mind before we even start thinking, made up of logic, math, language, space and time, means that we will never be able to analyze these concepts past a certain point from within our mind. We would have to get outside of our mind to study these basic assumptions, and I imagine this could be done in the future by directly modifying our neural networks: by direct physical manipulation of the brain, by connecting the brain to chips, and by trying all kinds of combinations of these manipulations. It is rather radical, but we are trying to see the outside of the box from within the box, hence we must get outside the box.

It is like a computer that wants to understand its own register structure and gate-level organization simply by manipulating its own software. The only way the computer could see its basic assumptions - the hardware on which its software runs - is by getting outside of itself and being like us humans, who can see and manipulate the hardware and can see exactly how all the millions of transistors are connected. Hope this is helpful.
 
  • #4
Nameta,

indeed, I find myself thinking about things very similar to what you are doing. I'd love to talk about this, so if you PM me the threads where you're discussing these matters, I'm bound to take a look.

However, in this thread I'd like to focus on the theorems of Gödel and Tarski and other results of metamathematics.

Hurkyl,

ok that helps, though there are still some things unclear to me:

1) Are the conditions of L necessary? Or is it possible to define a more abstract language L for which the proof still holds? Am I right in saying that the set of natural numbers is Turing-recognizable? If so, can the predicate N be generalized to say "x is an element of a set that's Turing-recognizable"?

2) It's still pretty unclear to me how Tarski's proof works. You construct the real numbers - this in itself is a complete theory? If not, how can an incomplete theory serve as the foundation of a complete theory? If so, why is that?
 
  • #5
1) Are the conditions of L necessary?
Yes. Gödel's proof depends heavily on natural number arithmetic, so you really do need to have some way of expressing the notions of "natural number", "addition of natural numbers" and "multiplication of natural numbers" in the language.

It turns out that Th(N, +) - the theory of the natural numbers with addition but without multiplication, known as Presburger arithmetic - is actually complete.

Am I right in saying that the set of natural numbers is Turing-recognizable?
Actually, the question doesn't even make sense. :smile: You have to talk about some method of writing natural numbers as a sequence of symbols before you can ask about Turing-recognizability.

A common way to write a natural number is by giving its binary expansion. The set of binary expansions of natural numbers is Turing-recognizable. It's even better: it's Turing-decidable.

The difference is that for a Turing-recognizable language, we can write an algorithm that is capable of telling us "That string of symbols is in the language" whenever that's true.

But for a Turing-decidable language, it's also capable of telling us "That string of symbols is not in the language" whenever that is true.

(Don't forget that algorithms might run forever and not give you any output at all)
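
A minimal Python sketch of the distinction (my own, using the binary-expansion encoding mentioned above): the first function is a decider for the language of binary expansions of natural numbers, since it always halts with a yes/no answer; the second behaves like a mere recognizer, halting only on strings that are in the language and looping forever otherwise.

[code]
import re

# Binary expansions of natural numbers: "0", or "1" followed by any further bits.
BINARY_NATURAL = re.compile(r"0|1[01]*")

def decides(s: str) -> bool:
    # Decider: always halts, answering yes or no.
    return bool(BINARY_NATURAL.fullmatch(s))

def recognizes(s: str) -> bool:
    # Recognizer: halts with "yes" on members of the language,
    # but (deliberately) runs forever instead of answering "no".
    while not BINARY_NATURAL.fullmatch(s):
        pass
    return True

print(decides("1011"), decides("abc"))  # True False
print(recognizes("1011"))               # True; recognizes("abc") would never return
[/code]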


It's still pretty unclear to me how Tarski's proof works. You construct the real numbers - this in itself is a complete theory?
Yes -- the theory of real closed fields is complete.

The way I first heard it described is that if P is any statement you can make involving the logical operations, and the symbols "0", "1", "+", "*", and "<", then if it's true in one real closed field, it's true for every real closed field.
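
For a concrete instance (my own example, not from the thread): the sentence [itex]\forall x \, \exists y \, (y \cdot y = x \;\vee\; y \cdot y + x = 0)[/itex] - "every element or its negative is a square" - uses only those symbols, is true of the real numbers, and therefore holds in every real closed field.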

This fact, incidentally, is one of the reasons I became interested in formal logic! I thought it was a very pretty result.
 
  • #6
nameta9 said:
It is like a computer that wants to understand its own register structure and gate-level organization simply by manipulating its own software. The only way the computer could see its basic assumptions - the hardware on which its software runs - is by getting outside of itself and being like us humans, who can see and manipulate the hardware and can see exactly how all the millions of transistors are connected. Hope this is helpful.
That's not true. It just has to be supplied with a file that contains all of that information in a format that it can read. Presumably, with sufficient tools, it could even determine that information itself.


This is actually an important concept in computability theory: it's called the recursion theorem. If I use <M> to denote a description of the Turing machine M, then we have the following:


Suppose the machine M takes as input:
(1) The description <N> of some other Turing machine N.
(2) Some other input w
and computes the value of the function f(<N>, w).

Then there exists a Turing machine T that, when given the input w, computes the value of the function f(<T>, w).


Intuitively, it says that we can always assume that our Turing machine has at its disposal a complete description of itself.
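
As a loose analogy in Python (my own sketch, not a proof of the recursion theorem): a function can obtain its own source text at runtime, which plays roughly the role of the description <T> that the theorem guarantees the machine can consult.

[code]
import inspect

def self_aware(w):
    # Retrieve this function's own source code -- the analogue of <T>.
    # (Run this from a saved script so inspect can locate the source.)
    my_description = inspect.getsource(self_aware)
    # Compute something from (<T>, w); here, just report both sizes.
    return f"description is {len(my_description)} characters, input is {w!r}"

print(self_aware("hello"))
[/code]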
 
  • #7
You can look at: http://www.ilovephilosophy.com/phpbb/viewtopic.php?t=149592 and https://www.physicsforums.com/showthread.php?t=117562

The computer example was just an idea. True, a computer could end up inventing and simulating all its registers and logic gates and figure itself out. Then, if we invent all kinds of oddball systems to model the mind, we may end up with a correct one without knowing it. Invention, then, is also discovery, so invent all you want. Anyway, I am not being very precise, just expressing some random thoughts.

Going back to the idea of physically modifying our own brains and neural circuits: the real problem is that to measure a modified mind and understand what is going on inside it, you still have to use your own basic logic-space-time-language. So how can you measure and understand it without using those starting points? And what if the modified mind uses a completely different set of assumptions and claims it can decode our mind according to its logic? Then whose logic is correct? Which one is the standard and which one is being measured? Does everything become relative? These are very odd, paradoxical problems that may arise in the future if scientists were really so reckless as to try these things.
 

1. What are the main contributions of Kant, Gödel, and Tarski to the field of science?

Kant, Gödel, and Tarski made major contributions to philosophy, mathematics, and logic, respectively. Kant's work focused on the foundations of metaphysics, epistemology, and ethics; Gödel's incompleteness theorems reshaped mathematical logic; and Tarski's work on formal semantics and the definition of truth became foundational for model theory and, later, for parts of computer science.

2. How do Kant, Gödel, and Tarski's theories relate to each other?

Kant's philosophy influenced Gödel's views on the foundations of mathematics, in particular his belief in a priori knowledge. Gödel's incompleteness theorems and Tarski's undefinability theorem are closely related results about the limits of formal systems, and both emerged from the same tradition of formal logic in the early twentieth century.

3. What is the significance of Gödel's incompleteness theorems?

Gödel's incompleteness theorems had a profound impact on mathematics: they show that any consistent formal system capable of expressing basic arithmetic contains statements that can be neither proved nor disproved within the system. This undermined the hope for a complete and consistent axiomatization of all of mathematics and opened up new avenues of research.

4. How did Tarski's work on logical semantics influence computer science?

Tarski's definitions of truth and logical consequence for formal languages provided part of the foundation for formal semantics in computer science, including the semantics of programming languages and automated reasoning.

5. What is Kant's categorical imperative and how does it relate to ethics?

Kant's categorical imperative is a moral principle that states that one should act only according to rules that they would want to become universal laws. This means that actions should be based on ethical principles that can be applied universally, rather than just personal desires or interests. This concept has had a significant impact on ethical theories and continues to be a topic of discussion in moral philosophy.
