I am confused, because some claims about the first Gödel incompleteness theorem and the real numbers seem mutually contradictory. In essence, from one point of view the Gödel theorem appears to apply to the real numbers, while from another point of view it appears not to.

First point of view: The Gödel theorem applies to the arithmetic of natural numbers, as well as to any axiomatic system that contains the axioms of the natural numbers plus some additional axioms. The real numbers can be axiomatized in that way: first one defines the rational numbers as (equivalence classes of) ordered pairs of natural numbers, and then one defines the real numbers in terms of the rational numbers, using either Cauchy sequences or Dedekind cuts. Therefore the Gödel theorem must apply to the real numbers.

Second point of view: But the real numbers can also be axiomatized in a different way, without reference to the natural numbers. As shown by Tarski (http://en.wikipedia.org/wiki/Real_closed_field#Model_theory:_decidability_and_quantifier_elimination), in such an axiomatization of the real numbers it is possible to eliminate all quantifiers, which implies that the theory is decidable: every claim can be either proved or disproved.

The contradiction between the two points of view: According to the Gödel theorem and the first point of view (assuming consistency), the theory of real numbers contains a claim ("This claim cannot be proved.") which can neither be proved nor disproved. According to the second point of view, there is no such claim. Perhaps this means that the two approaches to the real numbers are not equivalent, yet it is standardly claimed that they are equivalent.
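To make the second point of view concrete, here is a textbook-style example (my own choice of illustration, not taken from Tarski's paper) of what quantifier elimination does in the theory of real closed fields: a quantified statement about the reals is replaced by an equivalent quantifier-free one, which can then be checked directly.

```latex
\exists x \,\bigl(x^2 + b x + c = 0\bigr)
\;\Longleftrightarrow\;
b^2 - 4c \ge 0
```

The left side quantifies over all real $x$; the right side is a quantifier-free condition on the parameters $b, c$ (the discriminant test). Tarski's result says every first-order formula in this language admits such a reduction, and that is what makes the theory decidable.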
To quote from http://en.wikipedia.org/wiki/Real_number : "The currently standard axiomatic definition is that real numbers form the unique Archimedean complete totally ordered field (R; +; ·; <), up to an isomorphism, whereas popular constructive definitions of real numbers include declaring them as equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, or certain infinite 'decimal representations', together with precise interpretations for the arithmetic operations and the order relation. These definitions are equivalent in the realm of classical mathematics."

So which point of view (if any) is right, and what exactly is wrong with the wrong one? I am sure I have misunderstood something, but I don't see what exactly it would be.
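For concreteness, the "Cauchy sequences of rational numbers" construction quoted above can be illustrated with a minimal sketch (the function name and the use of Newton's iteration are my own choices): a sequence of exact rationals whose squares approach 2, i.e. a rational Cauchy sequence representing the real number sqrt(2), which has no rational value.

```python
from fractions import Fraction

def sqrt2_approximations(n):
    """Newton's iteration x -> (x + 2/x)/2, starting from 1,
    produces a Cauchy sequence of exact rationals converging
    to sqrt(2). Returns the first n terms after the start."""
    x = Fraction(1)
    seq = []
    for _ in range(n):
        x = (x + Fraction(2) / x) / 2
        seq.append(x)
    return seq

seq = sqrt2_approximations(6)
# |x^2 - 2| shrinks at each step, e.g. 3/2, 17/12, 577/408, ...
errors = [abs(x * x - 2) for x in seq]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```

In the constructive definition, the real number sqrt(2) *is* the equivalence class of all rational Cauchy sequences like this one; the axiomatic definition instead posits a complete ordered field in which such a sequence must have a limit.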