Everyone must have wondered about this question from the first moment they heard of the incompleteness theorem. It is what scared everyone back in 1931, after all, no? ('If he's right, then...', they must have thought in Cambridge and Göttingen at the time. And indeed such comments have been documented and are well known.)

The question is important. For if unprovability is indeed very common, and you have no way of finding out whether your problem lies among the unprovable ones, you are in trouble: without knowing it, you may be trying to know something unknowable. The voyage to Ithaka is always interesting and wonderful and all, but you aim to get there eventually, right? 'That's what you're destined for'... But what if Ithaka doesn't exist? We have known since 1931 that sometimes it doesn't. But what if, not uncommonly, Ithakas do not exist? Could there be any quantitative approach to this problem?

Now of course, history is against us. 'It can't be that common!' one says. Fermat's last theorem, the four colour problem and many other intricate problems were dealt with successfully within the 20th century. 'Surely, if we had believed that Gödel's incompleteness result prevails at that level of complexity, we could simply not have worked on these subjects, and they would still remain conjectures. But we did, and here we are with many fruitful results.' So surely one has to try.

But the Riemann hypothesis, most famously, together with a number of other problems in mathematics (and in theoretical physics, which by necessity is highly mathematical, and increasingly so!), persistently remains a mere hypothesis. Could it be that such problems live at a higher level of complexity, where the incompleteness phenomenon becomes more common? Can we approach this question somehow, or are we doomed to just try our luck time and again, hoping to escape this mathematical black hole?
The importance of this issue was already clear to me when I picked up a book from the library last week called 'Conversations with a Mathematician' (the mathematician being G. Chaitin), but I had no idea that the field of 'algorithmic information theory' - or perhaps 'information theory' itself(?) - deals with questions like these. The book (a collection of lectures) is interesting and stimulating, for the basic idea is beautiful: it views the construction of a mathematical system as a system with an information input (the axioms and rules of logic) and an information output (the theorems). From this point of view the incompleteness theorem says something like this: the whole mathematical realm of possible propositions is an almost infinite amount of information (output), which you cannot reach from the finite amount of axioms and calculating methods you provide (input). The author claims the theory has a very close analogy to a computer program, and he uses the Turing machine as his model of computation.

Needless to say, in order to form a personal opinion on the matter one needs to work out the theory itself and get quantitative knowledge of the issues involved. For example, how does one define 'information' in this sense, and how 'complexity'? I have no idea; perhaps other members could enlighten us on this. For the record, the question "How common is the incompleteness phenomenon?" is in fact answered by the author, who claims that it is pervasive and common in mathematics - the more so the higher the complexity, as defined in algorithmic information theory.

Here follows part of one of the lectures. You can find all of it here: http://www.cs.auckland.ac.nz/~chaitin/vienna.html

"Let me make my question more explicit. There are many problems in the theory of numbers that are very simple to state. Are there an infinity of twin primes, primes that are two odd numbers separated by one even number? [A prime is a whole number with no exact divisors except 1 and itself.
E.g., 7 is prime, and 9 = 3 × 3 is not.] That question goes back a long way. A question which goes back to the ancient Greeks is: are there infinitely many even perfect numbers, and are there any odd perfect numbers? [A perfect number is the sum of all its proper divisors, e.g., 6 = 1 + 2 + 3 is perfect.] Is it possible that the reason these results have not been proven is that they are unprovable from the usual axioms? Is the significance of Gödel's incompleteness theorem that these results, which no mathematician has been able to prove, but which they believe in, should be taken as new axioms? In other words, how pervasive, how common, is the incompleteness phenomenon?

If I have a mathematical conjecture or hypothesis, and I work for a week unsuccessfully trying to prove it, I certainly do not have the right to say, 'Well obviously, invoking Gödel's incompleteness theorem, it's not my fault: normal mathematical reasoning cannot prove this, so we must add it as a new axiom!' This extreme clearly is not justified. When Gödel produced his great work, many important mathematicians like Hermann Weyl and John von Neumann took it as a personal blow. Their faith in mathematical reasoning was severely questioned. Hermann Weyl said it had a negative effect on his enthusiasm for doing mathematics. Of course it takes enormous enthusiasm to do good research, because it's so difficult. With time, however, people have gone to the other extreme, saying that in practice incompleteness has nothing to do with normal, every-day mathematics. So I think it's a very serious question to ask, 'How common is incompleteness and unprovability?' Is it a very bizarre pathological case, or is it pervasive and quite common? Because if it is, perhaps we should be doing mathematics quite differently."

Best

P.S. Ithaka, as some of you may have noticed, comes from the rather well-known poem by Cavafy (http://www.cavafy.com/poems/content.asp?id=74&cat=1).
He wasn't talking about mathematics, of course (it was before 1931, so this much is certain!). A rather disappointing conclusion, if one relates its meaning to mathematical thinking... but a beautiful journey nevertheless.
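P.P.S. On my own question about how 'complexity' is defined: as far as I understand, in algorithmic information theory the complexity of a string is the length of the shortest program that outputs it (Kolmogorov-Chaitin complexity). That quantity is uncomputable in general, but the size of a compressed encoding gives a crude, computable upper bound on it. Here is a toy Python sketch of that idea - my own illustration, not something from the book:

```python
import random
import zlib


def complexity_upper_bound(s: str) -> int:
    """Length in bytes of a zlib-compressed encoding of s.

    True Kolmogorov-Chaitin complexity is uncomputable; compressed
    size is only a crude, computable upper bound on it.
    """
    return len(zlib.compress(s.encode("utf-8"), 9))


# A highly regular string: a very short program ("print 'ab' 5000 times")
# already describes it, so its complexity is tiny.
regular = "ab" * 5000

# A pseudo-random string of the same length (seeded for reproducibility):
# there is no obvious description much shorter than the string itself.
rng = random.Random(0)
noisy = "".join(rng.choice("abcdefghij") for _ in range(10000))

print(complexity_upper_bound(regular))  # small: the redundancy compresses away
print(complexity_upper_bound(noisy))    # much larger than the regular case
```

The connection to the topic of the post, if I read Chaitin correctly, is that a formal system whose axioms amount to N bits of information cannot prove that any specific string has complexity much greater than N - which is one precise sense in which unprovability is 'pervasive'. The sketch above only shows the upper-bound side, of course, since the true complexity can never be computed.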