# Non-Computable Functions and Gödel's Theorem

Mentor
Hi All

I normally post on the QM forum, but I have also done quite a bit of programming and studied computer science at uni. I have been reading a book about Ramanujan, and interestingly he was also good friends with Bertrand Russell. You normally associate Russell with philosophy, but in fact at Cambridge he was 7th Wrangler (Hardy was 4th and Littlewood 1st). The only thing guaranteed to get Russell away from philosophy was to discuss math, which he did with relish with Ramanujan. He was finishing the second volume of Principia Mathematica, and it got me thinking: could Ramanujan have come up with Gödel's theorem?

So I started looking into it. I have seen proofs before - but could they be made simple enough to be clear and easily explained? After thinking and reading a bit I came up with one, which I will post - no claim of originality: a bit from here, a bit from there. BUT the interesting thing was that to make it simple you really need computing concepts which would not have been available at that time. I was sort of gobsmacked - it really requires our modern era to see it clearly and simply.

Anyway, here is my computer science proof. Let f(x) be a function from the natural numbers to either 0 or 1. I will call such functions f functions. Obviously anyone can write computer programs for at least a few of them. We will call a function computable if you can write a program for it. There are at least a countable infinity of computable f functions, because for any natural number n we can let f(x) = 1 when x = n and 0 otherwise; e.g. for n = 6, f(x) is 0 except when x is 6, where it is 1. Are there at most a countable infinity of computable f functions? Well, all computer programs are countable, because they use a finite number of characters and are of finite length. So the computable f functions form a countable infinity. Assume this countable infinity is ordered in correspondence with the natural numbers 1, 2, 3, ..., n, ...
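As a quick Python sketch of those indicator functions (the names here are mine, just for illustration):

```python
def make_indicator(n):
    """Return the f function that is 1 at n and 0 everywhere else."""
    def f(x):
        return 1 if x == n else 0
    return f

# One computable f function for each natural number n,
# so there are at least countably many computable f functions.
f6 = make_indicator(6)
```

Here f6(6) is 1 and f6 is 0 everywhere else, matching the n = 6 example above.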

OK, are all f functions computable? As you will see, that is the real key to Gödel. Let g(i) = 1 - fi(i), where fi is the ith function in the ordering above. Now suppose g is computable. If so, we can find it in the list of computable functions - say it is number k. So g(k) = 1 - fk(k). But g is the same function as fk, so fk(k) = 1 - fk(k), which is impossible when fk(k) is 0 or 1 - contradiction. So we have come up with a very interesting computer science result - there are non-computable functions. Strange but true.
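The diagonal step can be sketched in Python against a finite stand-in for the enumeration (obviously no program can hold the real infinite list; the three functions below are just illustrative):

```python
# A finite stand-in for f1, f2, f3, ... (0-indexed here for convenience).
fs = [
    lambda x: 0,       # constant 0
    lambda x: 1,       # constant 1
    lambda x: x % 2,   # parity
]

def g(i):
    """Flip the i-th function's value on the diagonal."""
    return 1 - fs[i](i)

# g disagrees with every listed function at its own index,
# so g cannot appear anywhere in the list.
```

The same one-line flip, applied to the genuine enumeration of all computable f functions, is exactly what forces g off the list.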

But wait for it - we have proved more - in fact Gödel's theorem. Here is why. Call a statement of the form g(x) = 1 or g(x) = 0 a g statement. Obviously a g statement, in a consistent system, should be true or false. So let's assume every g statement is true or false. Now suppose we have some formal logical system - you know, those formal systems with symbols like =, ∀, Γ and so forth. Input some natural number x. Suppose we have a program that generates all possible proofs in that formal system. For each one, we check whether it proves the g statement g(x) = 1 or the g statement g(x) = 0. If it proves one of them, we output the proven value, i.e. if the statement is g(12) = 1 we output g(12) = 1. We also check whether it disproves a g statement; if so, since we have assumed the system is consistent, we output the opposite value, i.e. if it disproved g(12) = 1 we output g(12) = 0. We stop when something is output.

See the problem? We can't compute g - but the above is a way to do it for any x if the assumption holds, i.e. every g statement is either true or false and is settled by the system - that is, if the search terminates. Contradiction. So the premise is false - there are undecidable statements in a system powerful enough to contain arithmetic. And if the search never terminates for some x, you have the same issue - you can't decide whether g(x) = 1 or g(x) = 0.
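The search procedure in the last two paragraphs can be sketched as follows. Everything here is a toy: `theorems` stands in for the hypothetical enumeration of everything the formal system proves, with statements as plain strings; over a real system that stream is infinite and the loop need not terminate, which is exactly the point.

```python
def decide_g(x, theorems):
    """Scan a stream of proved statements for one that settles g(x).

    `theorems` is a stand-in for an enumeration of all theorems of the
    formal system. Returns 1 or 0 once g(x) is settled; over a real
    (infinite) enumeration this loop may run forever.
    """
    for stmt in theorems:
        if stmt == f"g({x}) = 1":
            return 1
        if stmt == f"g({x}) = 0":
            return 0
        # By the consistency assumption, disproving one value proves the other.
        if stmt == f"not g({x}) = 1":
            return 0
        if stmt == f"not g({x}) = 0":
            return 1
    return None  # toy stream exhausted; never happens for a real enumeration
```

For example, over the toy stream `["2 + 2 = 4", "g(12) = 1"]` the procedure outputs 1 for x = 12.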

Interesting, isn't it? This was a very famous theorem, but computing concepts make it fairly easy to understand, and along the way we have seen that there are functions that can't be computed - and that leads to Gödel's famous result.

Or maybe I have goofed.

Thanks
Bill


TeethWhitener
Gold Member
I worry that maybe you're conflating completeness and decidability. They are related but not the same:
https://philosophy.stackexchange.co...-first-order-logic-complete-but-not-decidable
Essentially: decidability by itself doesn't imply completeness (nor vice versa): https://en.wikipedia.org/wiki/Decidability_(logic)#Relationship_with_completeness

Let f(x) be a function from the natural numbers to either 0 or 1.
Just a side note: For any set ##A##, the set of all functions ##\mathcal{F}## from ##A\rightarrow \{0,1\}## is isomorphic to the power set of ##A## (This is straightforward to see: For each unique ##f_n\in\mathcal{F}##, the preimage of ##\{1\}## under ##f_n## will be a unique subset of ##A##, so ##\mathcal{F}## has a natural bijection with ##\mathcal{P}(A)##). The rest of the section on non-computable functions is essentially Cantor's diagonal argument (albeit in binary). I don't know how abreast Ramanujan was of set theory at the time; it certainly wasn't the type of stuff he's known for.
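That bijection can be checked mechanically on a small finite set; here is a Python sketch with ##A = \{0, 1, 2\}## (the setup is mine, purely illustrative):

```python
from itertools import product

A = [0, 1, 2]

# Every function A -> {0, 1} is a tuple of values; its preimage of {1}
# is a subset of A. Collect those preimages and check they are distinct.
preimages = set()
for values in product([0, 1], repeat=len(A)):
    f = dict(zip(A, values))
    preimages.add(frozenset(a for a in A if f[a] == 1))

# The 2**len(A) functions yield 2**len(A) distinct subsets: the power set of A.
```

The count coming out equal to ##2^{|A|}## is the finite shadow of ##\mathcal{F} \cong \mathcal{P}(A)##.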

Interesting isn't it. This was a very famous theorem. But computing concepts makes it fairly easy to understand and along the way we have seen you have functions that can't be computed and that leads to Godel's famous result.
Yes, what you have written is more or less right, I think. I don't know fully how the details would be worked out (mostly due to not understanding fol - first-order logic - well), but the idea is correct. Also see post#12 and post#14 in this thread:

Kleene thought of the idea of diagonalising over the total recursive functions to create (total) non-recursive functions very early. But after that, the halting problem provided a more natural example of a (total) non-recursive function.

But there are two points worth making about why this kind of proof works:
(1) The first is that the underlying symbolic manipulation required to work out all the (relevant) consequences/theorems following from the given axioms + inference rules must itself be computable.
(2) The other is that, roughly, we should be able to recognise the string corresponding to our problem when we see it.

For example, suppose we were trying to use the halting problem to prove the first incompleteness theorem. The basic idea is that we start listing out all the possible theorems (specifically, a certain "theorem generating" part of our full program does this).
However, the program still needs to be able to recognise the right form of a statement corresponding to the halting of a program. That can be written as something like:
∃x R(x)
Here R is a primitive recursive predicate (meaning it only returns 0 or 1, corresponding to false and true). As it turns out, fol (given the tools developed in the incompleteness theorems) is powerful enough to describe a statement like this (which involves a primitive recursive predicate) - and much more (and I think the statements you are describing in the OP can also be specified in fol). Generally speaking, R(x) can be thought of in the following way:
R(x) ≡ step(a,b,x)
Here "a" and "b" are constants and "step" is also a primitive recursive predicate (but in 3 input variables instead of 1).

Basically the step function can be thought of as:
step(program-index, input, time)
Here program-index is the index/Gödel number of some program (in a fixed computational model), input is the number given to the program as input, and time is the number of steps. The "step" function simply checks whether the given program (on the given input) has halted within the given number of steps, and returns 0 or 1 accordingly.
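A toy version of this `step` predicate, with a deliberately trivial "machine model" (program n halts after n + input steps, except program 0, which never halts). This is purely illustrative - it is not the actual primitive recursive construction:

```python
def step(program_index, inp, time):
    """1 if the given program, on input `inp`, has halted within `time`
    steps, else 0. Toy model: program n runs for n + inp steps;
    program 0 loops forever."""
    if program_index == 0:
        return 0
    return 1 if time >= program_index + inp else 0

def halt(a, b, search_bound=1000):
    """The truth of the statement 'there exists x with step(a, b, x) = 1'.
    Here we can only search up to a bound; the impossibility of removing
    that bound in general is the halting problem."""
    return any(step(a, b, x) == 1 for x in range(search_bound))
```

In this toy model, halt(3, 2) is true (program 3 halts at step 5) while halt(0, b) is false for every b.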

So, in summary, the given program (which presumably solves the halting problem) simply keeps generating all the theorems (following from the axioms + rules of inference). At the same time, it keeps checking for theorems of the form "∃x R(x)" (and "~∃x R(x)"). Whenever it finds one, it checks whether it matches the "program-index" and "input" given to it (both supplied as inputs). If the match is complete, it returns its answer based on what it found. But from the halting problem we know that this program (based on fol) can't be total ... and hence a version of the first incompleteness theorem follows.
Basically, if we use an undecidable problem other than the halting problem, only the recognition step in the last paragraph changes (the rest of the reasoning stays the same).

Edit:
Also, perhaps this is worth mentioning (in case it wasn't obvious): the truth of the statement (where a and b are constants)
∃x step(a,b,x)
basically gives us the value of halt(a,b), where halt is the usual halting predicate (with two input variables).

As far as I understand (if I am not missing something), a difference between a proof of this kind and the usual proof of the first incompleteness theorem is that this kind of proof just tells us that there is at least one statement of this kind that can't be proved/disproved in the given (sound) system S. If we use the halting problem, for example, it tells us there is at least one statement of the form "∃x R(x)" which S can't prove or disprove - but it doesn't exhibit a specific such statement. The original proof gives a specific (true) statement (assuming S is sound) which can't be proved by S.

Mentor
Essentially: decidability by itself doesn't imply completeness (nor vice versa).

Excellent point. I need to think about it a bit more - off the top of my head I think you are correct - I have proven there are statements you can't decide - but completeness is a bit different.

Most certainly I have proven undecidability, i.e. there are statements in the system that cannot be decided. The program I alluded to, to proof-check g statements, must fail to terminate for at least one x - which means, strictly speaking, that the claim that every g statement is either true or false can't be decided.

Completeness says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula. Well, by definition, for all x, g(x) = 1 or g(x) = 0. But I have shown that, since g is not computable, you can't establish this in a finite number of steps, so the system is not complete.

I think I may have shown both.

Thanks
Bill

stevendaryl
Staff Emeritus
I worry that maybe you're conflating completeness and decidability.

I don't think he was. He was arguing that
1. There is an uncomputable function ##g(x)## that takes on values ##0## or ##1##.
2. If the theory were complete, then you could compute ##g(x)## by searching for a proof of either ##g(x) = 0## or ##g(x) = 1##
So uncomputability implies incompleteness.

Actually, there is a missing step in the proof, which is the argument that the statements ##g(x) = 0## and ##g(x) = 1## are actually formulas expressible in your theory. He gave an informal definition of ##g(x)##, so it really would turn on whether the informal description can be formalized (in the language of arithmetic, say).

TeethWhitener
Gold Member
But I have shown since g is not computable you cant show this in a finite number of steps so is not complete.
Is this true? Certainly the general ##g## isn't computable, but individual cases of ##g(x)=a## might be provable (cf. busy beaver function).
So uncomputability implies incompleteness.
Is this true? (Syntactic) completeness is "validity implies a proof exists" (if ## \models \varphi## then ## \vdash \varphi##), computability (decidability) is "given a ##\varphi##, there exists a finite time algorithm to check its validity." I'm not sure how we can draw the inference from one to the other via @bhobba 's argument. Maybe there's something I'm missing.

Mentor
Is this true? Certainly the general ##g## isn't computable, but individual cases of ##g(x)=a## might be provable (cf. busy beaver function).

https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F."

I have shown this for the statement ∀x (g(x) = 0 or g(x) = 1), which is true by construction - but, as the above says, it doesn't even have to be a true statement; it just has to be shown that it can neither be proved nor disproved in the system, which I think I have shown.

I think one of the issues here is that there may be some differences in the precise statement of the incompleteness theorem depending on the source. The above is from Wikipedia, which IMHO is actually a negative for me - they are far from always correct.

Other sources say the same thing so Wikipedia may be correct in this case.

Thanks
Bill

Mentor
Actually, there is a missing step in the proof, which is the argument that the statements ##g(x) = 0## and ##g(x) = 1## are actually formulas expressible in your theory.

Sure - it's not totally rigorous - which is either a plus, in that it makes it easier to understand, or a minus, in that I haven't fully shown what I set out to do. Anybody can look up the outline of a rigorous proof on Wikipedia, or a simple Google search will give tons, e.g.:
https://www.logicmatters.net/resources/pdfs/PartIII.pdf

This is tricky stuff that can confuse even some quite intelligent people. It has been noted my argument is a variant of Cantor's diagonal argument - but some do not even accept that. The editor of The Journal of Symbolic Logic, Wilfrid Hodges, wrote an interesting article about it:
http://www.logic.univie.ac.at/~ykhomski/ST2013/Hodges.pdf

Ease of exposition giveth and taketh away.

Thanks
Bill

Mentor
Maybe there's something I'm missing.

I don't think you are missing anything - you are doing the correct thing and subjecting my proof to scrutiny - which of course it should be subjected to.

Thanks
Bill

stevendaryl
Staff Emeritus
Is this true? (Syntactic) completeness is "validity implies a proof exists"

That's not the notion of "incompleteness" from Godel's Incompleteness Theorem. That's the notion from Godel's Completeness Theorem. There is completeness of a logic, which is that every valid statement (true under all interpretations) is provable. But the sense in which Peano Arithmetic or ZFC is incomplete is different: It's incomplete if there are meaningful statements in the language of arithmetic that are neither provable nor refutable.

A definable, uncomputable function is proof of incompleteness in the latter sense.
From Wikipedia https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems

A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24). This is the notion relevant for Gödel's first Incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language.

...
Anyway here is my computer science proof. Let f(x) be a function from the natural numbers to either 0 or 1. I will call them f functions.
...
Let g(i) = 1 - fi(i) where fi is the ith function in the ordering above.
Now suppose g is computable. If so we can find it in the list of computable functions - say it is number k. So g(k) = 1 - fk(k). If fk which is the same as gk is 1 or 0 it says gk is the opposite - contradiction. So we have come up with a very interesting computer science result - there are non computable functions. Strange but true.
...

I haven't read all of the replies, but it seems to me that you have also assumed ##g: \mathbb{N} \rightarrow \{0,1\}##, which isn't true, is it? Haven't you just shown that ##g## isn't defined over all of ##\mathbb{N}##, i.e. that there are non-##f## functions, rather than the existence of a non-computable ##f##-function?

stevendaryl
Staff Emeritus
I haven't read all of the replies, but it seems to me that you have also assumed ##g: \mathbb{N} \rightarrow \{0,1\}##, which isn't true, is it? Haven't you just shown that ##g## isn't defined over all of ##\mathbb{N}##, i.e. that there are non-##f## functions, rather than the existence of a non-computable ##f##-function?

By definition, ##g(n) = 1 - f_n(n)##, where ##f_n(x)## is the ##n^{th}## total computable function that always returns either 0 or 1. I don't see how ##g(n)## could fail to be a function from ##N## into ##\{0,1\}##.

It's clear that ##g## cannot be equal to ##f_n##, for any value of ##n##. Since ##f_n## enumerates all computable functions from ##N## into ##\{0, 1\}##, it follows that ##g## cannot be computable.

By definition, ##g(n) = 1 - f_n(n)##, where ##f_n(x)## is the ##n^{th}## total computable function that always returns either 0 or 1. I don't see how ##g(n)## could fail to be a function from ##N## into ##\{0,1\}##.

It's clear that ##g## cannot be equal to ##f_n##, for any value of ##n##. Since ##f_n## enumerates all computable functions from ##N## into ##\{0, 1\}##, it follows that ##g## cannot be computable.
Can't you also argue that ##g## must be computable, since ##f_n## is computable for all ##n##? The algorithm for ##g(k)## just first computes ##f_k(k)## and subtracts it from 1. Edit: Well, I guess you need to find the ##k^{th}## computable function as well...

stevendaryl
Staff Emeritus
Can't you also argue that ##g## must be computable since ##f_n## is computable for all ##n##.

What's not computable is finding ##f_n## given ##n##.

Let's suppose that you have some way of generating all possible computer programs in some kind of order. But a computer program can get into an infinite loop. So out of that list of all computer programs, you have to strike off the ones that don't halt, for some particular input. But there's no way to know which ones halt, and which ones don't.
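This failed strategy can be made concrete in a Python sketch. Enumerating program texts in order is easy; the filtering step is where it breaks, since deciding totality would require solving the halting problem. The stub below deliberately leaves that gap unimplemented rather than pretending it can be filled:

```python
import itertools
import string

def all_program_texts():
    """Yield every finite string over a fixed alphabet, shortest first.
    This part is perfectly computable."""
    for length in itertools.count(1):
        for chars in itertools.product(string.ascii_lowercase, repeat=length):
            yield "".join(chars)

def is_total_01_function(src):
    """Would keep only the programs computing total 0/1-valued functions.
    No such decision procedure exists: it is equivalent to the halting
    problem, so the stub makes the gap explicit."""
    raise NotImplementedError("equivalent to the halting problem")
```

So the list ##f_1, f_2, \ldots## exists as a mathematical object, but the map from ##n## to the ##n^{th}## program is not something any program can produce.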

To Jarvis: it's just Cantor's diagonal argument in another guise:
https://en.wikipedia.org/wiki/Cantor's_diagonal_argument

Thanks
Bill

It's interesting to me if you think about it that way, because of what it seems to also imply in this context.

You could start by supposing you have an enumeration of the f-functions in the form of their mappings (e.g. 010000... is the function that maps 2 to 1, and everything else to 0), and then construct an f-function not on the list, proving they are uncountable. Then, since the computable functions are only countable, there must be uncomputable f-functions.

But starting with an enumeration of the computable functions is interesting. Of course you can use the same procedure to construct an f-function not on the list. But this proves more than just that there are uncomputable functions. It also shows that you cannot order the computable functions in any systematic way such that knowing ##k## allows you to know ##f_k(k)##. This seems clear if you are enumerating them in correspondence with the sorted programs that compute them, but you could also try to imagine other systematic ways of ordering the functions themselves based on their mappings. It seems sort of intuitive that this could be the case, because the number of possible mappings grows faster than the number of functions, so maybe you are given too small an amount of information to deduce the larger amount of information. On the other hand, you really only need the system to give away that one new bit per element. For example, there should be no way to list the computable functions in the form below, such that the ##x_i##'s are each computable as functions of ##i##.

##
\{0,1\}^0x_0\{0,1\}^{\infty}\\
\{0,1\}^1x_1\{0,1\}^{\infty}\\
\{0,1\}^2x_2\{0,1\}^{\infty}\\
\{0,1\}^3x_3\{0,1\}^{\infty}\\
...
##

Mentor
It also shows that you cannot order the computable functions in any systematic way

Of course you can, e.g. order their computer programs alphabetically. To be specific, I am invoking the well-ordering theorem.

Thanks
Bill

Of course you can, e.g. order their computer programs alphabetically.

Thanks
Bill

such that knowing a function ##f##'s position ##k## gives away ##f(k)##

Mentor
such that knowing a function ##f##'s position ##k## gives away ##f(k)##

The well-ordering theorem guarantees you can; it is equivalent to the axiom of choice, which most mathematicians accept. If you are one of the few that do not - then my proof falls over. Of course so do a myriad of other proofs that depend on it, such as that every vector space has a basis, and we are in deep doo-doo, which is why most accept it. Still, you can choose to be in deep doo-doo if you want.

I remember my analysis professor's eyes rolled back as he said of a proof he did in a lecture: to be fully rigorous you will need the axiom of choice, and going down that path will not lead anywhere fruitful except in a course about the foundations of math - which this wasn't. If you want to pursue that doubt, I suggest you start a thread in the set theory and logic sub-forum. It's not really my bag - but we likely have some members there who can discuss it with you.

Thanks
Bill

stevendaryl
Staff Emeritus
The well-ordering theorem guarantees you can; it is equivalent to the axiom of choice, which most mathematicians accept. If you are one of the few that do not - then my proof falls over.

The issue is not whether the computable functions can be ordered---of course they can. The issue is that the ordering is not itself computable. That is, the function ##\mathcal{C}(n)## which given ##n## returns the computer program for the ##n^{th}## computable function is not a computable function. If it were, then the function ##g(n) = 1 - f_n(n)## would be computable.

That's how I interpreted the line

...such that knowing a function ##f##'s position ##k## gives away ##f(k)##

Mentor
That's how I interpreted the line

Sure, a program to sort them alphabetically will never terminate, since there are countably infinitely many of them, so the sorting is not computable. But that is irrelevant to my argument. You can put them in 1-1 correspondence with the natural numbers by the well-ordering theorem.

Thanks
Bill

stevendaryl
Staff Emeritus
Sure, a program to sort them alphabetically will never terminate, since there are countably infinitely many of them, so the sorting is not computable. But that is irrelevant to my argument. You can put them in 1-1 correspondence with the natural numbers by the well-ordering theorem.

But the issue was Jarvis saying

Can't you also argue that ##g## must be computable, since ##f_n## is computable for all ##n##?

He's not disputing that the computable functions can be ordered, he's asking why that doesn't imply that your ##g## is computable. So the well-ordering theorem does not seem relevant to that issue.

Mentor
He's not disputing that the computable functions can be ordered, he's asking why that doesn't imply that your ##g## is computable. So the well-ordering theorem does not seem relevant to that issue.

Because it leads to a contradiction. If it were computable, there would be some k where the associated function would have to take the opposite of its own value.

Thanks
Bill

stevendaryl
Staff Emeritus
Because it leads to a contradiction. If it were computable, there would be some k where the associated function would have to take the opposite of its own value.

Yes, I know that. I'm saying that the well-ordering theorem is not relevant to that conclusion.

TeethWhitener