Non-Computable Functions and Gödel's Theorem

In summary, the author discusses how Ramanujan and Bertrand Russell were friends and wonders whether Ramanujan could have come up with Gödel's theorem. He presents a computer science proof that there are non-computable functions and goes on to sketch a proof of Gödel's theorem.
  • #1
Hi All

I normally post on the QM forum but have also done quite a bit of programming and studied computer science at uni. I have been reading a book about Ramanujan, and interestingly he was also good friends with Bertrand Russell. You normally associate Russell with philosophy, but in fact at Cambridge he was 7th Wrangler (Hardy was 4th and Littlewood 1st). The only thing guaranteed to get Russell away from philosophy was to discuss math, which he did with relish with Ramanujan. Russell was finishing the second volume of Principia Mathematica at the time, and it got me thinking: could Ramanujan have come up with Gödel's theorem?

So I started looking into it. I have seen proofs before - but could one be made simple enough to be clear and easily explained? After thinking and reading a bit I came up with one I will post - no claim of originality - a bit from here, a bit from there. BUT the interesting thing was that to keep it simple you really needed computing concepts which would not have been available at that time. I was sort of gobsmacked - it really requires our modern era to see it clearly and simply.

Anyway here is my computer science proof. Let f(x) be a function from the natural numbers to either 0 or 1. I will call these f functions. Obviously anyone can write computer programs for at least a few such functions. We will call such functions computable, i.e. those that you can write programs for. There are at least a countable infinity of them, because for each natural number n we can let f(x) = 1 when x = n and 0 otherwise, e.g. for n = 6 the function f(x) is 0 except when x is 6, where it is 1. Now, do the computable f functions form a countable infinity? Well, all computer programs are countable because they use a finite number of characters and are of finite length, so the computable f functions are at most a countable infinity. Assume this countable infinity is ordered in correspondence with the natural numbers 1, 2, 3, ..., n, ...
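To make that concrete, here is a small Python sketch (mine, not from the thread): each choice of n gives a different computable f function, and because every program is a finite string over a finite alphabet, the program texts can be listed out one after another, which is why there are only countably many computable functions.

```python
import itertools
import string

def make_indicator(n):
    """Return the computable f-function that is 1 at n and 0 everywhere else."""
    def f(x):
        return 1 if x == n else 0
    return f

f6 = make_indicator(6)
print([f6(x) for x in range(10)])  # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]

def all_source_strings(alphabet=string.printable):
    """Enumerate every finite string over a finite alphabet, shortest first.
    Every program text appears somewhere in this list, which is why the
    computable f functions are at most countably infinite."""
    for length in itertools.count(0):
        for chars in itertools.product(alphabet, repeat=length):
            yield "".join(chars)
```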

OK, are all f functions computable? As you will see, that is the real key to Gödel. Let g(i) = 1 - f_i(i), where f_i is the ith function in the ordering above. Now suppose g is computable. If so, we can find it in the list of computable functions - say it is number k, so g = f_k. But by definition g(k) = 1 - f_k(k), so g(k) differs from f_k(k), while g = f_k demands they be equal - contradiction. So we have come up with a very interesting computer science result - there are non-computable functions. Strange but true.
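The diagonal construction can be written down directly. This is only a sketch: nth_total_01_function is a hypothetical oracle that hands us the ith function in the assumed ordering, and the whole point of the argument is that no actual program can play that role.

```python
def g(i, nth_total_01_function):
    """Diagonal function: flip the i-th function's value at i.
    If the ordering itself were computable, g would be a computable
    f-function, yet it differs from every f_k at the input k."""
    f_i = nth_total_01_function(i)   # hypothetical: fetch the i-th f-function
    return 1 - f_i(i)
```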

But wait for it - we have proved more - in fact Gödel's theorem. Here is why. Let a statement of the form g(x) = 1 or g(x) = 0 be called a g statement. Obviously a g statement, in a consistent system, should be true or false. So let's assume the system is complete for g statements, i.e. every g statement can be either proved or disproved in it. OK, suppose we have some logical system - you know, those formal systems that have symbols like =, ∀, Γ and so forth - powerful enough to express g statements. Let's input some natural number x. Suppose we have a program that generates all possible proofs in that formal system, one after another. For each one we check whether it is a proof of the g statement g(x) = 1, of g(x) = 0, or of the negation of one of them. If it proves g(x) = 1 we output 1; if it proves g(x) = 0 we output 0; if it proves that g(x) = 1 is false then, because the system is consistent, we output 0, and similarly if it proves g(x) = 0 is false we output 1. We stop when something is output.
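Rendered as Python-flavoured pseudocode, the procedure looks like the sketch below. Every name in it is hypothetical: generate_proofs() is assumed to enumerate all formal proofs of the system, and proves(p, s) is assumed to check mechanically whether proof p establishes statement s. The point is that if every g statement could be proved or disproved, this loop would terminate for every x and so would compute g.

```python
def decide_g(x, generate_proofs, proves):
    """Search the system's proofs for a verdict on g(x) = 1 or g(x) = 0.
    Assumes the system is consistent; returns the proven value, or the
    opposite value when the negation of a g statement is proven."""
    for p in generate_proofs():                    # enumerate all proofs, one by one
        if proves(p, f"g({x}) = 1"):
            return 1
        if proves(p, f"g({x}) = 0"):
            return 0
        if proves(p, f"not (g({x}) = 1)"):         # consistency: then g(x) = 0
            return 0
        if proves(p, f"not (g({x}) = 0)"):         # consistency: then g(x) = 1
            return 1
    # If no proof is ever found, the loop never returns: g(x) stays undecided.
```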

See the problem - we can't compute g - but the above is a way to compute it for any x, provided the assumption holds, i.e. provided every g statement can be proved or disproved so that the search terminates. Contradiction. The premise is false - there are undecidable statements in a consistent system powerful enough to express the g statements (in particular, one powerful enough to contain arithmetic). Put another way, for some x the search never terminates, and for that x the system can neither prove nor disprove the g statement.

Interesting, isn't it? This is a very famous theorem, but computing concepts make it fairly easy to understand, and along the way we have seen that there are functions that can't be computed, which leads to Gödel's famous result.

Or maybe I have goofed o:)

Thanks
Bill
 
  • #2
I worry that maybe you're conflating completeness and decidability. They are related but not the same:
https://philosophy.stackexchange.co...-first-order-logic-complete-but-not-decidable
https://www.reddit.com/r/math/comments/6eul4m/how_are_completeness_and_decidability_different/
Essentially: decidability by itself doesn't imply completeness (nor vice versa): https://en.wikipedia.org/wiki/Decidability_(logic)#Relationship_with_completeness

bhobba said:
Let f(x) be a function from the natural numbers to either 0 or 1.
Just a side note: For any set ##A##, the set of all functions ##\mathcal{F}## from ##A\rightarrow \{0,1\}## is isomorphic to the power set of ##A## (This is straightforward to see: For each unique ##f_n\in\mathcal{F}##, the preimage of ##\{1\}## under ##f_n## will be a unique subset of ##A##, so ##\mathcal{F}## has a natural bijection with ##\mathcal{P}(A)##). The rest of the section on non-computable functions is essentially Cantor's diagonal argument (albeit in binary). I don't know how abreast Ramanujan was of set theory at the time; it certainly wasn't the type of stuff he's known for.
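A concrete (and purely illustrative) Python version of that correspondence: a subset maps to its characteristic function, and a 0/1-valued function maps back to the preimage of {1}.

```python
def characteristic(subset):
    """Subset of A  ->  function A -> {0, 1}."""
    return lambda x: 1 if x in subset else 0

def preimage_of_one(f, A):
    """Function A -> {0, 1}  ->  the subset of A it picks out."""
    return {x for x in A if f(x) == 1}

A = {1, 2, 3, 4}
S = {2, 4}
f = characteristic(S)
print(preimage_of_one(f, A) == S)  # True: the two directions invert each other
```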
 
  • #3
bhobba said:
Interesting, isn't it? This is a very famous theorem, but computing concepts make it fairly easy to understand, and along the way we have seen that there are functions that can't be computed, which leads to Gödel's famous result.
Yes, what you have written is more or less right, I think. I don't know fully how the details would be worked out (mostly due to not understanding FOL, first-order logic, well), but the idea is correct. Also see post #12 and post #14 in this thread:
https://www.physicsforums.com/threads/an-easy-proof-of-goedels-first-incompleteness-theorem.941469

Kleene thought of the idea of diagonalising over the total recursive functions to create (total) non-recursive functions very early. But after that, the halting problem provided a more natural example of a (total) non-recursive function.

But there are two points worth pointing out on why this kind of proof works:
(1) The first, I think, is that the underlying symbolic manipulation required to work out all the (relevant) consequences/theorems following from the given axioms + inference rules should be computable.
(2) The other is that, roughly, we should be able to recognise the string corresponding to our problem when we see it. For example, suppose we were trying to use the halting problem to prove the first incompleteness theorem. The basic idea is that we will start listing out all the possible theorems (specifically, a certain "theorem generating" part of our full program will be doing this).
However, the program still needs to be able to recognise the right form of a statement that corresponds to halting of a program. But this can be written as something like:
∃x R(x)
Here R is a primitive recursive predicate (meaning it only returns 0 or 1, corresponding to false and true). As it turns out, FOL (given the tools developed in the incompleteness theorems) is powerful enough to describe a statement like this (which involves a primitive recursive predicate) ... and much more (and I think the statements you are describing in the OP can also be specified in FOL). Generally speaking, R(x) can be thought of in the following way:
R(x) ≡ step(a,b,x)
Here "a" and "b" are constants and "step" is also a primitive recursive predicate (but in 3 input variables instead of 1).

Basically the step function can be thought of as:
step(program-index, input, time)
Here program-index is the index/Gödel-number of some program (in a fixed computational model). Input means the number given to the program as input. Time means the number of steps. The "step" function simply checks whether the given program (when given a certain input) has halted within the given number of steps (and returns 0 or 1 accordingly).

So, in summary, the given program (which presumably solves the halting problem) simply keeps generating all the theorems (following from the axioms + rules of inference). At the same time, it keeps checking for theorems of the form "∃x R(x)" (and "~∃x R(x)"). And whenever it finds one, it checks whether this matches the "program-index" and "input" given to it (both given as inputs). And if the matching is complete, it returns its answer based on what it found. But due to the halting problem, we know that this given program that we have written (based on FOL) can't be total ... and hence a version of the first incompleteness theorem follows.
Basically, if we use an uncomputable function other than the halting problem, only the recognition part in the last paragraph will change (the other reasoning will stay similar).
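Here is a rough Python rendering of that program, with every ingredient hypothetical: generate_theorems() is assumed to enumerate the theorems of the formal system, and parse_halting_statement(t) is assumed to recognise theorems of the form ∃x step(a,b,x) or its negation and extract the constants they mention. If the system proved or refuted every such statement, this would decide the halting problem, which is impossible.

```python
def pretend_halt(a, b, generate_theorems, parse_halting_statement):
    """Would-be decision procedure for halt(a, b): does program a halt on input b?
    Searches the system's theorems for '∃x step(a,b,x)' or its negation.
    Since halt is undecidable, for some (a, b) this search can never succeed."""
    for t in generate_theorems():                 # hypothetical theorem enumerator
        parsed = parse_halting_statement(t)       # hypothetical recogniser
        if parsed is None:
            continue                              # not a statement of the right form
        prog, inp, halts_claimed = parsed
        if (prog, inp) == (a, b):
            # True for a theorem ∃x step(a,b,x), False for its negation
            return halts_claimed
```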

Edit:
Also, perhaps this is worth mentioning (in case it wasn't obvious): the truth of the statement (where a and b are constants)
∃x step(a,b,x)
basically gives us the value of halt(a,b) ... where halt is the usual halting predicate (with two input variables).

As far as I understand (if I am not missing something), a difference between a proof of this kind and the usual proof of the first incompleteness theorem is that this kind of proof just tells us that there is at least one statement of this kind that can't be proved/disproved in the given (sound) system S. If we use the halting problem, for example, it just tells us that there is at least some statement of the form "∃x R(x)" which S won't be able to prove/disprove. However, it doesn't give a specific statement which S can't prove/disprove. The original proof gives a specific (true) statement (assuming S is sound) which can't be proved by S.
 
  • #4
TeethWhitener said:
Essentially: decidability by itself doesn't imply completeness (nor vice versa).

Excellent point. I need to think about it a bit more - off the top of my head I think you are correct - I have proven there are statements you can't decide - but completeness is a bit different.

Added Later:
Most certainly I have proven undecidability, i.e. there are statements in the system that cannot be decided. The program I alluded to, which searches for proofs of g statements, must fail to terminate for at least one x - which means, strictly speaking, that for that x the g statement can neither be proved nor disproved.

Completeness says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula. Well, by definition, for all x, g(x) = 1 or g(x) = 0. But I have shown, since g is not computable, that you can't establish this in a finite number of steps, so the system is not complete.

I think I may have shown both.

Thanks
Bill
 
  • #5
TeethWhitener said:
I worry that maybe you're conflating completeness and decidability.

I don't think he was. He was arguing that
  1. There is an uncomputable function ##g(x)## that takes on values ##0## or ##1##.
  2. If the theory were complete, then you could compute ##g(x)## by searching for a proof of either ##g(x) = 0## or ##g(x) = 1##
So uncomputability implies incompleteness.

Actually, there is a missing step in the proof, which is the argument that the statements ##g(x) = 0## and ##g(x) = 1## are actually formulas expressible in your theory. He gave an informal definition of ##g(x)##, so it really would turn on whether the informal description can be formalized (in the language of arithmetic, say).
 
  • #6
bhobba said:
But I have shown, since g is not computable, that you can't establish this in a finite number of steps, so the system is not complete.
Is this true? Certainly the general ##g## isn't computable, but individual cases of ##g(x)=a## might be provable (cf. busy beaver function).
stevendaryl said:
So uncomputability implies incompleteness.
Is this true? (Syntactic) completeness is "validity implies a proof exists" (if ## \models \varphi## then ## \vdash \varphi##), computability (decidability) is "given a ##\varphi##, there exists a finite time algorithm to check its validity." I'm not sure how we can draw the inference from one to the other via @bhobba 's argument. Maybe there's something I'm missing.
 
  • #7
TeethWhitener said:
Is this true? Certainly the general ##g## isn't computable, but individual cases of ##g(x)=a## might be provable (cf. busy beaver function).

https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F."

I have been working with the statement ∀x (g(x) = 0 or g(x) = 1), which is true by construction. But as the above says, the statement doesn't even have to be true - it just has to be shown that it can neither be proved nor disproved in the system, which I think I have shown.

I think one of the issues here is there may be some differences in the precise statement of the incompleteness theorem depending on the source. The above is from Wikipedia, which IMHO is actually a negative for me - they are far from always correct.

Other sources say the same thing so Wikipedia may be correct in this case.

Thanks
Bill
 
  • #8
stevendaryl said:
Actually, there is a missing step in the proof, which is the argument that the statements ##g(x) = 0## and ##g(x) = 1## are actually formulas expressible in your theory.

Sure - it's not totally rigorous - which is either a plus in that it makes it easier to understand, or a minus in that I haven't fully shown what I set out to do. Anybody can look up the outline of a rigorous proof on Wikipedia, or a simple Google search will give tons, e.g.:
https://www.logicmatters.net/resources/pdfs/PartIII.pdf

This is tricky stuff that can confuse even some quite intelligent people. It has been noted that my argument is a variant of Cantor's diagonal argument - but some do not even accept that. The logician Wilfrid Hodges wrote an interesting article about this, recalling papers he handled as a journal editor:
http://www.logic.univie.ac.at/~ykhomski/ST2013/Hodges.pdf

Ease of exposition giveth and taketh :-p

Thanks
Bill
 
  • #9
TeethWhitener said:
Maybe there's something I'm missing.

I don't think you are missing anything - you are doing the correct thing and subjecting my proof to scrutiny - which of course it should be subjected to.

Thanks
Bill
 
  • #10
TeethWhitener said:
Is this true? (Syntactic) completeness is "validity implies a proof exists"

That's not the notion of "incompleteness" from Godel's Incompleteness Theorem. That's the notion from Godel's Completeness Theorem. :wink:

There is completeness of a logic, which is that every valid statement (true under all interpretations) is provable. But the sense in which Peano Arithmetic or ZFC is incomplete is different: It's incomplete if there are meaningful statements in the language of arithmetic that are neither provable nor refutable.

A definable, uncomputable function is proof of incompleteness in the latter sense.
From Wikipedia https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems

A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24). This is the notion relevant for Gödel's first Incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language.
 
  • #11
bhobba said:
...
Anyway here is my computer science proof. Let f(x) be a function from the natural numbers to either 0 or 1. I will call them f functions.
...
Let g(i) = 1 - f_i(i), where f_i is the ith function in the ordering above.
Now suppose g is computable. If so, we can find it in the list of computable functions - say it is number k, so g = f_k. But by definition g(k) = 1 - f_k(k), so g(k) differs from f_k(k), while g = f_k demands they be equal - contradiction. So we have come up with a very interesting computer science result - there are non-computable functions. Strange but true.
...

I haven't read all of the replies, but it seems to me that you have also assumed ##g: \mathbb{N} \rightarrow \{0,1\}##, which isn't true, is it? Haven't you just shown that ##g## isn't defined over all of ##\mathbb{N}##, i.e. there are non-##f## functions, rather than the existence of a non-computable ##f##-function?
 
  • #12
Jarvis323 said:
I haven't read all of the replies, but it seems to me that you have also assumed ##g: \mathbb{N} \rightarrow \{0,1\}##, which isn't true, is it? Haven't you just shown that ##g## isn't defined over all of ##\mathbb{N}##, i.e. there are non-##f## functions, rather than the existence of a non-computable ##f##-function?

By definition, ##g(n) = 1 - f_n(n)##, where ##f_n(x)## is the ##n^{th}## total computable function that always returns either 0 or 1. I don't see how ##g(n)## could fail to be a function from ##N## into ##\{0,1\}##.

It's clear that ##g## cannot be equal to ##f_n##, for any value of ##n##. Since ##f_n## enumerates all computable functions from ##N## into ##\{0, 1\}##, it follows that ##g## cannot be computable.
 
  • #13
stevendaryl said:
By definition, ##g(n) = 1 - f_n(n)##, where ##f_n(x)## is the ##n^{th}## total computable function that always returns either 0 or 1. I don't see how ##g(n)## could fail to be a function from ##N## into ##\{0,1\}##.

It's clear that ##g## cannot be equal to ##f_n##, for any value of ##n##. Since ##f_n## enumerates all computable functions from ##N## into ##\{0, 1\}##, it follows that ##g## cannot be computable.
Can't you also argue that ##g## must be computable, since ##f_n## is computable for all ##n##? The algorithm for ##g(k)## just first computes ##f_k(k)## and subtracts it from 1. Edit: Well, I guess you need to find the ##k^{th}## computable function as well...
 
  • #14
Jarvis323 said:
Can't you also argue that ##g## must be computable, since ##f_n## is computable for all ##n##?

What's not computable is finding ##f_n## given ##n##.

Let's suppose that you have some way of generating all possible computer programs in some kind of order. But a computer program can get into an infinite loop. So out of that list of all computer programs, you have to strike off the ones that don't halt, for some particular input. But there's no way to know which ones halt, and which ones don't.
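To see exactly where the naive construction breaks, here is a sketch of the filter one would need. Both run(program, x) and halts(program, x) are hypothetical: the latter is precisely the halting problem, which no program solves, so the list of total 0/1-valued functions cannot be produced mechanically.

```python
from itertools import count

def is_total_01_function(program, run, halts):
    """Would-be filter: does `program` compute a total function into {0, 1}?
    `run(program, x)` is a hypothetical interpreter and `halts(program, x)` a
    hypothetical halting oracle. Even granting the oracle, there are infinitely
    many inputs to check; without the oracle the check cannot even start."""
    for x in count(0):
        if not halts(program, x):          # this is the undecidable part
            return False
        if run(program, x) not in (0, 1):
            return False
    return True                            # unreachable: the loop never ends
```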
 
  • #16
bhobba said:
To Jarvis: it's just Cantor's diagonal argument in another guise:
https://en.wikipedia.org/wiki/Cantor's_diagonal_argument

Thanks
Bill

It's interesting to me if you think about it that way, because of what it seems to also imply in this context.

You could start by supposing you have an enumeration of the f-functions in the form of their mappings (e.g. 010000... is the function that maps 1 to 1 and everything else to 0), and then construct an f-function not on the list, proving they are uncountable. Then use the pigeonhole principle (uncountably many f-functions, only countably many programs) to show there are uncomputable functions.

But starting with an enumeration of the computable functions is interesting. Of course you can use the same procedure to construct an f-function not on the list. But this proves more than just that there are uncomputable functions. It also shows that you cannot order the computable functions in any systematic way such that knowing ##k## allows you to know ##f_k(k)##. This seems clear if you are enumerating them in correspondence with the sorted programs that compute them, but you could also try to imagine other systematic ways of ordering the functions themselves based on their mappings. It seems sort of intuitive that this could be the case, because the number of possible mappings grows larger than the number of programs, so maybe you are given too small an amount of information to deduce the larger amount of information. On the other hand, you really only need the system to give away that one new bit per element. For example, there should be no way to list the computable functions in the form below, such that the ##x_i##'s are each computable as a function of ##i##.

##
\{0,1\}^0x_0\{0,1\}^{\infty}\\
\{0,1\}^1x_1\{0,1\}^{\infty}\\
\{0,1\}^2x_2\{0,1\}^{\infty}\\
\{0,1\}^3x_3\{0,1\}^{\infty}\\
...
##
 
  • #17
Jarvis323 said:
It also shows that you cannot order the computable functions in any systematic way

Of course you can, e.g. order their computer programs alphabetically. To be specific, I am invoking the well-ordering theorem.

Thanks
Bill
 
  • #18
bhobba said:
Of course you can eg order their computer programs alphabetically.

Thanks
Bill

such that knowing a function ##f##'s position ##k## gives away ##f(k)##
 
  • #19
Jarvis323 said:
such that knowing a function ##f##'s position ##k## gives away ##f(k)##

The well-ordering theorem guarantees you can, and it is equivalent to the axiom of choice, which most mathematicians accept. If you are one of the few who do not - then my proof falls over. Of course so do a myriad of other proofs that depend on it, such as that every vector space has a basis, and we are in deep do do, which is why most accept it. Still, you can choose to be in deep do do if you want.

I remember my analysis professor's eyes rolled back and he said of a proof he did in a lecture: to be fully rigorous you will need the axiom of choice, and going down that path will not lead anywhere fruitful except in a course about the foundations of math - which this wasn't. If you want to doubt it, I suggest you start a thread in the set theory and logic sub-forum. It's not really my bag - but we likely have some there who can discuss it with you.

Thanks
Bill
 
  • #20
bhobba said:
The well-ordering theorem guarantees you can, and it is equivalent to the axiom of choice, which most mathematicians accept. If you are one of the few who do not - then my proof falls over.

The issue is not whether the computable functions can be ordered---of course they can. The issue is that the ordering is not itself computable. That is, the function ##\mathcal{C}(n)## which given ##n## returns the computer program for the ##n^{th}## computable function is not a computable function. If it were, then the function ##g(n) = 1 - f_n(n)## would be computable.

That's how I interpreted the line

...such that knowing a function ##f##'s position ##k## gives away ##f(k)##
 
  • #21
stevendaryl said:
That's how I interpreted the line

Sure, a program to sort them alphabetically will never terminate since there are countably infinitely many of them, so it is not computable. But that's irrelevant to my argument. You can put them in 1-1 correspondence with the natural numbers by the well-ordering theorem.

Thanks
Bill
 
  • #22
bhobba said:
Sure, a program to sort them alphabetically will never terminate since there are countably infinitely many of them, so it is not computable. But that's irrelevant to my argument. You can put them in 1-1 correspondence with the natural numbers by the well-ordering theorem.

But the issue was Jarvis saying

Can't you also argue that ##g## must be computable, since ##f_n## is computable for all ##n##?

He's not disputing that the computable functions can be ordered, he's asking why that doesn't imply that your ##g## is computable. So the well-ordering theorem does not seem relevant to that issue.
 
  • #23
stevendaryl said:
He's not disputing that the computable functions can be ordered, he's asking why that doesn't imply that your ##g## is computable. So the well-ordering theorem does not seem relevant to that issue.

Because it leads to a contradiction. If it were computable, g would be some f_k in the list, and then f_k(k) would have to take the opposite of its own value.

Thanks
Bill
 
  • #24
bhobba said:
Because it leads to a contradiction. If it were computable, g would be some f_k in the list, and then f_k(k) would have to take the opposite of its own value.

Yes, I know that. I'm saying that the well-ordering theorem is not relevant to that conclusion.
 
  • #25
stevendaryl said:
That's not the notion of "incompleteness" from Godel's Incompleteness Theorem. That's the notion from Godel's Completeness Theorem.
Yes. My mistake.

stevendaryl said:
But there's no way to know which ones halt, and which ones don't.
The crux of my objection was here. Consider the busy beaver function (##BB(n)## = the maximum number of steps that an ##n##-state Turing machine takes before it halts, excluding non-halting TMs), which is non-computable (computing it would let you solve the halting problem). However, given an ##n##, we can find (and have found) at least a few specific values of ##BB(n)##, for instance ##BB(4)=107##. The whole reason we can do that is because we can enumerate all 4-state Turing machines and exhaustively show that any of these TMs that runs for more than 107 steps eventually enters a loop (and therefore never halts). So even though the busy beaver function isn't computable, we can still find ##BB(n)## for specific values of ##n##.
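To make "we can find specific values" concrete, here is a rough brute-force sketch (my own, 2-symbol machines, step-counting convention): running every n-state machine for a fixed budget gives a lower bound on ##BB(n)##; turning that bound into the exact value takes the additional machine-by-machine non-halting arguments described above.

```python
from itertools import product

def simulate(tm, max_steps):
    """Run a 2-symbol Turing machine from an all-zero tape.
    tm maps (state, symbol) -> (write, move, next_state); next_state -1 means halt.
    Returns the number of steps if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, 0
    for steps in range(1, max_steps + 1):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return steps           # the halting transition counts as a step
        state = nxt
    return None

def bb_lower_bound(n, max_steps):
    """Most steps any n-state machine is *seen* to take before halting within
    max_steps: a lower bound on BB(n), not BB(n) itself."""
    keys = list(product(range(n), (0, 1)))                       # (state, symbol) pairs
    actions = list(product((0, 1), (-1, 1), list(range(n)) + [-1]))
    best = 0
    for choice in product(actions, repeat=len(keys)):            # every n-state machine
        steps = simulate(dict(zip(keys, choice)), max_steps)
        if steps is not None:
            best = max(best, steps)
    return best

# print(bb_lower_bound(2, 100))   # small enough to run; the true BB(2) is 6
```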
 
  • #26
TeethWhitener said:
Yes. My mistake. The crux of my objection was here. Consider the busy beaver function (##BB(n)## = the maximum number of steps that an ##n##-state Turing machine takes before it halts, excluding non-halting TMs), which is non-computable (computing it would let you solve the halting problem). However, given an ##n##, we can find (and have found) at least a few specific values of ##BB(n)##, for instance ##BB(4)=107##. The whole reason we can do that is because we can enumerate all 4-state Turing machines and exhaustively show that any of these TMs that runs for more than 107 steps eventually enters a loop (and therefore never halts). So even though the busy beaver function isn't computable, we can still find ##BB(n)## for specific values of ##n##.

Yes, there are certainly cases where you can prove that a program definitely does halt on certain inputs, and there are cases where you can prove that a program definitely does not halt on certain inputs. But to get a listing of all the total computable functions, you need to be able to tell, for an arbitrary program, whether it halts on all possible inputs.
 
  • #27
bhobba said:
Sure, a program to sort them alphabetically will never terminate since there are countably infinitely many of them, so it is not computable. But that's irrelevant to my argument. You can put them in 1-1 correspondence with the natural numbers by the well-ordering theorem.

I wasn't trying to say your original argument is invalid. Just found it interesting what else it proves. I don't think not being able to find the index of a computable function in an enumeration of them invalidates your proof. It's just an interesting related fact.
TeethWhitener said:
Yes. My mistake. The crux of my objection was here. Consider the busy beaver function (##BB(n)## = the maximum number of steps that an ##n##-state Turing machine takes before it halts, excluding non-halting TMs), which is non-computable (computing it would let you solve the halting problem). However, given an ##n##, we can find (and have found) at least a few specific values of ##BB(n)##, for instance ##BB(4)=107##. The whole reason we can do that is because we can enumerate all 4-state Turing machines and exhaustively show that any of these TMs that runs for more than 107 steps eventually enters a loop (and therefore never halts). So even though the busy beaver function isn't computable, we can still find ##BB(n)## for specific values of ##n##.

But for a single program to accomplish this in the general case, you may need infinitely long source code.

But good point. It doesn't need to be the case that given ##k## you cannot find ##f_k(k)##. It's just the case that you cannot have one algorithm that can find ##f_k(k)## for all ##k##. As you iterate, you would need to (at least) keep creating new logic in order to continue your success. All in all, you need to be infinitely creative so to speak. I think.
 
  • #28
Just to make a (relatively simple) point clearer ... once we have set up the specifics of our programming language then indeed we can put all the (syntactically valid) programs in one-to-one correspondence with ##\mathbb{N}##. And indeed there is a computer program that can then calculate the function ##g : \mathbb{N} \rightarrow \mathbb{N}## defined as:
##g(x)=f_x(x)+1## ---- if ##f_x(x)## is defined
##g(x)=##undefined ---- if ##f_x(x)## is undefined***

Here ##f_x## corresponds to the (partial recursive) function computed by program with index ##x##. One can observe that the function ##g## is a partial recursive function but not a (total) recursive function.

The problem is of course that the 1-1 correspondence was between programs that compute partial computable functions and ##\mathbb{N}##. The correspondence was NOT between programs computing (total) computable functions and ##\mathbb{N}##. It is quite customary (and standard usage) that even when one says partial computable function, one is "really" talking about a function that may be either:
-- total
-- not total

I think this is slightly against the convention taken in mathematics more generally, where a partial function is specifically taken to mean a non-total function ... but anyway.

***When we say that a given partial computable function ##g## is undefined on a given input value (say ##a##), what we simply mean is that the program computing ##g## doesn't halt on the given input.
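As a sketch of that point (all names hypothetical): given a universal interpreter run(x, y) that returns ##f_x(y)## when it is defined and otherwise never returns, the diagonal-plus-one function is partial computable, and it is undefined exactly where ##f_x(x)## is.

```python
def g(x, run):
    """Partial computable diagonal over *partial* functions.
    `run(x, y)` is a hypothetical universal interpreter: it returns f_x(y)
    if that value is defined, and otherwise never returns at all. So this g
    inherits exactly the undefined spots of the diagonal, and the
    diagonalisation no longer produces a contradiction."""
    return run(x, x) + 1     # loops forever precisely when f_x(x) is undefined
```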
 
  • #29
SSequence said:
Just to make a (relatively simple) point clearer ... once we have set up the specifics of our programming language then indeed we can put all the (syntactically valid) programs in one-to-one correspondence with ##\mathbb{N}##. And indeed there is a computer program that can then calculate the function ##g : \mathbb{N} \rightarrow \mathbb{N}## defined as:
##g(x)=f_x(x)+1## ---- if ##f_x(x)## is defined
##g(x)=##undefined ---- if ##f_x(x)## is undefined***

With that modified definition of ##g##, it is (partial) computable. The original argument that proved that ##g(x)## can't be on the list was:

For any ##n##, ##g(x)## cannot be equal to ##f_n(x)##, because then ##g(n)## would be equal to ##f_n(n)##, while it's been defined to be: ##1 - f_n(n)##.

But that's not a contradiction if you interpret the equals sign loosely: ##g(n) = 1 - f_n(n)## means either that both sides are equal to the same number, or that both sides are undefined.
 
  • #30
Yes, of course you are right. I wasn't disputing the original argument (in the OP). I was just pointing out that:
1--- Even though we can't computably do (what was described in OP) for total recursive functions, we can indeed do it for partial recursive functions (but diagonalisation fails in that case of course).
2--- I was also trying to point out (somewhat implicitly) that even in that case one is only computably listing the outputs of programs and hence there is infinite repetition of a given partial recursive function on such a list.
Of course, repetition or not, this is still provably impossible to do (computably) for total computable functions (as in the OP). Indeed, that's simply because repetitions of a specific function on the list are irrelevant for diagonalisation to work in that case ... it just has to be on the list at least once.
 
  • #31
You lost me here.

bhobba said:
Well all computer programs are countable because they use a finite number of characters and are of finite length.

Who wrote the internet? What is a computer program if not the internet? Is it done? It's like you are drawing a line across some joint of a thing and then counting the bones on one side.
 
  • #32
Jimster41 said:
What is a computer program if not the internet?

I suggest you look into Turing Machines.

Thanks
Bill
 
  • #33
Jimster41 said:
you lost me here.
Who wrote the internet? What is a computer program if not the internet? Is it done? It's like you are drawing a line across some joint of a thing and then counting the bones on one side.
As @bhobba points out, you do not seem to have the technical background yet to be throwing out a philosophical question in this technical thread. Please do some reading about the technical aspects of this discussion, and then if you have pertinent questions, go ahead and post them in context. Thank you.
 

What are non-computable functions?

Non-computable functions are mathematical functions that cannot be computed by a computer or any other algorithmic method. These functions are also known as uncomputable or undecidable functions.

What is Godel's theorem?

Gödel's theorem, also known as Gödel's first incompleteness theorem, states that in any consistent formal system capable of expressing basic arithmetic, there will always be true statements that cannot be proven within that system. This means that there are limits to what can be proven or known through mathematical logic.

How does Godel's theorem relate to non-computable functions?

Gödel's theorem is closely related to non-computable functions because both results show limits on what algorithms can do: just as some functions cannot be computed by any program, some statements cannot be proved or disproved within a given formal system. This means that there are mathematical problems that are unsolvable, even by a computer.

What are some examples of non-computable functions?

Some examples of non-computable functions include the halting function (which decides whether a given program halts on a given input) and the busy beaver function. No single algorithm can compute such a function correctly for every input, even though particular values can sometimes be determined by ad hoc arguments.

Why is the concept of non-computable functions important?

The concept of non-computable functions is important because it challenges the idea that all mathematical problems can be solved by an algorithm or a computer. It also has implications for the limitations of human knowledge and the nature of truth in mathematics.
