# Understanding the recursion theorem

1. Feb 19, 2012

### martijnh

Hi all,

I'm doing some self-study in set theory, but got kind of stuck on a proof my textbook gives of the so-called Recursion Theorem:

What I get from this is:
Let $\phi$ be a function that maps the result of a function that maps natural numbers to the set $a$ to the result of another function that does the same.

If you picked two functions $f$ and $g$ from the function set $a^{\mathbb{N}}$ and these returned the same result for an element $n$, then the set of functions $\phi$ would yield the same result given $f$ or $g$, using $n$ as input.

They then continue to use this to show there is a so-called fixed point in the set of functions $\phi$, which yields itself given itself as input to $\phi$. But I don't see how this follows from the facts described above...

Then they go on to prove there can be only one such fixed point in the set of functions $\phi$, but this makes even less sense to me:

I was hoping someone could explain this theorem and proof in a somewhat less abstract fashion and maybe point out flaws in my reasoning?

Any help is greatly appreciated!

Thanks!

Martijn

2. Feb 20, 2012

### AKG

They construct the fixed point of $\phi$. They call this fixed point $L_0$, and they give you a recursive definition of this function, although admittedly their description is a bit confusing because it involves the intermediate definition of the functions $\phi_n$.

Let's make sure you understand the basics before tackling the construction of $L_0$, i.e. make sure you understand what I've written above, the corrections and additions to your statements. Be especially clear about the fact that $a^{\mathbb{N}}$ is a set of functions, and that a function $a^{\mathbb{N}}\to a^{\mathbb{N}}$ is a function whose inputs and outputs are themselves functions! Whole functions are being treated as individual objects, which can serve as inputs and outputs of yet other functions. You should also be clear that the $\phi$ in the statement of the theorem is not a particular function. Rather, the theorem says that if $\phi$ is any function of functions which has a particular nice property, then it has a unique fixed point.
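Concretely, in Python (a toy sketch of my own, not from your textbook), a "function of functions" looks like this:

```python
# phi eats an entire function f (a map N -> int) and spits out a new
# function; both its input and its output are whole functions.
def phi(f):
    def g(n):
        return f(n) + 1  # the output function shifts every value of f by 1
    return g

double = lambda n: 2 * n   # one element of int^N, treated as an object
shifted = phi(double)      # phi's input and output are whole functions
print(shifted(3))          # prints 7, i.e. 2*3 + 1
```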

3. Feb 20, 2012

### martijnh

Thank you very much, that certainly helped!

So the corrected line of thinking should be: if there are two functions in a set of functions which return the same results for inputs up to $n$, then $\phi$ yields the same result at $n$ on either of them, i.e. $\phi(f)(n) = \phi(g)(n)$.

I think I get the statement of the theorem: given a set of functions, there exists at most one input (a function) on which the function of functions yields this same input (the same function). And I think I understand the proof by contradiction they provide to show there can be only one such function.

But I'm still lost in the details:

$L_\phi(n^+)$ maps $n^+$ to $a$
$\phi_{n^+}$ is the set of functions that map $n^+$ to $a$
$L_\phi|_{n^+}$ is the set of results of $L_\phi$ from $0$ up to $n$

So how can $L_\phi(n^+)$ be equal to $\phi_{n^+}(L_\phi|_{n^+})$?

Thanks again!

Martijn

4. Feb 20, 2012

### SteveL27

You might find it helpful to visualize a function from $\mathbb{N} \to X$ as a sequence of elements of $X$. That lets you see more clearly what they're getting at.
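For instance (a tiny Python illustration of my own):

```python
# A function f: N -> X can be pictured as the infinite sequence
# f(0), f(1), f(2), ...; listing its first few terms makes this concrete.
square = lambda n: n * n        # an element of X^N with X = int
first_terms = [square(n) for n in range(6)]
print(first_terms)              # [0, 1, 4, 9, 16, 25]
```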

5. Feb 21, 2012

### martijnh

Hmmm, if I simplify the case and substitute a function that maps $\mathbb{N}$ to $\mathbb{N}$ for the set of functions, and elements of $\mathbb{N}$ for the function inputs $f$ and $g$... it would basically say something very trivial: a function maps each input to one output, so if the input equals the output, that would be the fixed point??

6. Feb 21, 2012

### AKG

Hmmm, what you're saying definitely isn't right. You're not mentioning $\phi$ at all.

We've got our set of natural numbers $\mathbb{N} = \{ 0, 1, 2, \dots \}$. And then we've got some other random set $a$. Now we can talk about functions from $\mathbb{N}$ to $a$. We can imagine the collection of all such functions, and we call said collection $a^{\mathbb{N}}$. If $f\in a^{\mathbb{N}}$, then $f$ inputs natural numbers $n$, and outputs something $f(n) \in a$.

Next, we can think about these sort of monster functions, which eat up entire functions as input (i.e. their inputs are not merely numbers, but entire functions), and spit out entire functions. We're interested now in talking about monsters $\phi$ which input elements in $a^{\mathbb{N}}$ and output elements in $a^{\mathbb{N}}$.

$$\phi \in \left(a^{\mathbb{N}}\right)^{a^{\mathbb{N}}}$$

or, equivalently:

$$\phi : a^{\mathbb{N}} \to a^{\mathbb{N}}$$

But we're not interested in any old $\phi$. We're only interested in a certain kind of $\phi$, those that have a certain nice property. The Recursion Theorem says that if $\phi$ is any monster which has some particular nice property, then $\phi$ has a "fixed point," namely some input $f$ such that $\phi(f) = f$.

$$\forall \phi \in \left(a^{\mathbb{N}}\right)^{a^{\mathbb{N}}}\ \ \left(\phi\mbox{ nice }\Rightarrow \exists f \in a^{\mathbb{N}} : \phi(f) = f\right)$$

The first question is, what is this "nice property?" We'll say $\phi$ is nice if for any two functions $f,g\in a^{\mathbb{N}}$, if it so happens that for some $n\in\mathbb{N}$

$$f(0) = g(0), f(1) = g(1), \dots, f(n-1) = g(n-1)$$

(i.e. $f$ and $g$ "agree up to $n$"), then when the monster eats up $f$ and $g$, the resulting functions it spits out "agree at $n$", i.e. $\phi(f) (n) = \phi(g) (n)$.

Definition: For $\phi : a^{\mathbb{N}} \to a^{\mathbb{N}}$, we say $\phi$ is "nice" iff:

$$\forall f,g\in a^{\mathbb{N}} \forall n \in \mathbb{N} \left [\left(f(0) = g(0), \dots , f(n-1) = g(n-1)\right) \Rightarrow \phi(f) (n) = \phi(g) (n)\right]$$

There will be infinitely many nice monsters, and infinitely many not-nice monsters (assuming $|a|>1$).
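Here is a Python sketch of my own (not from any textbook) showing one nice monster and one not-nice monster side by side:

```python
# phi_nice(f)(n) looks only at the values f(0), ..., f(n-1), so it is
# "nice"; phi_bad(f)(n) peeks at f(n) itself, so it is not.
def phi_nice(f):
    return lambda n: sum(f(k) for k in range(n))  # depends on f below n only

def phi_bad(f):
    return lambda n: f(n) + 1                     # depends on f at n: not nice

f = lambda n: n                    # f and g agree up to 3 (at 0, 1, 2)...
g = lambda n: n if n < 3 else 99   # ...but differ at 3

print(phi_nice(f)(3), phi_nice(g)(3))  # outputs agree at 3: 3 3
print(phi_bad(f)(3), phi_bad(g)(3))    # outputs disagree at 3: 4 100
```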

Again, let me stop here and make sure you've understood all this correctly before I go on and explain the proof of the theorem. There's a lot more to say about better ways to understand functions $\mathbb{N} \to a$, better way to understand nice monsters $a^{\mathbb{N}} \to a^{\mathbb{N}}$, and understanding how the Recursion Theorem justifies the process of "defining functions by recursion" (e.g. you could define the Fibonacci function by $f(0) = 0, f(1) = 1, f(n+2) = f(n+1) + f(n)$, but how do you know there exists a unique function satisfying this definition? It seems obvious, but the rigorous answer is: The Recursion Theorem). But let's make sure we're on the same page with the basics first.
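To connect this to the Fibonacci example, here is a sketch (my own code, not the textbook's construction) of a nice monster whose fixed point is the Fibonacci sequence, with the fixed point built up one value at a time in the spirit of the theorem's proof:

```python
# The Fibonacci recursion packaged as a monster: phi(f)(n) consults only
# values f(k) for k < n, so phi is nice, and the Recursion Theorem
# guarantees a unique fixed point L with phi(L) = L.
def phi(f):
    def g(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        return f(n - 1) + f(n - 2)
    return g

def fixed_point(phi, upto):
    """Compute the first `upto` values of phi's fixed point: since phi is
    nice, L(n) is determined by L's values below n."""
    values = []
    f = lambda k: values[k]        # the partial information gathered so far
    for n in range(upto):
        values.append(phi(f)(n))
    return values

print(fixed_point(phi, 10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```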

Last edited: Feb 21, 2012

7. Feb 23, 2012

### martijnh

Thank you very much for this lucid explanation, I wish my textbook was that clear!

I still wonder about the proof and the practical usefulness of the theorem, though!

PS: I did not mention $\phi$ in my previous post, as I thought Steve suggested I should look at it in a simplified way, where $\phi$ would be analogous to a function $\mathbb{N} \rightarrow \mathbb{N}$ and the input functions $f$ and $g$ analogous to elements of $\mathbb{N}$ (so comparing the construct to a regular function). But I now understand the gist of the Recursion Theorem to be about a property of functions.

Last edited: Feb 23, 2012