Math Challenge by Erland #2

Greg Bernhardt

Admin
Submitted and judged by: @Erland
Solution credit: @SSequence for a), b), c), d)

RULES:
1) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored.
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
3) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
4) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.

CHALLENGE:
Definitions:
The following functions from $\mathbb N^k$ to $\mathbb N$, with $k\ge 0$, are called basic functions:

B1) The zero constant function $Z:\mathbb N^0\to \mathbb N$, given by $$Z()=0$$(so $Z$ is actually the constant $0$).

B2) For all $n\ge 0$ and $i$ such that $1\le i\le n$, the i:th n-ary projection function $P_i^n:\mathbb N^k\to\mathbb N$, given by $$P_i^n(x_1, x_2,\dots, x_n)=x_i,$$ for all $x_1,x_2,\dots,x_n\in \mathbb N$.

B3) The successor function $S:\mathbb N^1\to \mathbb N$, given by $$S(x_1)=x_1+1,$$ for all $x_1\in \mathbb N$.

We obtain new functions from old ones by applying the following operations:

O1) Composition: For all $m,n\ge 0$ and all functions $g:\mathbb N^m\to \mathbb N$ and $h_1,h_2,\dots, h_m:\mathbb N^n\to \mathbb N$, we obtain the function $f:\mathbb N^n\to \mathbb N$, given by
$$f(x_1,x_2,\dots, x_n)=g(h_1(x_1,x_2,\dots,x_n),h_2(x_1,x_2,\dots,x_n),\dots,h_m(x_1,x_2,\dots,x_n)),$$
for all $x_1,x_2,\dots,x_n\in \mathbb N$.

O2) Primitive recursion: For all $n\ge 0$ and all functions $g:\mathbb N^n\to\mathbb N$ and $h:\mathbb N^{n+2}\to\mathbb N$, we obtain the (unique) function $f:\mathbb N^{n+1}\to\mathbb N$ which satisfies the following conditions:

i)
$$f(x_1,x_2,\dots,x_n,0)=g(x_1,x_2,\dots,x_n),$$
for all $x_1,x_2,\dots,x_n\in\mathbb N$,

ii)
$$f(x_1,x_2,\dots,x_n,x_{n+1}+1)=h(x_1,x_2,\dots, x_n, x_{n+1},f(x_1,x_2,\dots,x_n,x_{n+1})),$$
for all $x_1,x_2,\dots,x_n,x_{n+1}\in\mathbb N$.

A function $f:\mathbb N^n\to \mathbb N$, for any $n\ge 0$, is called primitive recursive if it can be obtained from the basic functions B1-B3 by finitely many applications of the operations O1 and O2.
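The basic functions and the two operations above can be sketched in Python as higher-order functions (a rough sketch; the names `P`, `compose`, and `primrec` are mine, not part of the formal definitions):

```python
def Z():
    return 0                      # B1: zero constant

def S(x):
    return x + 1                  # B3: successor

def P(n, i):
    # B2: the i:th n-ary projection P_i^n (1-indexed)
    return lambda *xs: xs[i - 1]

def compose(g, *hs):
    # O1: f(x) = g(h_1(x), ..., h_m(x))
    return lambda *xs: g(*(h(*xs) for h in hs))

def primrec(g, h):
    # O2: f(xs, 0) = g(xs); f(xs, y+1) = h(xs, y, f(xs, y))
    def f(*args):
        *xs, y = args
        acc = g(*xs)
        for t in range(y):
            acc = h(*xs, t, acc)
        return acc
    return f

# Example: addition, built from the basics only:
# add(x, 0) = P_1^1(x); add(x, y+1) = S(P_3^3(x, y, add(x, y)))
add = primrec(P(1, 1), compose(S, P(3, 3)))
```

For instance, `add(3, 4)` evaluates to `7` by applying the successor four times to `3`.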

Ackermann's function is the (unique) function $A:\mathbb N^2\to \mathbb N$ which satisfies the following conditions:

a) $A(0,n)=n+1$,
b) $A(m+1,0)=A(m,1)$,
c) $A(m+1,n+1)=A(m,A(m+1,n))$,

for all $m,n\in \mathbb N$.
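For experimenting with small values, Ackermann's function can be evaluated with an explicit stack (a sketch; plain recursion also works but overflows quickly for $m\ge 3$):

```python
def A(m, n):
    # Ackermann's function A(m, n), conditions a)-c) above,
    # evaluated iteratively with a stack of pending first arguments
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                   # a) A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)      # b) A(m+1, 0) = A(m, 1)
            n = 1
        else:
            stack.append(m - 1)      # c) A(m+1, n+1) = A(m, A(m+1, n))
            stack.append(m)
            n -= 1
    return n
```

For small `n` one can check the closed forms `A(1, n) == n + 2`, `A(2, n) == 2*n + 3` and `A(3, n) == 2**(n + 3) - 3`.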

a) Prove that the factorial function $n!$ is primitive recursive.
b) Prove that the "truncated subtraction" $f(a,b) = \max\{a-b,0\}$ is primitive recursive.
c) Prove that to each primitive recursive function $f:\mathbb N^n\to \mathbb N$, for any $n\ge 0$, there is an $m\in\mathbb N$ such that
$$f(x_1,x_2,\dots,x_n)\le A(m, \max_{1\le i\le n} x_i),$$
for all $x_1,x_2,\dots, x_n\in\mathbb N$.
d) Prove that $A$ is not primitive recursive.


Erland

Science Advisor
It should be emphasized that to solve these problems, one needs to establish several auxiliary results. This will take quite a lot of effort.

SSequence

Here is the answer to the first two parts. To be honest, the first two parts are quite easy compared to the third and fourth.

But since the solutions I am posting are in a slightly different form from what one usually tends to encounter (in the form of equations), they might be interesting enough to post.
-- x, x1, x2,...etc. denote input variables
-- y denotes output variable
-- all variable values start from 0 by default (also no need to declare etc.)
-- all variables have local scope (limited to the given function)

(b)

function assign // one argument
  y=0
  Loop x
    y=y+1
  End
End

function subtract1 // one argument
  t=0
  y=0
  Loop x
    Loop t
      y=y+1
    End
    t=0
    t=t+1
  End
End

function subtract // two arguments
  y=assign(x1)
  Loop x2
    y=subtract1(y)
  End
End

(a)

function multiply // two arguments
  Loop x1
    Loop x2
      y=y+1
    End
  End
End

function factorial // one argument
  y=0
  y=y+1
  Loop x
    y=multiply(x,y)
    x=subtract1(x)
  End
End
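Under the usual Loop-language convention that `Loop x` repeats its body the number of times given by the value of `x` at loop entry, the programs above can be transcribed into Python (a sketch for sanity-checking; `for _ in range(x)` captures `x` on entry, matching that convention):

```python
def assign(x):            # y = x, by counting up from 0
    y = 0
    for _ in range(x):
        y += 1
    return y

def subtract1(x):         # y = max(x - 1, 0)
    t, y = 0, 0
    for _ in range(x):
        for _ in range(t):
            y += 1
        t = 0
        t += 1
    return y

def subtract(x1, x2):     # y = max(x1 - x2, 0)
    y = assign(x1)
    for _ in range(x2):
        y = subtract1(y)
    return y

def multiply(x1, x2):     # y = x1 * x2
    y = 0
    for _ in range(x1):
        for _ in range(x2):
            y += 1
    return y

def factorial(x):         # y = x!
    y = 0
    y += 1
    for _ in range(x):    # runs for the initial value of x
        y = multiply(x, y)
        x = subtract1(x)
    return y
```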

===========

I think I have probably (not fully sure) seen the proof for the third and fourth parts, but as I recall, I was only able to follow it mechanically, without really understanding the intuition behind it. And I think it would be sort of cheating anyway to "look up" a proof and then post it.

Perhaps it might be a good idea to first consider part (c) just for simple values such as n=1 and n=2, I suppose?

Erland

Science Advisor
Here is the answer to first two parts. To be honest, the first two parts are quite easy compared to the third and fourth.
...
Please, try to express your functions in terms of the given basic functions and operations. It is not entirely clear how your programs relate to these.

c) and d) require a lot more work, for sure.

SSequence

Please, try to express your functions in terms of the given basic functions and operations. It is not entirely clear how your programs relate to these.
Well, I could just write down the equations, but I personally find this form easier to remember.

As for your comment: as an example, look at the function "subtract" above. It uses two other functions (and both of those are used in only one place). It is fairly clear that with variable renaming you can easily convert this to just one main function without using macros (only that would be much less transparent).

In general, this works as long as the dependence between the function calls (for the functions used in our main function, that is) is acyclic; this can easily be shown in general here.

As for the justification for using programs: the equivalence of "programs" and "functions" is already a fairly well-known result. In fact, to show that given functions are p.r., only the reduction from "programs" to "functions" is strictly required here (and not the other way around, even though that reduction is correct too).

EDIT: Alright, I briefly looked up the "programs" to "functions" proof that I read some time ago (about 1.5 years). Your point is fair in the sense that the conversion proof already uses a lot of difficult steps and p.r. functions that are complicated enough (much more so than the ones in (a) and (b)).

I just wanted to emphasize that once you have accepted the equivalence of "functions" and "programs", the approach of using programs is much simpler. This is especially true for functions that are far more complicated than the ones in (a) and (b). It would also be true for functions more complicated than the ones used in the proof of the equivalence (though I have never really seen anyone use such p.r. functions).


Erland

Science Advisor
Yes, all these results are well known, but I would like a proof based on the formalism given. I don't want to translate between a program approach and an equation approach, unless you prove some general theorem about the equivalence of these approaches, and proving such a theorem is unnecessary work here.

Here is an example of the kind of proof I would prefer, regarding your assign function:

For each $k\ge 0$, the constant with $0$ arguments $a_k^0$, given by $a_k^0()=k$, is primitive recursive.
This is clear for $k=0$, since $a_0^0$ is the basic function $Z$.
For each $k\ge 0$, if $a_k^0$ is primitive recursive, so is $a_{k+1}^0$, since it could be obtained by O1 (composition) with $S$ (basic function) as $g$ and $a_k^0$ as $h_1$ ($m=1$ and $n=0$ here).
By induction, $a_k^0$ is primitive recursive for all $k\ge 0$.

These are functions with $0$ arguments, i.e. constants. But the constant functions with any number of arguments are also primitive recursive: $a_k^m:\mathbb N^m\to \mathbb N$ ($m\ge 0$), given by $a_k^m(x_1,x_2,\dots,x_m)=k$. We see this if we apply O1 with $a_k^0$ as $g$, taking the $m$ in O1 to be $0$ (thus there are $0$ $h$-functions in this case) and the $n$ in O1 to be the desired number of arguments.
If one is suspicious against this kind of "vacuous" arguments, a proof using O2 (recursion) can be given instead.

SSequence

Yes, all these results are well known, but I would like a proof based on the formalism given.
...
The "assign" function is the identity function $f(x)=x$.

Here is the function "subtract" from (b), using function equations:
$$\mathrm{subtract1}(0)=Z(),$$
$$\mathrm{subtract1}(x+1)=P_1^2(x,\mathrm{subtract1}(x)),$$
$$\mathrm{subtract}(x_1,0)=P_1^1(x_1),$$
$$\mathrm{subtract}(x_1,x_2+1)=h_1(x_1,x_2,\mathrm{subtract}(x_1,x_2)).$$
Here $h_1$ is the function (using composition):
$$h_1(x_1,x_2,x_3)=\mathrm{subtract1}(P_3^3(x_1,x_2,x_3)).$$
Here $P_i^n$ denotes the $i$:th $n$-ary projection function from the OP.
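The equations translate directly into code: $\mathrm{subtract1}$ throws away the recursive value and keeps the recursion variable, and $\mathrm{subtract}$ iterates it. A quick Python check of the recursion scheme (a sketch, not part of the formalism):

```python
def subtract1(x):
    # subtract1(0) = Z(); subtract1(x+1) = P_1^2(x, subtract1(x)) = x
    acc = 0                   # Z()
    for t in range(x):
        acc = t               # P_1^2(t, acc)
    return acc

def subtract(x1, x2):
    # subtract(x1, 0) = P_1^1(x1);
    # subtract(x1, x2+1) = h_1(x1, x2, subtract(x1, x2)) = subtract1(subtract(x1, x2))
    acc = x1                  # P_1^1(x1)
    for t in range(x2):
        acc = subtract1(acc)  # h_1(x1, t, acc)
    return acc
```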

Erland

Science Advisor
The "assign" function is the identity function f(x)=x.

Here is function in (b) "subtract" using function equations:
...
That's the way I like it! :)
Good, you'll get the credit for b).
You surely can do a) (factorial) also, in a few steps...

For c) and d), one needs (as far as I know) to prove some general properties of Ackermann's function, and use these.

SSequence

Yeah, I think I should complete (a) now, since I have already written the answer for (b).

But anyway, I'll probably first write down (c) on a piece of paper (otherwise I am going to forget the statement), so I can ponder it a bit in spare time. However, I probably won't be able to solve it, so experts can write a solution for (c) and (d) if they want to.

Anyway, coming back to (a): let's first write down some preliminary functions for the factorial. I am going to label all the auxiliary functions $h_2,h_3,h_4$, etc.

As a preliminary, let's first construct the following two functions (both will be needed for the factorial, I think):
$$\mathrm{add}(x_1,x_2)=x_1+x_2,\qquad \mathrm{multiply}(x_1,x_2)=x_1\cdot x_2.$$

We have:
$$\mathrm{add}(x_1,0)=P_1^1(x_1),$$
$$\mathrm{add}(x_1,x_2+1)=h_2(x_1,x_2,\mathrm{add}(x_1,x_2)),$$
where intuitively $h_2(x_1,x_2,x_3)=x_3+1$. Here is the equation for $h_2$ (using composition):
$$h_2(x_1,x_2,x_3)=S(P_3^3(x_1,x_2,x_3)),$$
where $S$ is just the successor function.

For multiplication we have:
$$\mathrm{multiply}(x_1,0)=Z()$$
(strictly speaking, the $1$-ary constant zero $a_0^1$ from the post above),
$$\mathrm{multiply}(x_1,x_2+1)=h_3(x_1,x_2,\mathrm{multiply}(x_1,x_2)),$$
where intuitively $h_3(x_1,x_2,x_3)=x_1+x_3$. So now we write the equations for $h_3$:
$$h_3(x_1,x_2,0)=P_1^2(x_1,x_2),$$
$$h_3(x_1,x_2,x_3+1)=h_4(x_1,x_2,x_3,h_3(x_1,x_2,x_3)),$$
where intuitively $h_4(x_1,x_2,x_3,x_4)=x_4+1$. Using composition, we can write this in the required form:
$$h_4(x_1,x_2,x_3,x_4)=S(P_4^4(x_1,x_2,x_3,x_4)).$$

edit: there might perhaps be minor errors here and there. I am going to complete the required function in the next post.

SSequence

Now we have:
$$\mathrm{factorial}(0)=\mathrm{one}(),$$
$$\mathrm{factorial}(x+1)=h_5(x,\mathrm{factorial}(x)).$$

Here we have two new functions, "one" and $h_5$. Let's first note that we have (by composition):
$$\mathrm{one}()=S(Z()).$$
Intuitively $h_5$ is the function $h_5(x_1,x_2)=(x_1+1)\cdot x_2$. Writing the equations:
$$h_5(x_1,0)=Z(),$$
$$h_5(x_1,x_2+1)=h_6(x_1,x_2,h_5(x_1,x_2)),$$
where intuitively $h_6(x_1,x_2,x_3)=x_1+x_3+1$. Writing the equations:
$$h_6(x_1,x_2,0)=h_7(x_1,x_2),$$
$$h_6(x_1,x_2,x_3+1)=h_4(x_1,x_2,x_3,h_6(x_1,x_2,x_3)).$$
The function $h_4$ was already described in the previous post. Let's note that intuitively $h_7$ is the function $h_7(x_1,x_2)=x_1+1$. Writing the equation for $h_7$ (using composition):
$$h_7(x_1,x_2)=S(P_1^2(x_1,x_2)).$$

EDIT: So the add and multiply functions weren't explicitly used, it seems.

I was thinking that the simplest definition should be:
$$\mathrm{factorial}(0)=1,$$
$$\mathrm{factorial}(x+1)=\mathrm{multiply}(S(x),\mathrm{factorial}(x)),$$
and similarly for multiplication:
$$\mathrm{multiply}(x,0)=0,$$
$$\mathrm{multiply}(x,y+1)=\mathrm{add}(x,\mathrm{multiply}(x,y)).$$
It seems that even though neither definition fits directly into the required form for p.r. functions, there are probably results that let us incorporate these kinds of definitions too (but I don't remember anything about them, though).
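Unwinding the chain $h_7\to h_6\to h_5\to \mathrm{factorial}$ numerically (a Python sketch of the recursions above; each `for` loop plays the role of one primitive recursion):

```python
def h7(x1, x2):
    return x1 + 1                 # h7(x1, x2) = x1 + 1

def h6(x1, x2, x3):
    # h6(x1, x2, 0) = h7(x1, x2); h6(x1, x2, t+1) = h6(x1, x2, t) + 1
    acc = h7(x1, x2)
    for _ in range(x3):
        acc = acc + 1             # h4 adds one
    return acc                    # equals x1 + x3 + 1

def h5(x1, x2):
    # h5(x1, 0) = 0; h5(x1, t+1) = h6(x1, t, h5(x1, t))
    acc = 0
    for t in range(x2):
        acc = h6(x1, t, acc)
    return acc                    # equals (x1 + 1) * x2

def factorial(x):
    # factorial(0) = one(); factorial(t+1) = h5(t, factorial(t))
    acc = 1                       # one() = S(Z())
    for t in range(x):
        acc = h5(t, acc)
    return acc
```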


Erland

Science Advisor
Very good, SSequence, the credit for a) is yours.

A couple of years ago, I used to write definitions of this kind in Maple, and soon got lost among all the auxiliary functions. For no reason at all except fun. I can't do this anymore, since I don't have a free version of Maple.


Erland

Science Advisor
I was thinking that the simplest definition should be:
factorial(0) = 1
factorial(x+1) = multiply(S(x),factorial(x))
...
Something like this, perhaps:
$$\mathrm{add13}(x,y,z)=\mathrm{add}(P_1^3(x,y,z),P_3^3(x,y,z)),$$
$$\mathrm{multiply}(x,0)=0,$$
$$\mathrm{multiply}(x,y+1)=\mathrm{add13}(x,y,\mathrm{multiply}(x,y)),$$
$$S_1^2(x,y)=S(P_1^2(x,y)),$$
$$\mathrm{multiaux}(x,y)=\mathrm{multiply}(S_1^2(x,y),P_2^2(x,y)),$$
$$\mathrm{factorial}(0)=\mathrm{one}(),$$
$$\mathrm{factorial}(x+1)=\mathrm{multiaux}(x,\mathrm{factorial}(x)).$$

Still a lot of auxiliary functions, but the advantage is that these can be defined just by composition.


SSequence

A rough idea, it seems, would be to compare the following two:
(i) The columns of the function $A(x,y)$. Given the answer posted to the high school question, it seems that a column on the right always dominates (eventually grows faster than) a column on the left.
(ii) The set of p.r. functions whose "derivation tree" has length less than or equal to a given finite value.

If one can establish a link between these two, then the result might be less daunting to prove.

Erland

Science Advisor
When I proved c) for myself, I used induction on the complexity of the functions: I first proved that the statement is true for the basic functions; then I proved that if the statement holds for the functions a composite function is built up from, then it holds for the composite function too; and also that if it holds for the two functions used in a recursion application, then it holds for the function defined by that recursion.

Perhaps you could try this approach too...

And then, it is not hard to prove that d) follows from c).


SSequence

Well, I didn't really get a lot of time to work on this problem further after my last post here.

Looking a bit further, it seems that the variable $m$ in part c) of the OP should correspond, in some sense, to the depth of the derivation tree.

As I mentioned about columns in the previous post: if we define $C_m:\mathbb N\to\mathbb N$ by
$$C_m(x)=A(m,x),$$
then we have, from the definition of $A$:
$$C_{m+1}(0)=C_m(1),$$
$$C_{m+1}(x+1)=C_m(C_{m+1}(x)).$$
And we can formally convert this to the primitive recursive form.

Looking at it from a bottom-up perspective, first define a function
$$k_1(x_1,x_2)=C_m(P_2^2(x_1,x_2)).$$
Now we have done one composition. Next we write the recursion:
$$C_{m+1}(0)=C_m(1),$$
$$C_{m+1}(x+1)=k_1(x,C_{m+1}(x)).$$

The point being that the function $C_{m+1}$ is obtained from $C_m$ by:
-- no more than one composition
-- no more than one recursion
So we define all p.r. functions at some "depth" in the following manner:
-- depth 0 corresponds to the basic functions
-- depth 1 corresponds to application of either a composition or a recursion to p.r. functions of depth 0 (functions already obtained at depth 0 are not included)
......
-- depth $m+1$ corresponds to application of either a composition or a recursion to p.r. functions of depth $m$ or lower (functions already obtained at depth $m$ or lower are not included)

It might be worth pointing out that a single function will likely have several different derivations (possibly of varying length). However, the smallest derivation determines the depth of the function.

It seems that the variable $m$ in part c) should correspond with the depth defined above (in some way), but I haven't figured out how to make the relation precise. After making that precise, the next step would be to do the induction.

EDIT: Modified the definition of depth to make it simpler. The previous definition was confusing.


SSequence

So, continuing from the previous post, I think here is the precise claim we want to prove:

For every p.r. function $f:\mathbb N^n\to\mathbb N$ (where $n\ge 1$) of depth less than or equal to** $d$, the following is true:
$$f(x_1,\dots,x_n)\le C_d(M)$$
for all values $x_1,\dots,x_n\in\mathbb N$, where $M=\max(x_1,\dots,x_n)$.

I deliberately left out $n=0$; that case can be proved separately, I guess.

Now, to see that this claim necessarily proves part c) in the OP (for $n\ge 1$), note that every p.r. function has some well-defined finite depth (given by its smallest derivation). We can then set the value of $d$ in the claim equal to or greater than the depth of the function.

Well, let's at least cover the base case :P ...

$d=0$: We have $C_0(x)=x+1$ (from the definition of $A$). Let's look at all the basic functions of one or more arguments:
$$S(x)=x+1,$$
$$P_i^n(x_1,\dots,x_n)=x_i\quad\text{(projection functions).}$$
We have:
$$S(x)\le C_0(x)=C_0(M).$$
(The variable $M$ was described above and I will use it the same way throughout.) Secondly, for the projection functions:
$$P_i^n(x_1,\dots,x_n)=x_i\le M<M+1=C_0(M).$$
This covers the base case. Note that the inequalities above are true for all possible input values.

EDIT: Corrected some confusions.

** I was wondering whether writing just "equal to" instead of "less than or equal to" would make the claim any easier to prove. I guess that will be more apparent while working out the inductive case.


SSequence

Now, before coming back to the inductive case for the statement in post #17, a few basic facts need to be proved. It is very likely that these will be required for that inductive case, but they also have some value by themselves.

Note that (i) uses (ii), and (ii) uses (iii). I will have to re-check all of this for errors later.

We need to prove these by induction:

(i) A column to the right is strictly greater than a column to the left, for all input values. That is, for all $d\in\mathbb N$:
$$C_{d+1}(x)>C_d(x)\quad\text{for all input values.}$$

Base case: $d=0$. Rewriting the basic equations from post #16:
$$C_{m+1}(0)=C_m(1),\qquad C_{m+1}(x+1)=C_m(C_{m+1}(x)).$$
Putting $m=0$:
$$C_1(0)=C_0(1),\qquad C_1(x+1)=C_0(C_1(x)).$$
Recall that $C_0(x)=x+1$. So we have:
$$C_1(0)=C_0(1)=1+1=2,$$
$$C_1(x+1)=C_0(C_1(x))=C_1(x)+1.$$
Hence
$$C_1(x)=2+x>1+x=C_0(x)\quad\text{for all }x.$$

Inductive step: We assume the proposition for the value $d$ and prove it for $d+1$. The assumption is:
$$C_{d+1}(x)>C_d(x)\quad\text{for all input values.}$$
To prove:
$$C_{d+2}(x)>C_{d+1}(x)\quad\text{for all input values.}$$

Rewriting the basic equations from post #16 for the value $d$:
$$C_{d+1}(0)=C_d(1),\qquad C_{d+1}(x+1)=C_d(C_{d+1}(x)),$$
and for the value $d+1$:
$$C_{d+2}(0)=C_{d+1}(1),\qquad C_{d+2}(x+1)=C_{d+1}(C_{d+2}(x)).$$

Now we have a second induction, on a variable $t$ say; we have to prove:
$$C_{d+2}(t)>C_{d+1}(t)\quad\text{for all }t.$$

Current base step: $t=0$.
$$C_{d+2}(0)=C_{d+1}(1)>C_d(1)=C_{d+1}(0)$$
(main inductive assumption used), so $C_{d+2}(0)>C_{d+1}(0)$.

Current inductive assumption: $C_{d+2}(t)>C_{d+1}(t)$. To prove: $C_{d+2}(t+1)>C_{d+1}(t+1)$.

$$C_{d+2}(t+1)=C_{d+1}(C_{d+2}(t))>C_d(C_{d+2}(t))$$
(main inductive assumption used). Now, because $C_{d+2}(t)>C_{d+1}(t)$ (current inductive assumption), it follows, using the fact from (ii) that $C_d$ is strictly increasing, that
$$C_d(C_{d+2}(t))>C_d(C_{d+1}(t))=C_{d+1}(t+1)$$
(the last equality follows from the basic equation). Hence:
$$C_{d+2}(t+1)>C_{d+1}(t+1).$$

(ii) The function $C_d:\mathbb N\to\mathbb N$ is strictly increasing for all values of $d$. That is, for all $d\in\mathbb N$:
$$C_d(x+1)>C_d(x)\quad\text{for all }x\in\mathbb N.$$

Base case: $d=0$. We have $C_0(x)=x+1$, so:
$$C_0(x+1)=x+2>x+1=C_0(x)\quad\text{for all }x.$$

Inductive step: We assume the proposition for $d$ and have to prove it for $d+1$. So the assumption is $C_d(x+1)>C_d(x)$ for all $x$; to prove: $C_{d+1}(x+1)>C_{d+1}(x)$ for all $x$.

Rewriting the basic equation from post #16:
$$C_{d+1}(x+1)=C_d(C_{d+1}(x))>C_{d+1}(x)$$
(follows from (iii)). Hence the inductive step is shown too.

(iii) In proving (ii), the following proposition was used: for all $d\in\mathbb N$,
$$C_d(x)>x\quad\text{for all }x.$$
Base case: $d=0$. $C_0(x)=x+1>x$ for all $x$.

Inductive step: Assumption: $C_d(x)>x$ for all $x$. To prove: $C_{d+1}(x)>x$ for all $x$.

Doing a second induction on a variable $t$ say, we have to prove:
$$C_{d+1}(t)>t\quad\text{for all }t.$$

Current base step: $t=0$.
$$C_{d+1}(0)=C_d(1)>1>0$$
(using the main inductive assumption).

Current inductive step: Assumption: $C_{d+1}(t)>t$. To prove: $C_{d+1}(t+1)>t+1$.

Writing the equation:
$$C_{d+1}(t+1)=C_d(C_{d+1}(t))=C_d(u),\quad\text{where }u>t$$
(current inductive assumption used). Now $C_d(u)=v$ where $v>u$ (main inductive assumption used). Because $v>u>t$, we must have $v>t+1$. Hence:
$$C_{d+1}(t+1)=v>t+1.$$
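Propositions (i)-(iii) can be spot-checked numerically for the first few columns (a Python sketch, reusing the column construction from post #16; `next_column` is my name for the step):

```python
def C0(x):
    return x + 1

def next_column(Cm):
    # C_{d+1}(0) = C_d(1); C_{d+1}(x+1) = C_d(C_{d+1}(x))
    def Cnext(x):
        acc = Cm(1)
        for _ in range(x):
            acc = Cm(acc)
        return acc
    return Cnext

cols = [C0]
for _ in range(3):
    cols.append(next_column(cols[-1]))   # C_0, C_1, C_2, C_3

ok = all(
    cols[d + 1](x) > cols[d](x)          # (i)  columns grow to the right
    and cols[d](x + 1) > cols[d](x)      # (ii) each column strictly increasing
    and cols[d](x) > x                   # (iii) each column dominates the identity
    for d in range(3) for x in range(8)
)
```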

SSequence

Well, now let's proceed and get back to the statement in post #17. As already mentioned, that statement proves part c) in the OP. But it seems that two changes should be made:
(1) It seems better to include the case $n=0$ in the statement (as long as we are careful to clarify its meaning).
(2) After doing some rough work, it seems that the statement needs to be revised slightly: $C_{2d}(M)$ should replace $C_d(M)$. I have only worked through composition so far, so I don't know whether the statement needs to be changed further for recursion or not.

The base case was already shown for that statement (in post #17). We now want the part of the inductive step that relates to composition. Rewriting the statement:

For every p.r. function $f:\mathbb N^n\to\mathbb N$ (where $n\ge 0$) of depth less than or equal to $d$, the following is true:
$$f(x_1,\dots,x_n)\le C_{2d}(M)$$
for all values $x_1,\dots,x_n\in\mathbb N$, where $M=\max(x_1,\dots,x_n)$.

It might be noted here that when $n=0$, we define $M$ to be $0$ by convention. So for $n=0$ the inequality specifically becomes:
$$f()\le C_{2d}(0).$$
Also, for the base case ($d=0$), I didn't treat the zero function in post #17. We have:
$$Z()\le 0<1=C_0(0).$$

Inductive case (for composition):

For $n\ge 1$:
$$f(x_1,\dots,x_n)=g(h_1(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n)).$$
As before, define $M=\max(x_1,\dots,x_n)$.

Here, assume that $g,h_1,h_2,\dots,h_m$ are all functions of depth $d$ or less, so that $f$ is a function of depth $d+1$ or less. By the inductive assumption:
$$h_j(x_1,\dots,x_n)\le C_{2d}(M)\quad\text{for }j=1,2,\dots,m.$$
But now, what about
$$M_1=\max(h_1(x_1,\dots,x_n),h_2(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n))?$$
Due to the above inequalities, $M_1\le C_{2d}(M)$.

By the inductive assumption again, we have:
$$f(x_1,\dots,x_n)=g(h_1(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n))\le C_{2d}(M_1).$$
Now, because $M_1\le C_{2d}(M)$, it follows that
$$C_{2d}(M_1)\le C_{2d}(C_{2d}(M))$$
(this follows from proposition (ii) in post #18). Therefore:
$$f(x_1,\dots,x_n)\le C_{2d}(C_{2d}(M))\le C_{2d+2}(M).$$
It seems we will have to prove this last inequality separately; this is done later in the post. So we have:
$$f(x_1,\dots,x_n)\le C_{2d+2}(M)=C_{2(d+1)}(M).$$
Hence the inductive step is complete.

For $n=0$:
$$f()=g(h_1(),\dots,h_m()).$$
Once again $g,h_1,h_2,\dots,h_m$ are all functions of depth $d$ or less, so $f$ is a function of depth $d+1$ or less. By the inductive assumption:
$$h_j()\le C_{2d}(0)\quad\text{for }j=1,2,\dots,m,$$
so with $M_1=\max(h_1(),h_2(),\dots,h_m())$ we get $M_1\le C_{2d}(0)$. By the inductive assumption again:
$$f()=g(h_1(),\dots,h_m())\le C_{2d}(M_1)\le C_{2d}(C_{2d}(0))\le C_{2d+2}(0)=C_{2(d+1)}(0),$$
where the middle inequality follows from proposition (ii) in post #18, and the last inequality is again the one to be proved separately. Hence the inductive step is complete as far as composition is concerned. We still need to complete the inductive step for recursion.

====================================

Now let's try to show the statement used in the above inductive step for composition: for all $d\in\mathbb N$,
$$C_d(C_d(n))\le C_{d+2}(n)\quad\text{for all }n.$$

Base case: $d=0$. We have $C_0(n)=n+1$ and $C_2(n)=2n+3$ (the high school problem). So:
$$C_0(C_0(n))=C_0(n+1)=n+2\le 2n+3=C_2(n).$$

Inductive step: Assumption: $C_d(C_d(n))\le C_{d+2}(n)$ for all $n$. To prove: $C_{d+1}(C_{d+1}(n))\le C_{d+3}(n)$ for all $n$.

Now we need a second induction, with another variable, say $t$. To prove: $C_{d+1}(C_{d+1}(t))\le C_{d+3}(t)$ for all $t$.

Second base case: $t=0$. We have to show $C_{d+1}(C_{d+1}(0))\le C_{d+3}(0)$. We have:
$$C_{d+3}(0)=C_{d+2}(1)=C_{d+1}(C_{d+2}(0))$$
(basic equations). Further, we have:
$$C_{d+2}(0)>C_{d+1}(0)$$
(proposition (i) in post #18), so
$$C_{d+1}(C_{d+2}(0))>C_{d+1}(C_{d+1}(0))$$
(proposition (ii) in post #18). But the left-hand side of this inequality equals $C_{d+3}(0)$. Hence:
$$C_{d+1}(C_{d+1}(0))\le C_{d+3}(0).$$

Second inductive case: Assumption: $C_{d+1}(C_{d+1}(t))\le C_{d+3}(t)$. To prove: $C_{d+1}(C_{d+1}(t+1))\le C_{d+3}(t+1)$.

Let's look at the RHS first:
$$C_{d+3}(t+1)=C_{d+2}(C_{d+3}(t))\ge C_{d+2}(C_{d+1}(C_{d+1}(t)))$$
(basic equation; then the current inductive assumption combined with proposition (ii) in post #18). For the LHS, the basic equation and proposition (i) give
$$C_{d+1}(t+1)=C_d(C_{d+1}(t))\le C_{d+1}(C_{d+1}(t)),$$
so, applying proposition (ii) and then proposition (i) again:
$$C_{d+1}(C_{d+1}(t+1))\le C_{d+1}(C_{d+1}(C_{d+1}(t)))\le C_{d+2}(C_{d+1}(C_{d+1}(t)))\le C_{d+3}(t+1).$$

Hence the inductive part is complete.
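The inequality just proved can be spot-checked for small values (a Python sketch, again building the columns from the basic equations):

```python
def C0(x):
    return x + 1

def next_column(Cm):
    def Cnext(x):
        acc = Cm(1)              # C_{d+1}(0) = C_d(1)
        for _ in range(x):
            acc = Cm(acc)        # C_{d+1}(x+1) = C_d(C_{d+1}(x))
        return acc
    return Cnext

cols = [C0]
for _ in range(3):
    cols.append(next_column(cols[-1]))

# C_d(C_d(n)) <= C_{d+2}(n), checked for d = 0, 1 and small n
ok = all(
    cols[d](cols[d](n)) <= cols[d + 2](n)
    for d in range(2) for n in range(6)
)
```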

Erland

Science Advisor
Ok, I'll check it. But my situation is a bit chaotic right now, and it might take a few days...

SSequence

Ok I'll check it. But my situation is a bit chaotic right now and might take a few days...
Of course there is no hurry.

Inductive step for recursion:

We now proceed to the inductive step for recursion. Writing the equations:
$$f(x_1,\dots,x_n,0)=g(x_1,\dots,x_n),$$
$$f(x_1,\dots,x_n,t+1)=h(x_1,\dots,x_n,t,f(x_1,\dots,x_n,t)).$$

Let's just take the case $n\ge 1$. I think $n=0$ will be very similar (and would only double the length without adding anything new). Now let $M=\max(x_1,\dots,x_n)$.

The functions $g$ and $h$ have depth $d$ or lower, and therefore by the inductive hypothesis:
$$g(x_1,\dots,x_n)\le C_{2d}(M)\quad\text{for all inputs.}$$
Now note that, since $f$ is supposed to have depth $d+1$ or lower, we have to prove:
$$f(x_1,\dots,x_n,t)\le C_{2(d+1)}(\max(M,t))=C_{2d+2}(\max(M,t))\quad\text{for all inputs.}$$

Intuitively, we want to place some kind of limit on the function $f$ in terms of the column functions, so we first prove a side claim.

Side claim:
$$[C_{2d}]^{t+1}(M)\ge f(x_1,\dots,x_n,t)\quad\text{for all }x_1,\dots,x_n,t,$$
where $[C_{2d}]^{t+1}$ means the function $C_{2d}$ composed with itself $t+1$ times. We prove this by induction on $t$.

Base case (side claim): $t=0$, so $t+1=1$ and $[C_{2d}]^1=C_{2d}$. We have:
$$f(x_1,\dots,x_n,0)=g(x_1,\dots,x_n)\le C_{2d}(M),$$
which proves the base case.

Inductive step (side claim): Assumption: $[C_{2d}]^{t+1}(M)\ge f(x_1,\dots,x_n,t)$ for all possible input values. To prove: $[C_{2d}]^{t+2}(M)\ge f(x_1,\dots,x_n,t+1)$ for all possible input values.

$$f(x_1,\dots,x_n,t+1)=h(x_1,\dots,x_n,t,f(x_1,\dots,x_n,t)).$$
Now recall that, since $h$ is a function of depth $d$ or lower, by the main induction hypothesis we have:
$$h(x_1,\dots,x_n,t,f(x_1,\dots,x_n,t))\le C_{2d}(\max(M,t,f(x_1,\dots,x_n,t))).$$
By the induction hypothesis of the current side claim, $[C_{2d}]^{t+1}(M)\ge f(x_1,\dots,x_n,t)$. Therefore we can rewrite:
$$\max(M,t,f(x_1,\dots,x_n,t))\le\max(M,t,[C_{2d}]^{t+1}(M))\le[C_{2d}]^{t+1}(M).$$

Brief diversion: let's ponder the movement from the second to the third expression. We have:
$$C_{2d}(M)\ge M$$
(proposition (iii) in post #18), and then
$$C_{2d}(C_{2d}(M))\ge C_{2d}(M),\qquad C_{2d}(C_{2d}(C_{2d}(M)))\ge C_{2d}(C_{2d}(M)),$$
and so on. Carrying this over to $t+1$ applications, we conclude:
$$[C_{2d}]^{t+1}(M)\ge M.$$
Similarly:
$$C_{2d}(M)\ge M,\qquad C_{2d}(C_{2d}(M))\ge M+1,\qquad C_{2d}(C_{2d}(C_{2d}(M)))\ge M+2,$$
these following in line because each application of a column strictly increases the value (proposition (iii) in post #18). Therefore:
$$[C_{2d}]^{t+1}(M)\ge M+t\ge t.$$
=====
Coming back to the main point, we have:
$$f(x_1,\dots,x_n,t+1)=h(x_1,\dots,x_n,t,f(x_1,\dots,x_n,t))\le C_{2d}(\max(M,t,f(x_1,\dots,x_n,t))),$$
and
$$\max(M,t,f(x_1,\dots,x_n,t))\le[C_{2d}]^{t+1}(M),$$
so
$$C_{2d}(\max(M,t,f(x_1,\dots,x_n,t)))\le[C_{2d}]^{t+2}(M)$$
(using proposition (ii) in post #18). And therefore:
$$f(x_1,\dots,x_n,t+1)\le[C_{2d}]^{t+2}(M),$$
which proves the side claim.

Now let's come all the way back to the main claim (which we had to prove), stated towards the start of the post:
$$f(x_1,\dots,x_n,t)\le C_{2d+2}(\max(M,t))\quad\text{for all inputs.}$$
But now we also have our side claim:
$$[C_{2d}]^{t+1}(M)\ge f(x_1,\dots,x_n,t)\quad\text{for all inputs.}$$

So we note that if we can prove
$$[C_{2d}]^{t+1}(M)\le C_{2d+2}(\max(M,t))\quad\text{for all }x_1,\dots,x_n,t,$$
then the main claim stated above follows instantly.

To prove this, let's divide it into sub-cases:

(a) $t\ge M$. We have $\max(M,t)=t$. Since $M\le t$:
$$[C_{2d}]^{t+1}(M)\le[C_{2d}]^{t+1}(t)$$
(proposition (ii) in post #18), and
$$[C_{2d}]^{t+1}(t)\le C_{2d+2}(t);$$
it seems we need to prove this last inequality, which is done later in the post. Going further, $C_{2d+2}(t)=C_{2d+2}(\max(M,t))$. Therefore, following the series of inequalities, we have the required claim:
$$[C_{2d}]^{t+1}(M)\le C_{2d+2}(\max(M,t)).$$

(b) $t<M$. We have $\max(M,t)=M$.
$$[C_{2d}]^{t+1}(M)\le[C_{2d}]^{M+1}(M),$$
because $M+1>t+1$ and every additional application of the function $C_{2d}$ can only increase the value (due to proposition (iii) in post #18). Also,
$$[C_{2d}]^{M+1}(M)\le C_{2d+2}(M);$$
again, this last inequality is proved later in the post. Going further, $C_{2d+2}(M)=C_{2d+2}(\max(M,t))$. Therefore, following the series of inequalities, we have the required claim:
$$[C_{2d}]^{t+1}(M)\le C_{2d+2}(\max(M,t)).$$

Hence the induction is complete.

=============================

We used the claim: for all $d\in\mathbb N$,
$$[C_d]^{n+1}(n)\le C_{d+2}(n)\quad\text{for all }n.$$

Going with induction on $d$.

Base case: $d=0$. We have to show:
$$[C_0]^{n+1}(n)\le C_2(n)\quad\text{for all }n.$$
We have $C_0(n)=n+1$ and $C_2(n)=2n+3$ (the high school problem). Therefore:
$$[C_0]^1(n)=n+1,\quad[C_0]^2(n)=n+2,\quad[C_0]^3(n)=n+3,\quad\dots$$
$$[C_0]^{n+1}(n)=n+(n+1)=2n+1\le 2n+3=C_2(n).$$
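Using the closed forms $C_0(x)=x+1$, $C_1(x)=x+2$, $C_2(x)=2x+3$ and $C_3(x)=2^{x+3}-3$ mentioned earlier in the thread, the base case (and the analogous inequality one level up) can be spot-checked (a Python sketch):

```python
def iterate(f, k, x):
    # [f]^k(x): apply f to x, k times
    for _ in range(k):
        x = f(x)
    return x

def C0(x): return x + 1
def C1(x): return x + 2
def C2(x): return 2 * x + 3
def C3(x): return 2 ** (x + 3) - 3

# [C_0]^{n+1}(n) = 2n + 1 <= 2n + 3 = C_2(n)
ok0 = all(iterate(C0, n + 1, n) == 2 * n + 1 and 2 * n + 1 <= C2(n)
          for n in range(20))

# the same claim one level up: [C_1]^{n+1}(n) = 3n + 2 <= C_3(n)
ok1 = all(iterate(C1, n + 1, n) <= C3(n) for n in range(10))
```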

Inductive case: In fact, the step for $d+1$ (indeed, the claim for every $d$) can be shown directly from two lemmas, without using the inductive assumption.

Lemma A: $[C_d]^k(x)\le C_{d+1}(x+k)$ for all $x,k$. Iterating the basic equation $C_{d+1}(m+1)=C_d(C_{d+1}(m))$ gives
$$C_{d+1}(x+k)=[C_d]^k(C_{d+1}(x)),$$
and since $C_{d+1}(x)\ge x$ (proposition (iii) in post #18) and $C_d$ is increasing (proposition (ii)), we get
$$[C_d]^k(C_{d+1}(x))\ge[C_d]^k(x).$$

Lemma B: $C_{d+1}(2t+1)\le C_{d+2}(t)$ for all $t$, by induction on $t$. For $t=0$ we have $C_{d+1}(1)=C_{d+2}(0)$ (basic equation). For the step, note first that $C_{d+1}(x)\ge C_1(x)=x+2$ by proposition (i), so $C_{d+1}(2t+1)\ge 2t+3$. Assuming $C_{d+1}(2t+1)\le C_{d+2}(t)$:
$$C_{d+2}(t+1)=C_{d+1}(C_{d+2}(t))\ge C_{d+1}(C_{d+1}(2t+1))\ge C_{d+1}(2t+3)=C_{d+1}(2(t+1)+1),$$
using proposition (ii) for both inequalities.

Combining the two lemmas, with $k=n+1$ and $x=n$ in Lemma A:
$$[C_d]^{n+1}(n)\le C_{d+1}(2n+1)\le C_{d+2}(n),$$
which is the desired result.

===================

Coming to part d) in the OP: we have to show that $A$ is not primitive recursive. Suppose there were a primitive recursive function $f:\mathbb N^2\to\mathbb N$ equal to $A:\mathbb N^2\to\mathbb N$, and suppose the depth of the function $f$ is $d$. Then for the column $C_{2d}$ we have:
$$f(x_1,x_2)\le C_{2d}(M),\quad\text{where }M=\max(x_1,x_2).$$

But now consider the value $f(2d+1,2d+1)$; here $M=2d+1$. From proposition (i) in post #18 we already know that $C_{2d+1}(x)>C_{2d}(x)$ for all $x$; in particular, with $x=2d+1$:
$$C_{2d+1}(2d+1)>C_{2d}(2d+1)\ge f(2d+1,2d+1).$$
But we already know that the columns are just shorthand for the function $A$; we have $C_{2d+1}(2d+1)=A(2d+1,2d+1)$. Hence:
$$A(2d+1,2d+1)>f(2d+1,2d+1).$$
This contradicts the assumption that $f$ equals $A$, so $A$ is not primitive recursive. (In a similar manner, for all values $t\ge 2d+1$ we can show that $A(t,t)>f(t,t)$.)
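The argument can be illustrated numerically: along the diagonal, $A(t,t)=C_t(t)$ quickly outgrows any fixed column (a Python sketch using the column construction from post #16):

```python
def C0(x):
    return x + 1

def next_column(Cm):
    # C_{t+1}(0) = C_t(1); C_{t+1}(x+1) = C_t(C_{t+1}(x))
    def Cnext(x):
        acc = Cm(1)
        for _ in range(x):
            acc = Cm(acc)
        return acc
    return Cnext

cols = [C0]
for _ in range(3):
    cols.append(next_column(cols[-1]))

diagonal = [cols[t](t) for t in range(4)]   # A(t, t) for t = 0..3
```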

======================

Also, here is the organisation:
-- post #16 introduces the columns and the depth of p.r. functions
-- post #17 states the result we want to prove (in a slightly incorrect form) and then covers the base case
-- post #18 proves three propositions (i), (ii) and (iii), with (i) using (ii) and (ii) using (iii)
-- in post #19 the result we want to prove is stated in the right form (at least I hope so); then the inductive step is shown for composition, and at the end a proposition used in that inductive step is shown
-- in post #21 the inductive step for recursion is shown, then a proposition used in that inductive step, and then part d) of the OP is treated


Erland

Science Advisor
#18 looks good. I'll return to the remaining parts later...

Erland

Science Advisor
Very good, SSequence! You will get the credit for all the parts of this challenge. Your solution differs from mine, although they are basically equivalent in some sense, I think. Sorry it took so long for me to check it, but my personal situation is tough right now...


SSequence

Is there a significantly different alternative way?

One thing I wanted to emphasize (which I realised after I wrote the posts with the solution) is that the induction I used could be put in a much neater form. Unfortunately, it was just too difficult to change everything at that point. However, even describing it should be interesting, I think.

What I mean is something like this. Suppose you want to prove some proposition $P(i,j)$ for all values $i,j\in\mathbb N$. Instead of the kind of nested induction I used, a much simpler way is to define a new predicate $Q(t)$, where $t<\omega^2$. The induction I used can then be seen as simply a special case of transfinite induction up to $\omega^2$. For example:
$$P(i,j)\iff Q(t),$$
where we define the correspondence between $i,j$ and $t$ as one of the following (depending on what works, one of them can be chosen):
(i) $t=\omega\cdot i+j$
(ii) $t=\omega\cdot j+i$

So instead of nested inductions (and generally a bit of confusion with hypotheses and induction steps, etc.), you just have three cases:
- the case $t=0$
- the successor case
- the limit case
