
Math Challenge by Erland #2

  1. Mar 22, 2017 #1
    Submitted and judged by: @Erland
    Solution Credit: @SSequence for parts a), b), c), d)

    RULES:
    1) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored.
    2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
    3) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
    4) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.

    CHALLENGE:
    Definitions:
    The following functions from ##\mathbb N^k## to ##\mathbb N##, with ##k\ge 0##, are called basic functions:

    B1) The zero constant function ##Z:\mathbb N^0\to \mathbb N##, given by [tex]Z()=0[/tex](so ##Z## is actually the constant ##0##).

    B2) For all ##n\ge 0## and ##i## such that ##1\le i\le n##, the ##i##:th ##n##-ary projection function ##P_i^n:\mathbb N^n\to\mathbb N##, given by [tex]P_i^n(x_1, x_2,\dots, x_n)=x_i,[/tex] for all ##x_1,x_2,\dots,x_n\in \mathbb N##.

    B3) The successor function ##S:\mathbb N^1\to \mathbb N##, given by [tex]S(x_1)=x_1+1,[/tex] for all ##x_1\in \mathbb N##.

    We obtain new functions from old ones by applying the following operations:

    O1) Composition: For all ##m,n\ge 0## and all functions ##g:\mathbb N^m\to \mathbb N## and ##h_1,h_2,\dots, h_m:\mathbb N^n\to \mathbb N##, we obtain the function ##f:\mathbb N^n\to \mathbb N##, given by
    [tex]f(x_1,x_2,\dots, x_n)=g(h_1(x_1,x_2,\dots,x_n),h_2(x_1,x_2,\dots,x_n),\dots,h_m(x_1,x_2,\dots,x_n)),[/tex]
    for all ##x_1,x_2,\dots,x_n\in \mathbb N##.

    O2) Primitive recursion: For all ##n\ge 0## and all functions ##g:\mathbb N^n\to\mathbb N## and ##h:\mathbb N^{n+2}\to\mathbb N##, we obtain the (unique) function ##f:\mathbb N^{n+1}\to\mathbb N## which satisfies the following conditions:

    i)
    [tex]f(x_1,x_2,\dots,x_n,0)=g(x_1,x_2,\dots,x_n),[/tex]
    for all ##x_1,x_2,\dots,x_n\in\mathbb N##,

    ii)
    [tex]f(x_1,x_2,\dots,x_n,x_{n+1}+1)=h(x_1,x_2,\dots, x_n, x_{n+1},f(x_1,x_2,\dots,x_n,x_{n+1})),[/tex]
    for all ##x_1,x_2,\dots,x_n,x_{n+1}\in\mathbb N##.

    A function ##f:\mathbb N^n\to \mathbb N##, for any ##n\ge 0##, is called primitive recursive if it can be obtained from the basic functions B1-B3 by finitely many applications of the operations O1 and O2.

    Ackermann's function is the (unique) function ##A:\mathbb N^2\to \mathbb N## which satisfies the following conditions:

    a) ##A(0,n)=n+1##,
    b) ##A(m+1,0)=A(m,1)##,
    c) ##A(m+1,n+1)=A(m,A(m+1,n))##,

    for all ##m,n\in \mathbb N##.
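To make the definition concrete, conditions a)–c) can be run directly as a short program. This is only an illustrative sketch (memoization is my addition, since the naive recursion recomputes values heavily), and only very small ##m## are feasible because ##A## grows so fast:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def A(m, n):
    """Ackermann's function, directly from conditions a)-c)."""
    if m == 0:
        return n + 1                  # a) A(0, n) = n + 1
    if n == 0:
        return A(m - 1, 1)            # b) A(m+1, 0) = A(m, 1)
    return A(m - 1, A(m, n - 1))      # c) A(m+1, n+1) = A(m, A(m+1, n))

# The first columns grow as n+1, n+2, 2n+3, 2^(n+3)-3, ...
print([A(0, 5), A(1, 5), A(2, 5), A(3, 5)])  # → [6, 7, 13, 253]
```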

    a) Prove that the factorial function ##n!## is primitive recursive.
    b) Prove that the "truncated subtraction" ##f(a,b) = \max\{a-b,0\}## is primitive recursive.
    c) Prove that to each primitive recursive function ##f:\mathbb N^n\to \mathbb N##, for any ##n\ge 0##, there is an ##m\in\mathbb N## such that
    [tex]f(x_1,x_2,\dots,x_n)\le A(m, \max_{1\le i\le n} x_i),[/tex]
    for all ##x_1,x_2,\dots, x_n\in\mathbb N##.
    d) Prove that ##A## is not primitive recursive.
     
    Last edited: Apr 30, 2017
  3. Mar 22, 2017 #2

    Erland

    User Avatar
    Science Advisor

    It should be emphasized that to solve these problems, one needs to establish several auxiliary results. This will take quite a lot of effort. :wink:
     
  4. Mar 23, 2017 #3
    Here is the answer to the first two parts. To be honest, the first two parts are quite easy compared to the third and fourth.

    But because the solutions I am posting are in a slightly different form from the one you usually tend to encounter (in the form of equations), they might be interesting enough to post.
    -- x, x1, x2,...etc. denote input variables
    -- y denotes output variable
    -- all variable values start from 0 by default (also no need to declare etc.)
    -- all variables have local scope (limited to the given function)

    (b)

    function assign // one argument
        y = 0
        Loop x
            y = y + 1
        End
    End

    function subtract1 // one argument
        t = 0
        y = 0
        Loop x
            Loop t
                y = y + 1
            End
            t = 0
            t = t + 1
        End
    End

    function subtract // two arguments
        y = assign(x1)
        Loop x2
            y = subtract1(y)
        End
    End

    (a)

    function multiply // two arguments
        Loop x1
            Loop x2
                y = y + 1
            End
        End
    End

    function factorial // one argument
        y = 0
        y = y + 1
        Loop x
            y = multiply(x, y)
            x = subtract1(x)
        End
    End
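For readers unfamiliar with this Loop-program style, the programs above can be simulated in Python (my own sketch, not part of the solution). Note that `Loop x` repeats its body a number of times fixed by the value of x at loop entry, which `range(x)` reproduces even when x is reassigned inside the body:

```python
def subtract1(x):
    # Inner loop runs t times; t lags one step behind the outer counter
    t, y = 0, 0
    for _ in range(x):
        for _ in range(t):
            y += 1
        t = 0
        t += 1
    return y  # computes max(x - 1, 0)

def subtract(x1, x2):
    y = x1                    # "assign" copies its argument
    for _ in range(x2):
        y = subtract1(y)
    return y                  # computes max(x1 - x2, 0)

def factorial(x):
    y = 1                     # y = 0; y = y + 1
    for _ in range(x):        # iteration count fixed at loop entry, like Loop x
        y = y * x             # stands in for the multiply(x, y) program
        x = subtract1(x)
    return y

print(subtract(7, 3), subtract(3, 7), factorial(5))  # → 4 0 120
```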

    ===========

    I think I have probably (not fully sure) seen the proof for the third and fourth parts but, as I recall, I was only able to follow it mechanically without really understanding the intuition behind it. And I think it would be sort of cheating anyway to "look up" a proof and then post it.

    Perhaps it might be a good idea to consider part (c) just for simple values such as n=1 and n=2 first, I suppose?
     
  5. Mar 23, 2017 #4

    Erland


    Please, try to express your functions in terms of the given basic functions and operations. It is not entirely clear how your programs relate to these.

    c) and d) require a lot more work, for sure. :smile:
     
  6. Mar 23, 2017 #5
    Well I could just write down the equations but I personally find this to be easier to remember.

    As for your comment, as an example look at the function "subtract" above. It uses two other functions (and both of those are used in only one place). It is fairly clear that with variable renaming you can easily convert this to just one main function without using macros (only that would be much less transparent).

    In general, as long as the dependence between function calls (for the functions used in our main function, that is) is acyclic, this conversion can easily be shown to work.

    As for the justification for using programs, this is already a fairly well-known result (the equivalence of "programs" and "functions"). In fact, to show that given functions are p.r., only the reduction from "programs" to "functions" is strictly required here (not the other way around, even though that reduction is correct too).

    EDIT:
    Alright, I briefly looked up the "programs" to "functions" proof that I read some time ago (about 1.5 years ago). Your point is fair in the sense that the conversion proof already uses a lot of difficult steps and p.r. functions that are complicated enough (much more so than the ones in (a) and (b)).

    I just wanted to emphasize that once you have accepted the equivalence of "functions" and "programs", the approach of using programs is much simpler. This is especially true for functions far more complicated than the ones in (a) and (b). It would also be true for functions more complicated than the ones used in the proof of the equivalence (though I have never really seen anyone use such p.r. functions).
     
    Last edited: Mar 23, 2017
  7. Mar 23, 2017 #6

    Erland


    Yes, all these results are well known, but I would like a proof based on the formalism given. I don't want to translate between a program and an equation approach, unless you prove some general theorem about the equivalence between these approaches. But this is unnecessary.

    Here is an example of the kind of proof I would prefer, regarding your assign function:

    For each ##k\ge 0##, the constant with ##0## arguments ##a_k^0##, given by ##a_k^0()=k##, is primitive recursive.
    This is clear for ##k=0##, since ##a_0^0## is the basic function ##Z##.
    For each ##k\ge 0##, if ##a_k^0## is primitive recursive, so is ##a_{k+1}^0##, since it could be obtained by O1 (composition) with ##S## (basic function) as ##g## and ##a_k^0## as ##h_1## (##m=1## and ##n=0## here).
    By induction, ##a_k^0## is primitive recursive for all ##k\ge 0##.

    These are functions with ##0## arguments, i.e. constants. Also, the constant functions with any number of arguments are primitive recursive: ##a_k^m:\mathbb N^m\to \mathbb N## (##m\ge 0##) given by ##a_k^m(x_1,x_2,\dots,x_m)=k##. We see this if we apply O1 with ##a_k^0## as ##g## and with ##0## ##h##-functions (so ##m=0## in O1, and ##n## in O1 is the desired number of arguments).
    If one is suspicious of this kind of "vacuous" argument, a proof using O2 (recursion) can be given instead.
     
  8. Mar 23, 2017 #7
    The "assign" function is just the identity function ##f(x)=x##.

    Here is the function in (b), "subtract", using function equations:
    ##\text{subtract1}(0) = Z()##
    ##\text{subtract1}(x+1) = P_1^2(x, \text{subtract1}(x))##

    ##\text{subtract}(x_1,0) = P_1^1(x_1)##
    ##\text{subtract}(x_1,x_2+1) = h_1(x_1,x_2, \text{subtract}(x_1,x_2))##

    Here ##h_1## is the function (using composition):
    ##h_1(x_1,x_2,x_3) = \text{subtract1}(P_3^3(x_1,x_2,x_3))##

    Here ##P_i^n## denotes the ##i##:th ##n##-ary projection function from the OP.
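As a sanity check (my own, not part of the post), these equations can be executed literally in Python, with the projections written out:

```python
def Z():
    return 0

def P(n, i):
    """Returns the i-th n-ary projection function (1-indexed)."""
    return lambda *xs: xs[i - 1]

def subtract1(x):
    # subtract1(0) = Z();  subtract1(x+1) = P^2_1(x, subtract1(x))
    if x == 0:
        return Z()
    return P(2, 1)(x - 1, subtract1(x - 1))

def h1(x1, x2, x3):
    # h1(x1, x2, x3) = subtract1(P^3_3(x1, x2, x3))
    return subtract1(P(3, 3)(x1, x2, x3))

def subtract(x1, x2):
    # subtract(x1, 0) = P^1_1(x1);  subtract(x1, x2+1) = h1(x1, x2, subtract(x1, x2))
    if x2 == 0:
        return P(1, 1)(x1)
    return h1(x1, x2 - 1, subtract(x1, x2 - 1))

print([subtract(a, b) for a, b in [(5, 2), (2, 5), (4, 4)]])  # → [3, 0, 0]
```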
     
  9. Mar 23, 2017 #8

    Erland


    That's the way I like it! :)
    Good, you'll get the credit for b).
    You surely can do a) (factorial) also, in a few steps...

    For c) and d), one needs (as far as I know) to prove some general properties of Ackermann's function, and use these.
     
  10. Mar 24, 2017 #9
    Yeah I think I should complete (a) now since I have already written the answer for (b).

    But anyway, probably going to write (c) first on a piece of paper (otherwise I am going to forget the statement) so I can ponder a bit in free/spare time. However, probably I won't be able to solve it, so experts can write a solution for (c) and (d) if they want to.

    Anyway, coming back to (a), let's first write down some preliminary functions for factorial. I am going to label all the auxiliary functions ##h_2, h_3, h_4##, etc.

    As a preliminary, first let's make the following two functions (both are going to be required for factorial, I think):
    ##\text{add}(x_1,x_2) = x_1 + x_2##
    ##\text{multiply}(x_1,x_2) = x_1 x_2##

    We have:
    ##\text{add}(x_1,0) = P_1^1(x_1)##
    ##\text{add}(x_1,x_2+1) = h_2(x_1,x_2,\text{add}(x_1,x_2))##

    where intuitively we have:
    ##h_2(x_1,x_2,x_3) = x_3+1##
    Here is the equation for ##h_2## (using composition):
    ##h_2(x_1,x_2,x_3) = S(P_3^3(x_1,x_2,x_3))##
    ##S## is just the successor function here.

    For multiplication we have:
    ##\text{multiply}(x_1,0) = Z()##
    ##\text{multiply}(x_1,x_2+1) = h_3(x_1,x_2,\text{multiply}(x_1,x_2))##

    where intuitively we have:
    ##h_3(x_1,x_2,x_3) = x_1 + x_3##
    So now we have to write the equations for ##h_3##:
    ##h_3(x_1,x_2,0) = P_1^2(x_1,x_2)##
    ##h_3(x_1,x_2,x_3+1) = h_4(x_1,x_2,x_3,h_3(x_1,x_2,x_3))##

    where intuitively we have:
    ##h_4(x_1,x_2,x_3,x_4) = x_4+1##
    Using composition we can write this in the required form:
    ##h_4(x_1,x_2,x_3,x_4) = S(P_4^4(x_1,x_2,x_3,x_4))##
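These add and multiply equations can likewise be run directly (my own transcription, with the h2 and h4 compositions inlined as successor applications):

```python
def S(x):
    return x + 1

def add(x1, x2):
    # add(x1, 0) = P^1_1(x1);  add(x1, x2+1) = h2(..., add(x1, x2)) = add(x1, x2) + 1
    if x2 == 0:
        return x1
    return S(add(x1, x2 - 1))

def h3(x1, x2, x3):
    # h3(x1, x2, 0) = P^2_1(x1, x2);  h3(x1, x2, x3+1) = h4(...) = h3(x1, x2, x3) + 1
    if x3 == 0:
        return x1
    return S(h3(x1, x2, x3 - 1))

def multiply(x1, x2):
    # multiply(x1, 0) = 0;  multiply(x1, x2+1) = h3(x1, x2, multiply(x1, x2))
    if x2 == 0:
        return 0
    return h3(x1, x2 - 1, multiply(x1, x2 - 1))

print(add(4, 5), multiply(6, 7))  # → 9 42
```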


    edit: there might perhaps be minor errors here and there. Going to complete the required function in next post.
     
  11. Mar 24, 2017 #10
    Now we have:
    ##\text{factorial}(0) = \text{one}()##
    ##\text{factorial}(x+1) = h_5(x,\text{factorial}(x))##

    Here we have two new functions, "one" and ##h_5##. Let's first note that we have (by composition):
    ##\text{one}() = S(Z())##
    Intuitively ##h_5## is the function:
    ##h_5(x_1,x_2) = (x_1+1)x_2##
    Writing the equations:
    ##h_5(x_1,0) = Z()##
    ##h_5(x_1,x_2+1) = h_6(x_1,x_2,h_5(x_1,x_2))##

    where intuitively we have:
    ##h_6(x_1,x_2,x_3) = x_1+x_3+1##
    Writing the equations:
    ##h_6(x_1,x_2,0) = h_7(x_1,x_2)##
    ##h_6(x_1,x_2,x_3+1) = h_4(x_1,x_2,x_3,h_6(x_1,x_2,x_3))##

    The function ##h_4## was already described in the previous post. Let's note that intuitively ##h_7## is the function:
    ##h_7(x_1,x_2) = x_1+1##
    Writing the equation for ##h_7## (using composition):
    ##h_7(x_1,x_2) = S(P_1^2(x_1,x_2))##


    EDIT: So the add and multiply functions weren't explicitly used, it seems.

    I was thinking that the simplest definition should be:
    ##\text{factorial}(0) = 1##
    ##\text{factorial}(x+1) = \text{multiply}(S(x),\text{factorial}(x))##

    and similarly for multiplication:
    ##\text{multiply}(x,0) = 0##
    ##\text{multiply}(x, y+1) = \text{add}(x,\text{multiply}(x,y))##

    It seems that even though these definitions don't fit directly into the required form for p.r. functions, there are probably results that let us incorporate this kind of definition too (but I don't remember anything specific about it).
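As a quick sanity check (mine, with the recursions above transcribed literally), the h5/h6/h7 equations do compute the factorial:

```python
def one():
    return 1  # one() = S(Z())

def h7(x1, x2):
    return x1 + 1  # h7 = S(P^2_1)

def h6(x1, x2, x3):
    # h6(x1, x2, 0) = h7(x1, x2);  h6(x1, x2, x3+1) = h4(...) = h6(x1, x2, x3) + 1
    return h7(x1, x2) if x3 == 0 else h6(x1, x2, x3 - 1) + 1

def h5(x1, x2):
    # h5(x1, 0) = 0;  h5(x1, x2+1) = h6(x1, x2, h5(x1, x2)); intuitively (x1+1)*x2
    return 0 if x2 == 0 else h6(x1, x2 - 1, h5(x1, x2 - 1))

def factorial(x):
    # factorial(0) = one();  factorial(x+1) = h5(x, factorial(x))
    return one() if x == 0 else h5(x - 1, factorial(x - 1))

print([factorial(n) for n in range(6)])  # → [1, 1, 2, 6, 24, 120]
```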
     
    Last edited: Mar 24, 2017
  12. Mar 24, 2017 #11

    Erland


    Very good, Ssequence, the credit for a) is yours.

    A couple of years ago, I used to write definitions of this kind in Maple, and soon got lost among all these auxiliary functions. For no reason at all except fun (:wink:). I can't do this anymore since I don't have a free version of Maple.
     
    Last edited: Mar 24, 2017
  13. Mar 24, 2017 #12

    Erland


    Something like this perhaps:

    ##\text{add13}(x,y,z)=\text{add}(P_1^3(x,y,z),P_3^3(x,y,z))##

    ##\text{multiply}(x,0)=0##
    ##\text{multiply}(x,y+1)=\text{add13}(x,y,\text{multiply}(x,y))##

    ##S_{21}(x,y)=S(P_1^2(x,y))##

    ##\text{multiaux}(x,y)=\text{multiply}(S_{21}(x,y),P_2^2(x,y))##

    ##\text{factorial}(0)=\text{one}()##
    ##\text{factorial}(x+1)=\text{multiaux}(x,\text{factorial}(x))##

    Still a lot of auxiliary functions, but the advantage is that these can be defined just by composition.
     
  14. Mar 24, 2017 #13

    Erland


  15. Mar 25, 2017 #14
    A rough idea, it seems, would be to compare the following two:
    (i) The columns of the function ##A(x,y)##. Given the answer posted to the highschool question, it seems that each column always dominates (eventually grows faster than) the columns to its left.
    (ii) The set of p.r. functions whose "derivation tree" has length less than or equal to a given finite value.

    If one can establish a link between these two, then the result might be less daunting to prove.
     
  16. Mar 30, 2017 #15

    Erland


    When I proved c) for myself, I used induction on the complexity of the functions: I first proved that the statement is true for the basic functions; then I proved that if the statement holds for the functions a composite function is built up from, then it holds for the composite function too; and also, if it holds for the two functions used in an application of recursion, then it holds for the function defined by this recursion.

    Perhaps you could try this approach too...

    And then, it is not hard to prove that d) follows from c).
     
    Last edited: Mar 31, 2017
  17. Mar 31, 2017 #16
    Well I didn't really get a lot of time to work on this problem further after my last post here.

    Looking a bit further, it seems that the variable ##m## in part (c) of the OP should correspond, in some sense, to the depth of the derivation tree.

    As I mentioned about columns in the previous post, if we define ##C_m:\mathbb N\to\mathbb N## by
    ##C_m(x) = A(m,x)##
    then we have, from the definition of ##A##:
    ##C_{m+1}(0) = C_m(1)##
    ##C_{m+1}(x+1) = C_m(C_{m+1}(x))##

    And we can formally convert this to the primitive recursive form.

    Looking at it from the bottom-up perspective, first define a function:
    ##k_1(x_1,x_2) = C_m(P_2^2(x_1,x_2))##
    Now we have done one composition. Now we write the recursion:
    ##C_{m+1}(0) = C_m(1)##
    ##C_{m+1}(x+1) = k_1(x, C_{m+1}(x))##

    The point being that the function ##C_{m+1}## is obtained from ##C_m## by:
    -- no more than one composition
    -- no more than one recursion

    So if we define all p.r. functions at some "depth" in following manner:
    -- depth-0 corresponds to initial functions
    -- depth-1 corresponds to application of either a composition or a recursion on p.r. functions of depth-0 (also functions already derived at depth-0 aren't included)
    ......
    -- depth-(m+1) corresponds to application of either a composition or a recursion on p.r. functions of depth-m or lower (also functions already derived at depth-m or lower aren't included)

    It might be worth pointing out that a single function will typically have many different derivations (possibly of varying length). The shortest derivation corresponds to the depth of the function.

    It seems that the variable ##m## in part (c) should correspond (in some way) with the depth defined above. But I haven't figured out how to make the relation precise. After making it precise, the next step would be induction.

    EDIT: Modified the definition of depth to make it simple. The previous definition was confusing.
     
    Last edited: Mar 31, 2017
  18. Mar 31, 2017 #17
    So continuing from the previous post, I think here is the precise claim we would want to prove:
    For every p.r. function ##f:\mathbb N^n\to\mathbb N## (where ##n \ge 1##) of depth less than or equal to** ##d##, the following is true:
    ##f(x_1,\dots,x_n) \le C_d(M)##
    for all values ##x_1,\dots,x_n\in\mathbb N##, where:
    ##M = \max(x_1,\dots,x_n)##

    I deliberately left out n=0. This part can be proved separately, I guess.

    Now to see that this claim necessarily proves part(c) in OP (for n ≥ 1), we need to see that every p.r. function would have some well-defined finite depth (the smallest derivation of that function). We can set the value of d (in the claim above) equal to or greater than the depth (of the function).

    Well, let's at least cover the base case :P ...
    ##d = 0##
    We have ##C_0(x) = x+1## (from the definition of ##A##).
    Let's look at all the basic functions of one or more arguments. We have:
    ##S(x) = x+1##
    ##P_i^n(x_1,\dots,x_n) = x_i## (projection functions)

    We have:
    ##S(x) \le C_0(x) = C_0(M)##
    The variable ##M## was described above and I will use it the same way throughout.
    Secondly, for the projection functions:
    ##P_i^n(x_1,\dots,x_n) = x_i \le M < M+1 = C_0(M)##

    This covers the base case. Note that the inequalities above are true for all possible input values.


    EDIT: Corrected some confusions.

    ** I was thinking whether writing just "equal to" instead of "less than or equal to" would make the claim any easier to prove. I guess that would be more apparent while trying to work out the inductive case.
     
    Last edited: Mar 31, 2017
  19. Mar 31, 2017 #18
    Now, before coming back to the inductive case for the statement in post #17, it seems that a few basic facts need to be proved. It is very likely that these will be required for the inductive case in post #17, but they also have some value by themselves.

    Note that (i) uses (ii), and (ii) uses (iii). I will probably have to check for errors, but right now I am a little too tired to re-check. I should have re-checked before posting, but anyway....

    Now we need to prove these by induction:
    (i) A column to the right is strictly greater than a column to the left, for all input values. That is, for all ##d\in\mathbb N##:
    ##C_{d+1}(x) > C_d(x)## for all input values

    Base case: ##d=0##
    Rewriting the basic equations from post #16 here:
    ##C_{m+1}(0) = C_m(1)##
    ##C_{m+1}(x+1) = C_m(C_{m+1}(x))##
    Put ##m=0##:
    ##C_1(0) = C_0(1)##
    ##C_1(x+1) = C_0(C_1(x))##
    Recall that ##C_0(x) = x+1##. So we have:
    ##C_1(0) = C_0(1) = 1+1 = 2##
    ##C_1(x+1) = C_0(C_1(x)) = C_1(x)+1##

    ##C_1(x) = 2+x > 1+x = C_0(x)##
    so ##C_1(x) > C_0(x)## for all ##x##.

    Inductive step:
    We assume the proposition for the value ##d## and prove it for ##d+1##. The assumption is:
    ##C_{d+1}(x) > C_d(x)## for all input values
    To prove:
    ##C_{d+2}(x) > C_{d+1}(x)## for all input values

    Rewriting the basic equations from post #16 for the value ##d##:
    ##C_{d+1}(0) = C_d(1)##
    ##C_{d+1}(x+1) = C_d(C_{d+1}(x))##
    and the same equations for the value ##d+1##:
    ##C_{d+2}(0) = C_{d+1}(1)##
    ##C_{d+2}(x+1) = C_{d+1}(C_{d+2}(x))##

    Now we do a second induction, on a variable ##t##; we have to prove:
    ##C_{d+2}(t) > C_{d+1}(t)## for all ##t##
    Current base step: ##t=0##
    ##C_{d+2}(0) = C_{d+1}(1) > C_d(1)## --- (main inductive assumption used)
    ##C_{d+2}(0) > C_d(1) = C_{d+1}(0)##
    ##C_{d+2}(0) > C_{d+1}(0)##

    Current inductive assumption: ##C_{d+2}(t) > C_{d+1}(t)##
    To prove: ##C_{d+2}(t+1) > C_{d+1}(t+1)##

    ##C_{d+2}(t+1) = C_{d+1}(C_{d+2}(t)) > C_d(C_{d+2}(t))## --- (main inductive assumption)

    Now, because ##C_{d+2}(t) > C_{d+1}(t)## (current inductive assumption), it follows, using the fact from (ii) that ##C_d## is strictly increasing, that:
    ##C_d(C_{d+2}(t)) > C_d(C_{d+1}(t))##
    And hence:
    ##C_{d+2}(t+1) > C_d(C_{d+1}(t)) = C_{d+1}(t+1)## --- (follows from the basic equation)

    ##C_{d+2}(t+1) > C_{d+1}(t+1)##


    (ii) The function ##C_d:\mathbb N\to\mathbb N## is strictly increasing, for all values of ##d##. That is, for all ##d\in\mathbb N##:
    ##C_d(x+1) > C_d(x)## for all ##x\in\mathbb N##

    Base case: ##d=0##
    We have ##C_0(x) = x+1##. So:
    ##C_0(x+1) = (x+1)+1 = x+2 > x+1 = C_0(x)##
    for all ##x##.

    Inductive step:
    We assume the proposition for ##d## and have to prove it for ##d+1##. So the assumption is:
    ##C_d(x+1) > C_d(x)## for all ##x##
    To prove:
    ##C_{d+1}(x+1) > C_{d+1}(x)## for all ##x##

    Rewriting the basic equation from post #16 here:
    ##C_{d+1}(x+1) = C_d(C_{d+1}(x)) > C_{d+1}(x)## --- (follows from (iii))

    Hence the inductive step is shown too.

    (iii)
    In proving (ii) the following proposition was used. For all ##d\in\mathbb N##:
    ##C_d(x) > x## for all ##x##
    Base case: ##d=0##
    ##C_0(x) = x+1 > x## for all ##x##

    Inductive step:
    Assumption: ##C_d(x) > x## for all ##x##
    To prove: ##C_{d+1}(x) > x## for all ##x##

    Doing a second induction on a variable ##t##, we have to prove:
    ##C_{d+1}(t) > t## for all ##t##

    Current base step: ##t=0##
    ##C_{d+1}(0) = C_d(1) > 1## --- (using the main inductive assumption)
    ##C_{d+1}(0) > 1 > 0##

    Current inductive step:
    Assumption: ##C_{d+1}(t) > t##
    To prove: ##C_{d+1}(t+1) > t+1##

    Writing the equation:
    ##C_{d+1}(t+1) = C_d(C_{d+1}(t)) = C_d(u)## where ##u > t## (current inductive assumption used)
    Now:
    ##C_d(u) = v## where ##v > u## (main inductive assumption used)

    Because ##v > u > t## and these are natural numbers, we must have ##v \ge t+2 > t+1##. Hence:
    ##C_{d+1}(t+1) = v > t+1##
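These three propositions are easy to spot-check numerically (a small sketch of mine; of course this checks only finitely many cases and is no substitute for the induction):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def A(m, n):
    # Ackermann's function as defined in the OP
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def C(d, x):
    return A(d, x)  # the d-th column

for d in range(3):
    for x in range(7):
        assert C(d + 1, x) > C(d, x)   # (i)  right column strictly dominates
        assert C(d, x + 1) > C(d, x)   # (ii) each column strictly increasing
        assert C(d, x) > x             # (iii) C_d(x) > x
print("checked (i)-(iii) for d < 3, x < 7")
```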
     
  20. Apr 1, 2017 #19
    Well, now let's proceed and try to get back to the statement in post #17. As already mentioned, that statement proves part (c) of the OP. But it seems that two changes should be made:
    (1) It seems to me that it is better if we include the case ##n=0## in that statement (as long as we are careful to clarify its meaning).
    (2) After doing some rough work, it seems that the statement needs to be revised slightly: ##C_{2d}(M)## should replace ##C_d(M)##. I have only worked through composition so far, so I don't know whether the statement needs to be changed further for recursion.

    The base case was already shown for that statement (also in post #17). We want to consider the part of the inductive step that relates to composition.
    Rewriting:
    For every p.r. function ##f:\mathbb N^n\to\mathbb N## (where ##n \ge 0##) of depth less than or equal to ##d##, the following is true:
    ##f(x_1,\dots,x_n) \le C_{2d}(M)##
    for all values ##x_1,\dots,x_n\in\mathbb N##, where:
    ##M = \max(x_1,\dots,x_n)##

    What might be noted here is that when ##n=0##, we can define ##M## to be ##0## as a convention. So for ##n=0## the inequality specifically becomes:
    ##f() \le C_{2d}(0)##
    Also, for the base case (##d=0##), I didn't include the zero function in post #17. We have:
    ##Z() = 0 < 1 = C_0(0)##

    Inductive case (for composition):
    For ##n \ge 1##:
    ##f(x_1,\dots,x_n) = g(h_1(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n))##
    As before, define:
    ##M = \max(x_1,\dots,x_n)##

    Here assume that ##g, h_1, h_2,\dots, h_m## are all functions of depth ##d## or less, and hence ##f## is a function of depth ##d+1## or less.
    By the inductive assumption:
    ##h_1(x_1,\dots,x_n) \le C_{2d}(M)##
    ##h_2(x_1,\dots,x_n) \le C_{2d}(M)##
    .......
    ##h_m(x_1,\dots,x_n) \le C_{2d}(M)##

    But now what about:
    ##M_1 = \max(h_1(x_1,\dots,x_n),h_2(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n))##
    Due to the above inequalities:
    ##M_1 \le C_{2d}(M)##

    By the inductive assumption again we have:
    ##g(h_1(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n)) \le C_{2d}(M_1)##
    So we have:
    ##f(x_1,\dots,x_n) = g(h_1(x_1,\dots,x_n),\dots,h_m(x_1,\dots,x_n)) \le C_{2d}(M_1)##

    Now because ##M_1 \le C_{2d}(M)## it follows that:
    ##C_{2d}(M_1) \le C_{2d}(C_{2d}(M))## --- (follows from proposition (ii) in post #18)
    Therefore:
    ##f(x_1,\dots,x_n) \le C_{2d}(C_{2d}(M))##

    ##C_{2d}(C_{2d}(M)) \le C_{2d+2}(M)##
    It seems that we will have to prove this last inequality separately.

    So we have:
    ##f(x_1,\dots,x_n) \le C_{2d+2}(M) = C_{2(d+1)}(M)##
    Hence the inductive step is complete.

    For ##n=0##:
    ##f() = g(h_1(),\dots,h_m())##
    Once again ##g, h_1, h_2,\dots, h_m## are all functions of depth ##d## or less, and hence ##f## is a function of depth ##d+1## or less.
    By the inductive assumption:
    ##h_1() \le C_{2d}(0)##
    ##h_2() \le C_{2d}(0)##
    .......
    ##h_m() \le C_{2d}(0)##

    But now what about:
    ##M_1 = \max(h_1(),h_2(),\dots,h_m())##
    Due to the above inequalities:
    ##M_1 \le C_{2d}(0)##

    By the inductive assumption again we have:
    ##g(h_1(),\dots,h_m()) \le C_{2d}(M_1)##
    So we have:
    ##f() = g(h_1(),\dots,h_m()) \le C_{2d}(M_1)##

    Now because ##M_1 \le C_{2d}(0)## it follows that:
    ##C_{2d}(M_1) \le C_{2d}(C_{2d}(0))## --- (follows from proposition (ii) in post #18)
    Therefore:
    ##f() \le C_{2d}(C_{2d}(0))##

    ##C_{2d}(C_{2d}(0)) \le C_{2d+2}(0)##
    This is the same inequality as before (with ##M=0##), to be proved separately.

    So we have:
    ##f() \le C_{2d+2}(0) = C_{2(d+1)}(0)##
    Hence the inductive step is complete as far as composition is concerned. We still need to complete the inductive step for recursion.

    ====================================

    Now let's try to show the statement we used in the above inductive step for composition. For all ##d\in\mathbb N##:
    ##C_d(C_d(n)) \le C_{d+2}(n)## for all ##n##

    Base case: ##d=0##
    We must show ##C_0(C_0(n)) \le C_{0+2}(n) = C_2(n)## for all ##n##.

    We have:
    ##C_0(n) = n+1##
    ##C_2(n) = 2n+3## (highschool problem)

    So:
    ##C_0(C_0(n)) = C_0(n+1) = (n+1)+1 = n+2 \le 2n+3 = C_2(n)##

    Inductive step:
    Assumption:
    ##C_d(C_d(n)) \le C_{d+2}(n)## for all ##n##
    To prove: ##C_{d+1}(C_{d+1}(n)) \le C_{d+3}(n)## for all ##n##

    Now we need a second induction, with another variable, say ##t##.
    To prove: ##C_{d+1}(C_{d+1}(t)) \le C_{d+3}(t)## for all ##t##
    Second base case: ##t=0##
    We have to show: ##C_{d+1}(C_{d+1}(0)) \le C_{d+3}(0)##

    We have:
    ##C_{d+3}(0) = C_{d+2}(1)## --- (basic equation)
    ##C_{d+2}(1) = C_{d+1}(C_{d+2}(0))## --- (basic equation)

    Further we have:
    ##C_{d+2}(0) > C_{d+1}(0)## --- (proposition (i) in post #18)
    ##C_{d+1}(C_{d+2}(0)) > C_{d+1}(C_{d+1}(0))## --- (proposition (ii) in post #18)

    But the left-hand side of this inequality is equal to ##C_{d+3}(0)##. Hence:
    ##C_{d+3}(0) > C_{d+1}(C_{d+1}(0))##
    and in particular ##C_{d+1}(C_{d+1}(0)) \le C_{d+3}(0)##.

    Second inductive case:
    Assumption: ##C_{d+1}(C_{d+1}(t)) \le C_{d+3}(t)##
    To prove: ##C_{d+1}(C_{d+1}(t+1)) \le C_{d+3}(t+1)##

    ##C_{d+3}(t+1) = C_{d+2}(C_{d+3}(t))## --- (basic equation)
    Since ##C_{d+3}(t) \ge C_{d+1}(C_{d+1}(t))## (current inductive assumption) and ##C_{d+2}## is increasing (proposition (ii) in post #18):
    ##C_{d+3}(t+1) \ge C_{d+2}(C_{d+1}(C_{d+1}(t)))##
    Next:
    ##C_{d+1}(C_{d+1}(t)) > C_d(C_{d+1}(t)) = C_{d+1}(t+1)## --- (proposition (i) in post #18, then the basic equation)
    so, applying proposition (ii) and then proposition (i) again:
    ##C_{d+2}(C_{d+1}(C_{d+1}(t))) \ge C_{d+2}(C_{d+1}(t+1)) > C_{d+1}(C_{d+1}(t+1))##
    Putting these together:
    ##C_{d+3}(t+1) > C_{d+1}(C_{d+1}(t+1))##
    and in particular ##C_{d+1}(C_{d+1}(t+1)) \le C_{d+3}(t+1)##.

    Hence the inductive part is complete.
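Numerically, the inequality ##C_d(C_d(n)) \le C_{d+2}(n)## also checks out on small values (a spot-check of mine, not a proof):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def A(m, n):
    # Ackermann's function as defined in the OP
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

C = A  # C_d(x) = A(d, x)

for d in range(2):
    for n in range(6):
        assert C(d, C(d, n)) <= C(d + 2, n)
print("checked C_d(C_d(n)) <= C_{d+2}(n) for d < 2, n < 6")
```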
     
  21. Apr 1, 2017 #20

    Erland


    Ok, I'll check it. But my situation is a bit chaotic right now, and it might take a few days...
     