
Half iterate of 2^x

  1. Oct 28, 2003 #1
    what f satisfies the following equation: f(f(x)) = 2^x?
     
  3. Oct 28, 2003 #2

    Hurkyl


    Is x any real number? Do you want f to be continuous, differentiable, or something else?
     
  4. Oct 28, 2003 #3
    x is real. ideally, f is differentiable. i think i may have something. let W(x) be the inverse function for x exp(x). i think
    f(x)=exp(W( log(2) x^2 )) might be what i'm looking for... checking it now. oh wait, the principal branch of W is defined only for arguments >= -1/e (and only takes values >= -1), so let's say that the domain is [-1,oo).

    my ultimate goal is to see if i can find a set B such that A<B<2^A for an infinite set A. supposing my f is right, i just have to figure out if exp(W( log(2) A^2 )) can somehow be translated into cardinal arithmetic/infinite sets.
     
    Last edited: Oct 28, 2003
  5. Oct 28, 2003 #4

    Hurkyl


    I did a quick check in Mathematica; unless I did something wrong, that doesn't seem to work.
     
  6. Oct 28, 2003 #5
    right. for that f, f(f(0)) != 1 = 2^0...

    what's kinda confusing me is that if i assume that f(x)=a^x, where a itself may be a function of x, and i tell mathematica to solve
    f(f(x))=2^x, this is the solution it gives but it doesn't work.
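    for reference, a quick numerical check of that candidate (a sketch using scipy's lambertw rather than the original mathematica session) makes the mismatch concrete:

[code]
import numpy as np
from scipy.special import lambertw

# candidate from the W-based derivation: f(x) = exp(W(log(2) * x^2))
def f(x):
    return np.exp(lambertw(np.log(2.0) * x**2).real)

for x in [0.0, 1.0, 2.0]:
    print(x, f(f(x)), 2.0**x)   # f(f(0)) comes out near 1.56, not 2^0 = 1
[/code]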

    what's wrong with the following solution?
    we want to solve for a so that a^(a^x)=2^x. if we're assuming f(x)=a^x, then this means that f(f(x))=2^x. actually, we're more interested in finding a^x since this is what f(x) equals.

    log both sides to get
    (a^x)log(a) = xlog(2)

    rewrite a^x as exp(log(a^x)) to get
    ( exp(log(a^x)) )log(a) = xlog(2)

    multiply both sides by x and use xlog(a)=log(a^x) to get
    ( exp(log(a^x)) )log(a^x) = (x^2)log(2)

    use W function to isolate log(a^x):
    log(a^x)=W( (x^2)log(2) )

    f(x)=a^x=exp(W( (x^2)log(2) )).
    ***
    i think the basic problem is that since a is not a constant, f(f(x)) != a^(a^x). so, if f(x)=a(x)^x, then f(f(x)) would be
    a(a(x)^x)^(a(x)^x), not a(x)^(a(x)^x).
     
    Last edited: Oct 28, 2003
  7. Oct 28, 2003 #6

    Hurkyl


    Yah, my gut tells me right off the bat that a^a^x cannot be a half-iterate of 2^x.


    Let g(x) = 2^x

    x = 2 is a fixed point of g(x), so it makes sense to define
    f(2) := 2

    To preserve the fixed point property.


    Oh, how about differentiation? Maybe that will help us!

    f(f(x)) = g(x)

    f'(f(x)) f'(x) = g'(x) = 2^x ln 2

    Hrm, it's nonobvious this gets us anywhere. Anyways, if we plug in 2 for x and use our fixed point, we get

    f'(2)^2 = 2^2 ln 2
    so f'(2) = 2 (ln 2)^(1/2)

    I wonder if we can keep going? Let's differentiate again.

    2^x (ln 2)^2 = g''(x) = f''(x) f'(f(x)) + (f'(x))^2 f''(f(x))
    and plugging in x = 2 gives us

    4 (ln 2)^2 = f''(2) f'(2) + f'(2)^2 f''(2)

    f''(2) = 4(ln 2)^2 / f'(2) / (1 + f'(2))


    So we can generate Taylor polynomials for f(x)... I dunno if they will converge or how useful they will be, however.
     
  8. Oct 28, 2003 #7
    actually, 2 is not a fixed point of g(x)=2^x. the idea of using taylor series is something i'm trying to avoid if possible. another approach is to use neural networks to approximate f: a network built from two copies of one subnetwork, with the corresponding weights locked to each other, would basically produce f. i've also found that f is what is called a half exponential and some dude named Szekeres already talked about it. looking for Szekeres' articles.
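    a minimal sketch of that weight-tying idea (my reading of it; the architecture, training range, and hyperparameters are arbitrary choices, not from this thread), in pytorch:

[code]
import torch
import torch.nn as nn

# one small network f is applied twice with the SAME weights, and the
# composition f(f(x)) is trained to match 2^x on a chosen interval
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                  nn.Linear(32, 32), nn.Tanh(),
                  nn.Linear(32, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.empty(256, 1).uniform_(-2.0, 3.0)   # training inputs
    loss = ((f(f(x)) - 2.0**x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(f(f(torch.tensor([[0.0], [1.0], [2.0]]))))  # should approach 1, 2, 4
[/code]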
     
  9. Oct 28, 2003 #8

    Hurkyl


    By "2" I mean an actual fixed point of g(x). :smile:

    It's complex, though, so the computations for the Taylor coefficients aren't as straightforward as I thought...
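    For what it's worth, that fixed point can be written with the W function the thread already uses: 2^z = z rearranges to z = -W(-ln 2)/ln 2, and since -ln 2 < -1/e the principal branch is complex. A quick numeric sketch (scipy here, not the Mathematica session):

[code]
import numpy as np
from scipy.special import lambertw

ln2 = np.log(2.0)
z = -lambertw(-ln2) / ln2        # complex fixed point of g(x) = 2^x
print(z, 2.0**z)                 # the two printed values should agree

# from f(f(x)) = 2^x and f(z) = z:  f'(z)^2 = 2^z * ln 2
print(np.sqrt(2.0**z * ln2))     # one of the two possible values of f'(z)
[/code]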
     
    Last edited: Oct 28, 2003
  10. Oct 28, 2003 #9

    Hurkyl


    All right; I think I know a procedure that can generate any strictly increasing continuous function f satisfying f(f(x)) = 2^x.


    First, let's extend the domain to include -∞; so, of course, 2^(-∞) = 0.

    Choose any (finite) negative number b.

    Assign any values you like to f(x) over the range [-∞, b] satisfying f(-∞) = b, f(b) = 0, and f continuous and strictly increasing on [-∞, b].

    Now, use f(f(x)) = 2^x as a recurrence relation to generate the rest of the values!

    E.G.

    For any ξ in [-∞, b], set f(f(ξ)) = 2^ξ. This assigns values to f(x) in the range [b, 0].

    For any ξ in [b, 0], set f(f(ξ)) = 2^ξ. This assigns values to f(x) in the range [0, 2^b].

    For any ξ in [0, 2^b], set f(f(ξ)) = 2^ξ. This assigns values to f(x) in the range [2^b, 1].

    et cetera

    It's clear that this recursion defines f to be strictly increasing and continuous, and f(f(x)) = 2^x. It seems obvious that any such f can be defined in this way, but it's too late at night to give a proof. :smile:
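    Here is a rough numerical version of that recursion (a sketch: the seed piece is an arbitrary choice standing in for the [-∞, b] part, a long finite interval stands in for -∞, and everything is done by interpolation, so the spot checks only come out approximately right):

[code]
import numpy as np
from scipy.interpolate import interp1d

# seed: b < 0 and an arbitrary continuous, strictly increasing piece on a
# finite stand-in for (-inf, b], with f(-inf) = b and f(b) = 0
b = -1.0
x0 = np.linspace(-30.0, b, 2000)
y0 = b * (1.0 - 2.0**(x0 - b))

pieces_x, pieces_y = [x0], [y0]
lo, hi = x0, y0                       # last piece: f maps lo onto hi

# extend with f(f(xi)) = 2^xi, i.e. f(y) = 2^(f^-1(y)) on each new interval
for _ in range(4):
    f_inv = interp1d(hi, lo)          # inverse of the last piece
    nx = np.linspace(hi[0], hi[-1], 2000)
    ny = 2.0**f_inv(nx)
    pieces_x.append(nx); pieces_y.append(ny)
    lo, hi = nx, ny

f = interp1d(np.concatenate(pieces_x), np.concatenate(pieces_y))

for x in [-2.0, -1.0, 0.0, 0.3]:
    print(x, float(f(f(x))), 2.0**x)  # f(f(x)) vs 2^x
[/code]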
     
  11. Oct 29, 2003 #10
    i'd like f to be as smooth as possible.

    this is related to the following:
    find function A such that A(2^x)=A(x)+1.
    then f(x)=Ainv(A(x)+1/2). this is a special case of abel's equation.

    by A(-oo), i mean, of course, lim_{x-> -oo} A(x).
    let's pretend that A(-oo)=s.

    A(2^(-oo))=A(-oo)+1 yields A(0)=s+1.

    A(2^0)=A(0)+1 yields A(1)=s+2.

    A(2^1)=A(1)+1 yields A(2)=s+3.

    A(2^2)=A(2)+1 yields A(4)=s+4.

    A(2^3)=A(3)+1 yields A(8)=s+5.

    A(2^4)=A(4)+1 yields A(16)=s+6.

    a picture of this data is located here:
    http://www.alephnulldimension.net/phoenix/pics/pics7/abel.JPG
    the hope is to find an A so that the horizontal line y=s is an asymptote as x-> -oo and that A passes through all the dots.

    it looks like A(x)=s+log_2(x)+2 for x>0. let's stick to x>0 for the moment.

    Ainv(x)=2^(x-s-2) and
    Ainv(A(x)+1/2)=2^((A(x)+1/2)-s-2)=2^(log_2(x)+1/2).
    thus, a candidate for f(x) is 2^(log_2(x)+1/2) = SQRT(2)x. of course, it couldn't be this easy: f(f(x))=2x, not 2^x.

    if A(x) satisfies A(2^x)=A(x)+1, then so does the function A(x)+c for any constant c. so let's modify the above found A so that A(x)=log_2(x)+c (for x>0). one way to go is to let
    f(x)=Ainv(A(x)+1/2) and see if f(f(x))=2^x for any c.

    to make a long story short, Ainv(A(x)+1/2)=SQRT(2)x again.

    perhaps the problem lies in the fact that if A(x)=log_2(x)+c, A has a singularity at 0 and A(-oo) is not finite.

    does a smooth function A exist, defined on R, such that A(-oo) exists and is finite, say equal to s, and A(x)=s+log_2(x)+2 for numbers x that are positive integral powers of 2? is there some kind of canonical solution? if not, how about finding such an A defined on the largest set containing (0,oo). i think what i'm going to do next is try to investigate A(x) for some rational values of x...

    in general, A(2^(p/q))=A(p/q)+1, so A(p/q)=A(2^(p/q))-1. this gives some relation for rational inputs of A. if i could find a value of A(1/2) depending only on s and not any other values of A, i would be better off.

    we also have that A(-x)=A(2^(-x))-1. if
    A(x)=log_2(x)+s+2 for x>0, then for x<0, we have
    A(x)=x+s+1. but this would mean that A(-oo)=-oo != s. hmm...

    oh, i guess that there's no way A(x)=log_2(x)+s+2 has much hope of working since it doesn't satisfy A(2^x)=A(x)+1: A(2^x)=x+s+2 while A(x)+1=log_2(x)+s+3, and those agree only at x=1 and x=2, not identically.

    ok, so log_2(x)+s+2 is ruled out. anything else that's smooth satisfying all the above conditions?

    another thought. above, we have relations for A(2^(-x)) with x>=0: A(2^(-x))=A(-x)+1. if A were somehow known on the set D={1/2, 1/4, 1/8, ...} and coincided with some holomorphic function on a region G containing 0 and D, then A would equal that function on G by the identity theorem (assuming A is holomorphic there). this might help prove that A can't be holomorphic, at least not on all of G.
     
    Last edited: Oct 29, 2003
  12. Oct 30, 2003 #11

    NateTG


    For cardinal arithmetic, finding a set B so that
    A < B < 2^A
    with strict <'s (no equality) for arbitrary infinite A is at least as strong as disproving the continuum hypothesis, which is independent of ZFC, so it can't be done from the usual axioms. (Sorry, no reference links.)

    Consider that 2^N=R so you would have
    N < B < R

    (Disproving the continuum hypothesis.)

    So, unless you add some unusual axioms, that's a dead end.

    P.S.

    g(2)=2^2=4 so 2 is not a fixed point. I don't think that 2^x has a fixed point in the reals.
     
  13. Oct 30, 2003 #12
    one wonders whether this means that such an f doesn't exist, whether f isn't applicable to sets, or whether f is not strictly increasing on sets.
     
  14. Oct 30, 2003 #13

    Hurkyl


    I think you made some mistakes, phoenixthoth.

    Suppose we have A(-∞) = s.

    Then
    A(0) = A(2^-∞) = s + 1
    A(1) = A(2^0) = A(0) + 1 = s + 2
    A(2) = A(2^1) = A(1) + 1 = s + 3
    A(4) = A(2^2) = A(2) + 1 = s + 4
    A(8) = A(2^3) = A(3) + 1 = ?

    We have no idea what A(3) is, so we can't know A(8). However, we have:

    A(16) = A(2^4) = A(4) + 1 = s + 5
    A(65536) = A(2^16) = A(16) + 1 = s + 6
    A(2^65536) = s + 7
    ...

    And no, A(x) = s + lg lg x + 3 doesn't work either (lg := log base 2): it fits A(2), A(4), and A(16), but it gives A(65536) = s + 7 instead of s + 6, and it blows up at x = 1 instead of giving s + 2.


    A and f sure seem to interact nicely... there's the general identity:

    A(f(n)(x)) = A(x) + n/2

    where f(n) means you apply f n times.


    We can play the same kind of trick with A as we did with f:

    Choose any values you want for A on the range [-∞, 0] with A(0) = A(-∞) + 1

    Define A on the range [0, 1] by the recursion A(x) = A(lg x) + 1
    Define A on the range [1, 2] by the recursion
    Define A on the range [2, 4] by the recursion
    Define A on the range [4, 8] by the recursion
    et cetera


    In fact, we can do the following.

    Choose your favorite interval of the form [s, 2s).
    Define A(x) on the interval [s, 2s) to be anything you like.
    Extend A(x) to the rest of the real numbers (and -∞ if you like) via the recursions A(2^x) = A(x) + 1 and A(x) = A(lg x) + 1.


    As for smoothness, I will be impressed if you can prove the existence of an analytic solution for A, but I think you can get A to be infinitely differentiable everywhere.
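    As a concrete (merely continuous, certainly not analytic) instance of that recursion, just to see the machinery run, here is a sketch with the arbitrary seed A(x) = x - 1 on [1, 2):

[code]
import numpy as np
from scipy.optimize import brentq

# build A with A(2^x) = A(x) + 1 from the seed A(x) = x - 1 on [1, 2),
# then take f(x) = A^-1(A(x) + 1/2), so f(f(x)) = A^-1(A(x) + 1) = 2^x
def A(x):
    n = 0
    while x >= 2.0:            # push down with A(x) = A(lg x) + 1
        x = np.log2(x); n += 1
    while x < 1.0:             # push up with A(x) = A(2^x) - 1
        x = 2.0**x; n -= 1
    return (x - 1.0) + n

def A_inv(y):                  # crude numerical inverse (A is increasing)
    return brentq(lambda t: A(t) - y, -50.0, 1e9)

def f(x):
    return A_inv(A(x) + 0.5)

for x in [-1.0, 0.0, 1.0, 2.0]:
    print(x, f(f(x)), 2.0**x)  # should agree up to root-finding tolerance
[/code]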
     
  15. Oct 31, 2003 #14
    oops! yeah, the second to the last line is incorrect. as you said, it should be A(16)=A(2^4)=A(4)+1=s+5. then the last line should be, as you said, A(65536)=A(2^16)=A(16)+1=s+6.

    if g(x)=2^x and g^n(x) is the n-th iterate of g, since A(g(x))=A(x)+1, it looks like we have A(g^(n+1)(x))=A(g^n(x))+1 upon replacement of x by g^n(x). then for n>=0, we have A(g^n(x))=n+A(x) (thus the above lines generalize to A(g^n(0))=n+s+1). then if n=m/2, we have what you said, that A(f^m(x))=m/2+A(x). this is a more general version of something i said earlier, that f(x) would be Ainv(1/2+A(x)).

    it doesn't appear to me that looking at abel's equation makes things any easier. it's also conceivable that it's possible to find f even if A(2^x)=1+A(x) has no solution (although it does appear to have *a* solution).

    so let's return to f(f(x))=2^x for the moment. for x=-oo, we get f(f(-oo))=0. at this point, it's conceivable that |f(-oo)|=oo. let's take the derivative (now assuming f is differentiable) to get
    f'(f(x))f'(x)=(2^x)log(2). letting x=-oo (so to speak) again, we get f'(f(-oo))f'(-oo)=0 which means that f'(f(-oo))=0 or f'(-oo)=0. i don't have a proof for this, but i don't think it's possible to have all three of the following:
    1. |f(-oo)|=oo
    2. f'(f(-oo))=0
    3. f'(-oo)=0.

    matters would be easier (though of course that doesn't make it so) if f(-oo)=t for some finite t and f is strictly increasing. that being the case, f(x)>0 for all x and it cannot be the case that
    f'(f(-oo))=f'(t)=0, so then f'(-oo)=0 which makes sense if y=t is an asymptote. now this doesn't prove f has an asymptote since this is a circular argument starting with the assumption that f has an asymptote. but let's see what happens with the assumption f(-oo)=t. (i think this leads into the analysis you gave earlier.)

    let's stick to the notation g(x)=2^x and g^n is the n-th iterate of g. keep in mind that since f is increasing, t<0.

    here's some data (one "for instance" is t=-1):
    f(-oo):=t
    f(t)=f(f(-oo))=g(-oo)=0
    f(0)=f(f(t))=g(t)=2^t
    f(g(t))=f(f(0))=g(0)=1
    f(g(0))=f(1)=f(f(f(0)))=g(f(0))=g(2^t)=g^2(t)
    f(g^2(t))=f(f(1))=g(1)=2=g^2(0)
    f(g^2(0))=f(2)=f(f(f(1)))=g(f(1))=g(g^2(t))=g^3(t)
    f(g^3(t))=f(f(2))=g(2)=4=g^3(0)
    f(g^3(0))=f(f(g^3(t)))=g^4(t)

    it looks like we have the following pattern:
    f(g^n(0))=g^(n+1)(t)
    f(g^n(t))=g^n(0).

    we've got an infinite amount of data about f now and perhaps some curve-fit search is in order. perhaps i'll start with t=-1 and see what happens. perhaps each t<0 gives rise to another solution f or perhaps there is only one t for which there is an f or perhaps there is no f for any t. i'm wondering if f retains the arbitrariness you speak of if the condition that f is differentiable is kept.
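    for concreteness, here's a quick generation of the data points that pattern pins down for the trial value t=-1 (any other t<0 would give a different table):

[code]
def g_iter(x, n):              # n-th iterate of g(x) = 2^x
    for _ in range(n):
        x = 2.0**x
    return x

t = -1.0
pts = []
for n in range(4):
    pts.append((g_iter(t, n), g_iter(0.0, n)))      # f(g^n(t)) = g^n(0)
    pts.append((g_iter(0.0, n), g_iter(t, n + 1)))  # f(g^n(0)) = g^(n+1)(t)

for x, y in sorted(pts):
    print(f"f({x:g}) = {y:g}")
[/code]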
     
    Last edited: Oct 31, 2003
  16. Nov 1, 2003 #15
    here's a relevant article by kevin iga:

    it's also true that f is a function that commutes with g. are there any nice/helpful characterizations of functions that commute with g(x)=2^x?
     
    Last edited by a moderator: Mar 4, 2015
  17. Nov 23, 2003 #16
    "progress"

    a series for [tex]g\left( x\right) =2^{x}[/tex] centered at zero is [tex]g_{n}\left( x\right) =\sum_{k=0}^{n}\frac{\left( \log 2\right) ^{k}}{k!}x^{k}[/tex].

    if i let [tex]f_{3}(x)=\sum_{k=0}^{3}a_{k}x^{k}=\sum_{k=0}^{3}\frac{b_{k}}{k!}x^{k}[/tex], compute [tex]f_{3}\left( f_{3}\left( x \right) \right) [/tex], ignore terms of order higher than 3, and equate to [tex]g_{3}[/tex], i get the following approximations:

    [tex]\begin{tabular}{ccc}
    $k$ & $a_{k}$ & $b_{k}$ \\
    0 & 0.544772 & 0.544772 \\
    1 & 0.748014 & 0.748014 \\
    2 & 0.154587 & 0.309174 \\
    3 & 0.0114634 & 0.0687804
    \end{tabular}[/tex]

    btw, [tex]a_{0}\approx \frac{1}{1+\sqrt{\log 2}}[/tex]. the relative difference between [tex]f_{3}\circ f_{3}[/tex] and [tex]g[/tex] in [tex]\left[ -2.5,6.6\right] [/tex] is under about 5%. since [tex]f\left( f\left( -\infty \right) \right) =0[/tex], [tex]f_{3}\left( -\infty \right) \approx f_{3}^{-1}\left( 0\right) \approx -0.876858[/tex]. since this value is in [tex]\left[ -2.5,6.6\right] [/tex], it is probably close. this suggests the range of [tex]f[/tex] is something like [tex]\left( -0.876858,\infty \right) [/tex]. i also know that [tex]x<f\left( x\right) <g\left( x\right) [/tex] for all [tex]x[/tex]. [tex]g_{3}\left( x\right) \approx 1+0.693147x+0.240227x^{2}+0.0555041x^{3}[/tex] and [tex]f_{3}\left( f_{3}\left( x\right) \right) \approx 1+0.693147x+0.240227x^{2}+0.0555041x^{3}[/tex].
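    for anyone who wants to reproduce the cubic fit, here's a rough sympy sketch of the truncated-composition matching described above (the solver's initial guess is taken near the values quoted, which is my choice, not part of the method):

[code]
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:4')
f3 = sum(a[k] * x**k for k in range(4))        # cubic ansatz

comp = sp.expand(f3.subs(x, f3))               # f3(f3(x)), fully expanded
target = sp.series(2**x, x, 0, 4).removeO()    # series of 2^x to order 3

# match coefficients of x^0..x^3, ignoring higher-order terms
eqs = [comp.coeff(x, k) - target.coeff(x, k) for k in range(4)]
print(sp.nsolve(eqs, list(a), [0.5, 0.75, 0.15, 0.01]))
[/code]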

    abel's equation. if [tex]A[/tex] is a function satisfying [tex]g\left( x\right) =A^{-1}\left( A\left( x\right) +1\right) [/tex] and [tex]f\left( x\right) =A^{-1}\left( A\left( x\right) +\frac{1}{2}\right) [/tex], then [tex]f\circ f=g[/tex]. if [tex]A[/tex] satisfies this version of abel's equation ([tex]g\left( x\right) =A^{-1}\left( A\left( x\right) +1\right) [/tex]), then so does [tex]A+c[/tex] for any constant [tex]c[/tex]. therefore, without loss of generality, we can assume that [tex]A\left( 0\right) =0[/tex]. then if we assume [tex]A_{n}\left( x\right) =\sum_{k=0}^{n}a_{k}x^{k}=\sum_{k=0}^{n}\frac{b_{k}}{k!}x^{k}[/tex] and say [tex]A_{6}^{-1}\circ \left( A_{6}+1\right) =g_{6}[/tex], i get the following approximations for the [tex]a_{k}[/tex] and [tex]b_{k}[/tex]:

    [tex]\begin{tabular}{ccc}
    $k$ & $a_{k}$ & $b_{k}$ \\
    0 & 0 & 0 \\
    1 & 0.783476 & 0.783476 \\
    2 & 0.261744 & 0.523487 \\
    3 & -0.0122861 & -0.0737167 \\
    4 & -0.0491509 & -1.17962 \\
    5 & 0.0175623 & 2.10748 \\
    6 & 0.0313423 & 22.5665
    \end{tabular}
    [/tex]
    [tex]A_{6}^{-1}\left( x\right) \approx 1.27636x-0.54425x^{2}+0.496751x^{3}-0.397809x^{4}+0.215844x^{5}-0.0468988x^{6}[/tex] and [tex]A_{6}^{-1}\left( A_{6}\left( x\right) +1\right) \approx 1+0.693147x+0.240227x^{2}+0.0555041x^{3}+0.00961812x^{4}+0.00133335x^{5}[/tex] while [tex]g_{5}\left( x\right) \approx 1+0.693147x+0.240227x^{2}+0.0555041x^{3}+0.00961813x^{4}+0.00133336x^{5}[/tex]. [tex]A_{6}^{-1}\left( A_{6}\left( x\right) +\frac{1}{2}\right) \approx 0.545362+0.755607x+0.148081x^{2}-0.0221803x^{3}-0.00908802x^{4}+0.0532461x^{5}[/tex]. note the comparison to [tex]f_{3}[/tex] above. it seems that [tex]f_{3}[/tex] is the better approximation.

    it's kinda weird how [tex]f_{3}[/tex] looks like an exponential but [tex]\log f_{3}[/tex] looks like a log.
     
    Last edited: Nov 23, 2003
  18. Nov 24, 2003 #17
    schröder's equation

    a perhaps nicer related functional equation is schröder's equation. given [tex]g[/tex], the goal is to find a function [tex]s[/tex] such that for a nonzero constant [tex]\mu[/tex], [tex]s \circ g= \mu s[/tex]. if [tex]s[/tex] is invertible and the domains/ranges match up right, then [tex]g^{m} \left( x \right) =s^{-1} \left( \mu ^{m} s \left( x \right) \right)[/tex]. the solutions to schröder's equation form a vector space, a kind of eigenspace.

    there is a relationship between the set of solutions to schröder's equation and the set of solutions to abel's equation given by the [tex]log[/tex] function. at least when domains are possibly complex, the solution set of one is empty if and only if the solution set of the other is empty.
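    concretely, if [tex]s[/tex] solves schröder's equation [tex]s\circ g=\mu s[/tex] with [tex]\mu >0[/tex] and [tex]\mu \neq 1[/tex], then [tex]A=\log _{\mu }\circ s[/tex] solves this abel equation wherever [tex]s>0[/tex]:

    [tex]A\left( g\left( x\right) \right) =\log _{\mu }\left( s\left( g\left( x\right) \right) \right) =\log _{\mu }\left( \mu s\left( x\right) \right) =1+\log _{\mu }\left( s\left( x\right) \right) =A\left( x\right) +1.[/tex]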
     
    Last edited: Nov 24, 2003
  19. Nov 24, 2003 #18
    does anyone have access to springer articles?

    can you send link.springer.de/link/service/journals/00010/bibs/1061001/10610001.htm to phoenix@alephnulldimension.net?
     
    Last edited by a moderator: Apr 20, 2017
  20. Nov 26, 2003 #19
    a similar thing was done with abel's equation. now i get how defining f (or the solution to abel's/schröder's equation) on some interval will then lock in place all the other values, but i don't think this can be done in an arbitrary way. how is it ensured, for example, that an arbitrary assignment of values to f over the range [-oo,b] will lead to satisfaction of f(f(x))=2^x?

    btw, from my estimates, b should be about -0.88 and f(0) should be about 0.545.
     
    Last edited: Nov 26, 2003