Fractional iterates of analytic functions

1. Dec 24, 2004

phoenixthoth

...having at least one fixed point.

Let g be a given function with fixed point p.

Say g is defined on R, though it could also be C.

Here's how we can approximate the fractional iterates of g:

Expand the Taylor series for the nth iterate of g, denoted g^n, about p.

g^n(p)=p.

(g^n)'(p)=g'(g^(n-1)(p))*g'(g^(n-2)(p))*...*g'(g(p))*g'(p)=g'(p)^n.

That is, the first derivative at p raised to the nth power; each factor equals g'(p) because g^k(p) = p for every k.

(g^n)''(p) = ( g'(g^(n-1)(x)) * g'(g^(n-2)(x)) * ... * g'(g(x)) * g'(x) )' |_(x=p).
When you carry out the differentiation and sum the resulting geometric series, you get
(g^n)''(p) = g'(p)^(n-1) (g'(p)^n - 1) g''(p) / (g'(p) - 1).

The third derivative at p is considerably more complex but a CAS can do it easily.

Therefore, a second-order approximation to g^n is this:
g^n(x) = g^n(p) + (g^n)'(p)(x-p) + (1/2)(g^n)''(p)(x-p)^2 + O((x-p)^3)
g^n(x) = p + g'(p)^n (x-p) + (1/2)( g'(p)^(n-1) (g'(p)^n - 1) g''(p) / (g'(p) - 1) ) (x-p)^2 + O((x-p)^3).
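These coefficient formulas are easy to sanity-check. A minimal sketch in Python (a toy example of my own, not from the post): take g(x) = x/2 + x^2/4, which fixes p = 0 with g'(0) = g''(0) = 1/2, and compare the (g^n)''(p) formula at n = 2 against a finite-difference second derivative of g(g(x)):

```python
def g(x):
    # toy map with fixed point p = 0, g'(0) = 1/2, g''(0) = 1/2
    return x / 2 + x**2 / 4

def iterate(f, n, x):
    # n-fold composition f^n(x) for a nonnegative integer n
    for _ in range(n):
        x = f(x)
    return x

p, gp, gpp, n = 0.0, 0.5, 0.5, 2

# formula from the post: (g^n)''(p) = g'(p)^(n-1) (g'(p)^n - 1) g''(p) / (g'(p) - 1)
predicted = gp**(n - 1) * (gp**n - 1) * gpp / (gp - 1)

# second derivative of g^n at p by central finite differences
h = 1e-4
numeric = (iterate(g, n, p + h) - 2 * iterate(g, n, p) + iterate(g, n, p - h)) / h**2

print(predicted, numeric)  # both approximately 3/8 = 0.375
```

Both values come out to 3/8, matching the exact expansion g(g(x)) = x/4 + 3x^2/16 + O(x^3).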

Nothing in these formulas requires n to be a positive integer, so they let us take n to be a fraction, or any number at all.

Example:
g(x) = e^x. We want to find a semi-iterate, or so-called "half-exponential", function h such that h(h(x)) = g(x); in the above, this is accomplished by taking n = 1/2.

Let p be a (complex) fixed point of g. Fix a branch of the square root function; i.e., make p^(1/2) consistent throughout.
g^n(x) = p + g'(p)^n (x-p) + (1/2)( g'(p)^(n-1) (g'(p)^n - 1) g''(p) / (g'(p) - 1) ) (x-p)^2 + O((x-p)^3), ok?

Well, g'(x) = e^x and g''(x) = e^x, so g'(p) = g''(p) = e^p = p, since p is a fixed point. So we have:
g^n(x) = p + p^n (x-p) + (1/2)( p^(n-1) (p^n - 1) p / (p - 1) ) (x-p)^2 + O((x-p)^3).

If n=1/2...

h(x) = p + SQRT(p)(x-p) + (1/2)( SQRT(p)(SQRT(p) - 1) / (p - 1) ) (x-p)^2 + O((x-p)^3).

There you have it: a half-iterate of e^x, to second order. I have a feeling that for real x this will all turn out real, but I'm not sure.
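A minimal numerical check of this (my own sketch; it assumes the particular fixed point p ≈ 0.318 + 1.337i that satisfies p = Log p on the principal branch, which can be found by iterating log, since log is contracting near p):

```python
import cmath

# p solves p = e^p; equivalently p = log(p) on the principal branch,
# and |d/dz log(z)| = 1/|p| < 1 there, so plain iteration converges.
p = 1 + 1j
for _ in range(200):
    p = cmath.log(p)
# p is approximately 0.3181 + 1.3372i

sqrt_p = cmath.sqrt(p)  # fix one branch of the square root, as in the post

def h(x):
    # second-order half-iterate from the series above, with n = 1/2
    u = x - p
    return p + sqrt_p * u + 0.5 * (sqrt_p * (sqrt_p - 1) / (p - 1)) * u**2

x = p + 0.01
err = abs(h(h(x)) - cmath.exp(x))
print(err)  # small: the truncation error is O(|x - p|^3)
```

For x that close to p, the composition error h(h(x)) - e^x comes out tiny, consistent with the O((x-p)^3) truncation.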

Last edited: Dec 24, 2004
2. Dec 24, 2004

Hurkyl

Staff Emeritus
I know it's a good bet, but do you know that exp(z) has a fixed point?

3. Dec 24, 2004

phoenixthoth

Way back when, I approximated it with Mathematica. It had a fixed point with modulus less than 2 or 3.

I don't have a CAS handy, but I claim that
g(y) = exp(y cot(y)) sin(y) - y has a zero, where g maps R to R. The best way to see this is to look at the graph (I hope I'm right!).

Then, taking such a y, let x = y cot(y), assuming cot(y) is defined. Here, x is real.

Then cot(y) = x/y; i.e., tan(y) = y/x.

Then e^z = e^(x+iy) = e^x cos(y) + i e^x sin(y) = e^(y cot(y)) cos(y) + i e^(y cot(y)) sin(y).

Since y is a zero of g, we get that e^(y cot(y)) = y csc(y). So

e^x cos(y) + i e^x sin(y) = (y csc(y)) cos(y) + i (y csc(y)) sin(y)
= y cot(y) + i y.

But x = y cot(y), so this equals x + iy, showing that x + iy is a fixed point of e^z (where z = x + iy).

This argument is predicated on g(y) = exp(y cot(y)) sin(y) - y having a real zero.

Could you check that for me?

edit: it switches sign on [-1.4, -1.3] and is continuous on that interval, so it has a zero there.
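A quick check without a CAS (my own sketch, bisecting on that interval and then verifying the resulting x + iy really is fixed by e^z):

```python
import math, cmath

def g(y):
    # g(y) = exp(y*cot(y))*sin(y) - y, as defined above
    cot = math.cos(y) / math.sin(y)
    return math.exp(y * cot) * math.sin(y) - y

a, b = -1.4, -1.3
assert g(a) * g(b) < 0  # sign change, so a zero lies in between

for _ in range(100):    # bisection
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m

y = (a + b) / 2
x = y * math.cos(y) / math.sin(y)  # x = y*cot(y)

z = complex(x, y)
print(z, abs(cmath.exp(z) - z))  # exp(z) is approximately z
```

The zero lands near y ≈ -1.337, giving the fixed point z ≈ 0.318 - 1.337i (the conjugate of the one usually quoted, as expected).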

Last edited: Dec 25, 2004
4. Dec 25, 2004

Hurkyl

Staff Emeritus
Well, slight problem: 1/sin y = csc y, not sec y... so if y is a zero of g, you get exp(y cot y) = y csc y... but I doubt that was an essential mistake.

I wonder just what this says... this raises some questions:

What is the radius of convergence? Maybe exp doesn't have any half-iterates!

Are there any choices to make besides the choice of square root of p for g'(p)^(1/2)? If not, this would imply the nice fact that exp has exactly two analytic half-iterates! What about the general case?

(Of course, I mean half-iterates that have p as a fixed point and are analytic near p.)

There are interesting questions, too, about the general behavior of a half-iterate of exp, but ones that probably aren't relevant to this line of attack.

Last edited: Dec 25, 2004
5. Dec 25, 2004

phoenixthoth

you're right. I also doubt it's an essential mistake.

At least in the domain of formal complex power series it does have a half-iterate, though granted, that's probably not interesting. I have no idea how to find the radius of convergence without a nice formula for the coefficients. Maybe the ratios of the coefficients are doable, at least in this special case.

I'm not sure I follow. There is more than one conceivable value for SQRT(p), so more than one expansion will (at least formally) solve g o g = exp. (Sorry if I changed my g's around.)

There is an article on this (I think by Kevin Iga). It may be for e^x - 1, though, which has a real fixed point: 0. I don't think he used power series centered at the fixed point, though.
http://www.math.niu.edu/~rusin/known-math/99/sqrt_exp
http://math.stanford.edu/~iga/iter.dvi

And... I think there is more than one fixed point of e^z. So we'd have one series depending on one fixed point and another depending on another. Would they be different series? I would assume so. If so, the radius of convergence would probably be less than the distance between fixed points; either that, or the radius is zero. However, from some numerical examples I've done, the series does seem to converge on some disk.

edit: I fixed my mistake. All I did was use the facts that (1/sin)(cos) = cot, which is true, and that (1/sin)(sin) = 1. I just wrote sec when I meant csc, but manipulated it as if it were csc.

Last edited by a moderator: May 1, 2017
6. Dec 25, 2004

Hurkyl

Staff Emeritus
Well, think of this: if g fixes all of the fixed points of exp, then g cannot have any points of order 2, and thus no points of order 4. Therefore, exp cannot have any points of order 2 -- all you need to do now is show exp has a point of order 2.
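For what it's worth, that last step can be done numerically (my own sketch; the branch choice below is an assumption, and other pairs of distinct branches should work too). A 2-cycle z ≠ w of exp satisfies w = e^z and z = e^w, i.e. w = log(z) + 2*pi*i*m and z = log(w) + 2*pi*i*n for some integers m, n; with (m, n) = (1, 0), the composed map z -> log(log(z) + 2*pi*i) is a strong contraction, so plain iteration finds the cycle:

```python
import cmath, math

# iterate the contraction z -> log(log(z) + 2*pi*i)
z = 1 + 1j
for _ in range(100):
    z = cmath.log(cmath.log(z) + 2j * math.pi)
w = cmath.exp(z)

print(z, w)  # roughly 1.94 + 1.44i and 0.88 + 6.92i
print(abs(cmath.exp(w) - z), abs(z - w))  # exp(exp(z)) = z, yet exp(z) != z
```

So exp does have a point of order 2, which by the argument above rules out a g that fixes all the fixed points of exp.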

My gut really didn't like the thought of g being entire -- I'm happy to see a proof of that fact. I'm still unsure about half-iterates with fixed points, though.

I've been trying to imagine how the Riemann surface of a half-iterate of exp might look, without any success.

It dawns on me that there's the off chance that all of the analytic half-iterates are just different branches of the same function, but it's somewhat doubtful.

That post gives a way to "construct" a real analytic half-iterate... now I'm curious about how its continuation might look.

7. Dec 25, 2004

phoenixthoth

I have a *long* formula (rather, an algorithm) for computing the nth coefficient of the series for a function centered at its fixed point. I've attached it below.

Now I guess I'll try looking at the ratios of the coefficients to attack the radius of convergence problem. I'd be happy if it has a nonzero radius and I don't expect it to be entire, either.

Oh, and to solve z = exp(z), I think we can use the ProductLog function, which is the inverse of z exp(z). We see that
1/z = exp(-z), so -1 = -z exp(-z). Then ProductLog(-1) = -z, so
z = -ProductLog(-1).
I would assume that ProductLog (aka the Lambert W function) has branches like log, and so there are infinitely many fixed points. You'd expect this, as e^z is periodic. I haven't tried to solve exp(exp(z)) = z, but it's hopefully just as easy.
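This can be checked without a CAS (my own sketch): each fixed point satisfies z = Log(z) + 2*pi*i*k for some integer k -- the same branch structure as ProductLog -- and that map is a contraction near its fixed point, so iteration produces one fixed point of e^z per branch:

```python
import cmath, math

def fixed_point(k, z=1 + 1j, iters=200):
    # z = e^z on branch k means z = log(z) + 2*pi*i*k;
    # the derivative of the right side has modulus 1/|z| < 1, so iterate it
    for _ in range(iters):
        z = cmath.log(z) + 2j * math.pi * k
    return z

for k in (1, 2, 3):
    z = fixed_point(k)
    print(k, z, abs(cmath.exp(z) - z))  # residual is essentially zero per branch
```

One fixed point per branch, with imaginary parts growing roughly like 2*pi*k, which matches the periodicity intuition.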

Oh, and about the attachment: I haven't rigorously proofread it. I just popped it out one night and shelved it until now.

8. Dec 25, 2004

tongos

I heard Lambert, so I'll post. And approximations.

I had a theory about the x^x = y function during the summer of last year. It's a pretty simple one, too. Let the "sensitivity" of a variable be its derivative; for instance, x^2 is generally more sensitive to change than x is.

Say x^x = 5.

I approximate x as 2, solve 2^x = 5, and get a new value of x, namely log5/log2.

Two equations:
y^(log5/log2) = k (derivative taken at y = 2)
2^x = b (derivative taken at x = log5/log2)

Then you take the weighted average of the two values, with the derivatives as the weights: each value counts in proportion to its derivative, i.e., its "sensitivity". Somehow, I came out with this as a good approximation: (x dx + y dy)/(dx + dy), where x is the approximation.

Just an idea.