
Looking for nontrivial solutions to this series equation

  1. Aug 27, 2004 #1
    the equation is this:
    [tex]\sum\limits_{n=0}^{\infty }ra_{n}x^{n}=\sum\limits_{n=0}^{\infty }a_{n}\left[ \sum\limits_{k=0}^{n}\binom{n}{k}\left( -2\right) ^{n-k}x^{2k}\right] [/tex]

The goal is to solve for the a_n, given that I want them to not all be 0 and that r is free to be anything except 0 and 1. Complex solutions are admissible if those are all there are.

    It seems like the odd a_n's are all zero.

    This question arose when I tried to solve Schroder's equation
    f(g(x))=rf(x)
    for g(x)=x^2-2
in which the goal is to determine f (which I'm assuming for the moment is [tex]\sum\limits_{n=0}^{\infty }a_{n}x^{n}[/tex]) and r, not 0 or 1, that fit the equation on the biggest domain possible, if not all of R or C. Either that, or prove that f must be identically 0 or some such trivial solution.

    The goal is to fractionally iterate functions. x^2-2 is known to have fractional iterates that are nicely defined so I was hoping that Schroder's equation would have a nice solution.
     
  3. Aug 27, 2004 #2

    HallsofIvy

    User Avatar
    Staff Emeritus
    Science Advisor

I assume that you recognize that [tex]\sum\limits_{k=0}^{n}\binom{n}{k}\left( -2\right) ^{n-k}x^{2k}[/tex]
    is [tex]\left( x^{2}-2\right) ^{n}[/tex]?
     
  4. Aug 28, 2004 #3
    How does that help?
     
  5. Aug 28, 2004 #4
    I am not sure that it is useful, but here are some of my thoughts on the equation.

    1. If f exists, f is even, because,
    f(x) = f(x^2-2)/r = f(-x) (r !=0)
This coincides with your statement that "the odd a_n's are zero".

    2. f = 0 at 0, 1, 2, -1, -2 (if f exists)
    Consider the "fixed points" of g(x) i.e. find x such that g(x) = x.
    x^2 - 2 = x => x=-1 or 2
    let y be a fixed point. Then at y,
    f(y) = f(g(y))/r = f(y)/r
Now r does not equal 1, so f(y) can only be zero. So f(-1) = f(2) = 0, and because f is even, f(1) = f(-2) = 0. Also, f(-2) = f(0^2-2) = rf(0) and r != 0, so f(0) = 0.

Since f(y) = f o g(y)/r = f o g o g(y)/r^2 = f o g o g o g(y)/r^3, at first I "hoped" that the iterate g o g o g.....(y) would converge for a certain range of y. I tried, but the iterate seems not to converge except at -2, -1, 0, 1, 2. If the iterate converges for a certain range of y, then f ought to be zero in that range of y.
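A few lines of code illustrate this (a quick sketch; the names are mine): starting points with |x| > 2 escape to infinity, while points in [-2, 2] stay bounded but wander chaotically rather than converging, except at the handful of points that land on a fixed point after finitely many steps.

```python
def g(x):
    return x * x - 2

# Forward orbits of g: |x| > 2 escapes to infinity; points in [-2, 2]
# stay bounded but do not converge, except at 0, +/-1, +/-2, which hit
# a fixed point (-1 or 2) after finitely many steps.
for y0 in (0.5, 2.0, 2.5):
    y = y0
    for _ in range(10):
        y = g(y)
    print(y0, "->", y)
```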

    Moreover, I just wonder if one may convert the equation into a differential equation....maybe it's not possible, I'm not sure.
     
    Last edited: Aug 28, 2004
  6. Aug 28, 2004 #5

    matt grime

    User Avatar
    Science Advisor
    Homework Helper

Because the way to solve series equations like this is to equate coefficients of powers of x, and in the original formula it isn't clear what the coefficient of any given power is; with that simplification it becomes clearer.
     
  7. Aug 28, 2004 #6
    Just think of something new...

    Maybe one should consider the "inverse" of g.
    y = x^2 - 2
    x=+/-sqrt(y+2)
We take the "positive" part of the "inverse" and denote the "inverse" of g by g^-1. Then, f(x) = r*f o g^-1(x) = r^2*f o g^-1 o g^-1 (x)...for all positive x. So we consider instead the iterate g^-1 o g^-1 o g^-1 ... (x). To see whether the iterate converges, note that dg^-1(x)/dx = d(sqrt(x+2))/dx = 1/(2*sqrt(x+2)). For x>=0, dg^-1(x)/dx < 0.9. By the mean value theorem, (for p, q >= 0)

    |g^-1(p) - g^-1(q)|
    = |dg^-1/dx (u)| |p - q| for some u from [p, q]
    < 0.9*|p - q|

now g^-1 is a function from [0, inf) into [0, inf) and so by the Banach fixed point theorem, there is a unique fixed point for the function. (is this correct?) In fact the fixed point is just x = 2. Thus the iterate converges for all x>=0.
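A quick numerical sketch of this contraction (the function name is mine): repeated application of the positive inverse branch pulls any starting point toward the fixed point x = 2.

```python
import math

def g_inv(x):
    # positive branch of the inverse of g(x) = x^2 - 2
    return math.sqrt(x + 2)

# Repeated application contracts toward the fixed point x = 2,
# illustrating the Banach fixed point argument above.
x = 100.0
for _ in range(40):
    x = g_inv(x)
print(x)  # converges to 2
```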

    Now let g^-n be the n times iterate of g^-1. By the result above,

    f(x) = r^n*f o g^-n(x)

    We let n tend to infinity on the RHS. Note that lim g^-n(x) = g^-1(2) = 2 for x>=0. If |r|<1, then lim r^n = 0 and so f(x) = 0 for all x>=0. Since f is even, f(x) = 0 for all x.

    Thus, IF |r|<1, and f is differentiable, then it "seems" that the only solution is the trivial solution.
     
  8. Sep 2, 2004 #7
isn't it a bit easier to equate coefficients in the form given, as opposed to the form with (x^2-2)^n in it? how do you equate coefficients when you have x^n's on one side and (x^2-2)^n's on the other side?

    wong: thanks for that proof. I will look it over when i get more time and see if it's correct. if it is that would be enormously cool although what it means is bad cuz i'd like there to be a nontrivial solution... in addition, the techniques you use may be quite helpful for me. the next function to work on is 2^x as opposed to x^2-2...
     
  9. Sep 2, 2004 #8
    ok cool but how can we prove that |r|<1?

    could there be a nontrivial f in the case that |r|>1?
     
  10. Sep 4, 2004 #9
    Frankly, I don't know for the case |r|>1. And in fact my so-called "proof" only says that the only f *on an interval containing a fixed point* e.g. x=2 that satisfies |r|<1 is the trivial solution. Yes it is quite restrictive and you may say it's rubbish...

    For the case |r|>1, your method of series expansion is a good idea. I didn't solve it and I guess that it may not be easy to solve...
     
  11. Sep 4, 2004 #10
    oh....I've been silly.

    This is a partial result applying to all r.

    Your method assumes that f(x) has a series expansion about 0. That means f(x) should have derivatives of all orders inside it's radius of convergence (by Abel's theorem?). Now assume the interval of convergence contains the point -2. Then by your equation,

    [tex]f^{\prime}(x) = 2xrf^{\prime}(x^{2}-2)[/tex]
    [tex]f^{\prime}(0) = 2(0)rf^{\prime}(-2)=0[/tex]

Think about it. Whenever you differentiate [tex]f(x^{2}-2)[/tex] you get an extra factor 2x. Thus by applying Leibniz's rule of differentiation, you will find that derivatives of f(x) of all orders vanish at 0. This means that the radius of convergence of your series cannot exceed 2, since otherwise the only "series solution" is the zero solution.
     
  12. Sep 4, 2004 #11

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Well, one thing we could do is to track down the points whose values we know.

    From the functional equation, f(x^2-2) = rf(x), we see that if x^2-2 = x then f(x) = 0. The solutions to this equation are x = -1, x = 2.

    Then, look for period 2 points under the iteration x -> x^2 - 2:

    (x^2-2)^2 - 2 = x
    x^4 - 4x^2 + 4 -2 = x
    x^4 - 4x^2 - x + 2 = 0

    Factoring out (x-2)(x+1) leaves us with x^2 + x - 1, or x = (+/-sqrt(5)-1)/2, that is 0.618... or -1.618...

    so we have f(x) = f(g(g(x)) = r f(g(x)) = r^2 f(x), so we again have that f(x) = 0 for x = 0.618 and -1.618.
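A quick numerical check of these period-2 points (a sketch):

```python
from math import sqrt

def g(x):
    return x * x - 2

# the two genuine period-2 points, the roots of x^2 + x - 1 = 0
for x in ((-1 + sqrt(5)) / 2, (-1 - sqrt(5)) / 2):
    print(x, g(g(x)))  # g(g(x)) returns to x
```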

    Similarly, if x is any periodic point under action by g, then f(x) = 0.

    My gut tells me that these periodic points are dense in the interval [-2, 2], so if f is supposed to be continuous, then f = 0 on this interval.
     
  13. Sep 4, 2004 #12

    matt grime

    User Avatar
    Science Advisor
    Homework Helper

    irrespective of when you equate coefficients, that identity will remain true, but if you equate them before applying it you'll have a complicated set of identities to solve with that summation involved in some possibly disguised form, so why not use it before hand?


you'll presumably have tried to work it out, we almost certainly haven't bothered, but it is still a good suggestion, and one you should at least have tried, seeing as you couldn't see another way of solving it.
     
    Last edited: Sep 4, 2004
  14. Sep 4, 2004 #13
    A somewhat "funny" result: if f(x) is required to satisfy the relation only on the interval (-1,1), then *any* function would trivially satisfy the relation.

    The reason is that the relation is no relation when it doesn't make sense. For example when x=1/2, your relation reads "rf(1/2)=f(-7/4)". Since -7/4 is not in our interval of consideration, the relation is "null"/not a restriction on the value of f(1/2).

If -1 < x < 1, then -2 <= x^2-2 < -1. Note that [-2,-1) and (-1,1) are disjoint, so your relation places no restriction on the value of the function in the "forward" direction.

    On the other hand, assume that x = z^2-2 for some z. Then if -1<x<1, one can check that 1<z<sqrt(3) or -sqrt(3)<z<-1. Again your relation places no restriction on the functional value in the "backward" direction.

    The assertion is thus proved.
     
  15. Sep 4, 2004 #14
    lots of good thoughts. thanks.

    still assuming that f is differentiable, we have as before that
    f'(g(x))g'(x)=rf'(x).

    let p be a fixed point of g(x)=x^2-2; eg, p is either -1 or 2 in this case. then
    f'(g(p))g'(p)=rf'(p)
    and since g(p)=p,
    f'(p)g'(p)=rf'(p).
    then f'(p)(g'(p)-r)=0.
    therefore, either f'(p)=0 or r=g'(p).

    if we plug p back into the original equation we get
    f(g(p))=rf(p) which reduces to
    f(p)=rf(p) so that
    f(p)(1-r)=0.
    therefore f(p)=0 or r=1. assume r!=1. then f(p)=0.

    i want to say that r=g'(p)... in this case, r could be -2 or 4 both of whose abs values exceed 1. but i have to rule out the possibility that f'(p)=0.

    hrm... if i add the condition (making up conditions as i go along here -- what the who?) that f is invertible at all fixed points of g then f'(p)!=0. :P

    i would like f to be invertible anyway so that i can write g as
    f^{-1}(rf(x)).

    the uberultimate goal is to find a nice formula for the nth iterate of g:
    f^{-1}(r^n f(x)).

    anyways, i'd like f to have an inverse anyway so assuming its invertible at fixed points of g is no sweat. hence f'(p)!=0 and r=g'(p). is this ok so far?

    in this case, let's say now that r=4=g'(2). (2 is a fixed point of g.)

    f(x^2-2)=4f(x).

    here's the mindjob that gets me: differentiate and plug in 2:
    2xf'(x^2-2)=f'(x) ./ x-->2
    4f'(0)=f'(2)
    now differentiate and plug in 0:
    2xf'(x^2-2)=f'(x) ./ x-->0
    f'(0)=0

    therefore, f'(2)=4*0=0. but i already assumed f'(2)!=0. does this basically prove that there is no invertible f that solves the equation at least in an interval containing 2?
     
  16. Sep 6, 2004 #15
    progress

    Mathematica comes to the rescue.

    f(x) is to be determined; g(x)=x^2-2; p=2, a fixed point of g.

    I told mathematica to solve the set of equations basically equating the coefficients in the two series f(g(x)) and 4f(x). As above, r=g'(p)=4 in the case that f'(p)!=0.

    I could have sworn I've done this before but this time for some reason I got results.

    I get the following:
    f(p)=0. This is obtainable from the original equation assuming that r is not 0 nor 1.

    f'(p) I get to be arbitrary (which indicates that there are infinitely many solutions depending on a constant). So I let f'(p)=1 because I figured that would be easy to work with.

    f''[p]=-1/6

    f'''[p]=1/15

    f^(4)[p]=-3/70

    f^(5)[p]=4/105

    I resolved the system of equations starting with series that were of order higher than five and the first five derivatives of f came out the same except perhaps the fifth.

    This leads us to a series approximation for f for x near p=2:
f(x) ~ (x-2) - (x-2)^2/12 + (x-2)^3/90 - (x-2)^4/560 + (x-2)^5/3150 - ...
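The coefficients above can also be recovered without Mathematica. Here is a sketch in Python using exact rational arithmetic (helper names are mine): writing x = 2 + t, so that g(2+t) - 2 = 4t + t^2, the equation f(g(x)) = 4f(x) becomes f(4t + t^2) = 4f(t) as series in t, and equating the coefficient of t^k determines a_k one at a time.

```python
from fractions import Fraction as Fr

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (index = power of t)
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# About the fixed point p = 2: f(2+t) = t + a2 t^2 + a3 t^3 + ...
# (a1 is arbitrary; set to 1).  Since g(2+t) - 2 = 4t + t^2,
# Schroder's equation f(g(x)) = 4 f(x) reads f(4t + t^2) = 4 f(t).
N = 6
u = [Fr(0), Fr(4), Fr(1)]            # u(t) = 4t + t^2
pows = [[Fr(1)]]                     # u^0, u^1, ..., u^(N-1)
for _ in range(N - 1):
    pows.append(poly_mul(pows[-1], u))

a = [Fr(0), Fr(1)]                   # a0 = 0 since f(2) = 0; a1 = 1
for k in range(2, N):
    # coeff of t^k: a_k*(4^k - 4) + sum_{j<k} a_j * [u^j]_k = 0
    s = sum(a[j] * pows[j][k] for j in range(1, k) if k < len(pows[j]))
    a.append(-s / (Fr(4) ** k - 4))

print(a[2:])  # [Fraction(-1, 12), Fraction(1, 90), Fraction(-1, 560), Fraction(1, 3150)]
```

This reproduces -1/12, 1/90, -1/560, 1/3150 exactly, matching the derivative values above.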

You can verify numerically that for x near 2, f(x^2-2) ~ 4 f(x). Example:
    Let x=2.01. f(2.01)~0.00999168, f(g(2.01))=f(2.0401)~0.039967, and note that 4*0.00999168~0.039967. Yippie!

    I plotted a graph of f(g(x))-4f(x) and get a function that is close to zero for x near 2; in fact it is indistinguishable from the x-axis for x between 1 and 3 from the right/wrong plot range.

    I again used mathematica to find a series for the inverse of f. Let F denote this series:
F(x) ~ 2 + x + x^2/12 + x^3/360 + x^4/20160 + x^5/1814400 - 187x^6/3110400 - 8593x^7/304819200 - ...

    Next, I plotted F(4f(x)) and compared that with the graph of g(x) near 2. They're very close for x between 1 and 3. I told it to simplify F(4f(x)) and I get g(x) + terms of order 6 and greater which is to be expected as I only found the first five derivatives of f.

    I'm assuming that to find some fractional nth iterates of g, I can use the formula
    F(4^n f(x)) for x near 2.


    Example: the half iterate of 2.01 is F(2f(2.01))~2.02002.
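One can sanity-check the half iterate by composing it with itself; a sketch using the truncated series for f and F from above (names are mine; accuracy is limited by the truncation):

```python
def g(x):
    return x * x - 2

def f(x):
    # truncated series for f about p = 2
    u = x - 2
    return u - u**2/12 + u**3/90 - u**4/560 + u**5/3150

def F(x):
    # truncated inverse series of f, good for small x
    return 2 + x + x**2/12 + x**3/360 + x**4/20160 + x**5/1814400

def half(x):
    # half iterate: F(r^(1/2) f(x)) with r = 4, so r^(1/2) = 2
    return F(2 * f(x))

x = 2.01
print(half(x))               # about 2.02002
print(half(half(x)), g(x))   # applying it twice nearly reproduces g
```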

    I suppose my next task is to try to find a formula for the coefficients of f and F...

    btw, the formulas I got for the derivatives of f are as follows (up to the 4th):
    [tex] f\left( p\right) =0[/tex]
    [tex]f^{\prime }\left( p\right) =1[/tex]
    [tex]f^{\prime \prime }\left( p\right) =\frac{g^{\prime \prime }\left( p\right) }{r\left( 1-r\right) }[/tex]
    [tex]f^{\prime \prime \prime }\left( p\right) =\frac{3g^{\prime \prime }\left( p\right) ^{2}+\left( r-1\right) g^{\prime \prime \prime }\left( p\right) }{r\left( 1+r\right) \left( r-1\right) ^{2}}[/tex] and
    [tex]f^{\left( 4\right) }\left( p\right) =\frac{-3\left( 1+5r^{2}\right) g^{\prime \prime }\left( p\right) ^{3}+2r\left( r-1\right) \left( 5r+2\right) g^{\prime \prime }\left( p\right) g^{\left( 3\right) }\left( p\right) -r\left( r-1\right) ^{2}\left( 1+r\right) g^{\left( 4\right) }\left( p\right) }{r^{2}\left( r-1\right) ^{3}\left( r+1\right) \left( 1+r+r^{2}\right) }[/tex]
    where
    [tex]r=g^{\prime }\left( p\right) [/tex].

    Is there a pattern?

    edit:
    A formula I get for the nth iterate of g near a fixed point p is this:
    [tex]g^{n}\left( x\right) =p+r^{n}\left( x-p\right) +\frac{r^{n-1}\left( r^{n}-1\right) g^{\prime \prime }\left( p\right) }{2\left( r-1\right) }\left( x-p\right) ^{2}+O\left( x-p\right) ^{3}[/tex].

    In the case that g(x)=x^2-2 and p=2, we get
    [tex]g^{n}\left( x\right) =2+4^{n}\left( x-2\right) +\frac{4^{n-1}\left( 4^{n}-1\right) }{3}\left( x-2\right) ^{2}+O\left( x-2\right) ^{3}[/tex].
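A quick numerical sanity check of this expansion against direct composition (a sketch; the function names are mine):

```python
def g(x):
    return x * x - 2

def g_n_approx(x, n):
    # second-order expansion of the nth iterate about p = 2
    u = x - 2
    return 2 + 4**n * u + (4**(n - 1) * (4**n - 1) / 3) * u**2

x = 2.01
print(g_n_approx(x, 2), g(g(x)))  # agree up to the cubic remainder
```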

    In particular, the first few terms for the half iterate of g is this:
    [tex]g^{1/2}\left( x\right) =2+2\left( x-2\right) +\frac{1}{6}\left( x-2\right) ^{2}+O\left( x-2\right) ^{3}[/tex].

    The next thing I want to work on is when g(x)=2^x...
     
    Last edited: Sep 6, 2004
  17. Sep 7, 2004 #16
    Sorry for posting twice but I couldn't edit my last post for some reason...

One cool thing about working with g(x)=x^2-2 is that there is a known formula for the iterates of g, which was found by Ramanujan and perhaps others.

    [tex]g^{n}\left( x\right) =y^{2^{n}}+y^{-2^{n}}[/tex] where
    [tex]y=\frac{x+\sqrt{x^{2}-4}}{2}[/tex].
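This closed form is easy to check against direct composition, and it even makes sense for fractional n (for x >= 2, where the square root is real); a quick sketch:

```python
import math

def g(x):
    return x * x - 2

def g_n(x, n):
    # closed form for the nth iterate, valid for x >= 2;
    # n need not be an integer
    y = (x + math.sqrt(x * x - 4)) / 2
    t = y ** (2 ** n)
    return t + 1 / t

x = 2.5
print(g_n(x, 3), g(g(g(x))))        # integer iterates match composition
print(g_n(g_n(x, 0.5), 0.5), g(x))  # half iterate applied twice gives g
```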

    I work out the first three terms in the series expansion of the above formula for the nth iterate of g and get this:
    [tex]g^{n}\left( x\right) =2+4^{n}\left( x-2\right) +\frac{1}{3}4^{n-1}\left( 4^{n}-1\right) \left( x-2\right) ^{2}+\frac{1}{45}2^{2n-3}\left( 4-5\cdot 4^{n}+16^{n}\right) \left( x-2\right) ^{3}+O\left( x-2\right) ^{4}[/tex].

    When I do it by solving Schroeder's equation (f(g(x))=rf(x)) and find a series for [tex]f^{-1}(4^{n}f(x))[/tex], which is supposed to be the nth iterate of g, I get the above series.

    That means that there is probably some kind of uniqueness going on and frankly, I'm surprised they're the same since I assumed the first derivative of f at 2 is 1 when it is arbitrary. Maybe it would all work out when I simplify [tex]f^{-1}(4^{n}f(x))[/tex] regardless of what I assume f'(2) to be.

    Blood pumps through the method of finding fractional iterates via Schroder's equation once again!
     
  18. Sep 10, 2004 #17
    duh?

    I've been toying around with this and found a formula for the nth iterate of g centered at p, a fixed point of g.

    [tex]p+g^{\prime }\left( p\right) ^{n}\left( x-p\right) +\frac{g^{\prime }\left( p\right) ^{n-1}\left( g^{\prime }\left( p\right) ^{n}-1\right) g^{\prime \prime }\left( p\right) }{2\left( g^{\prime }\left( p\right) -1\right) }\left( x-p\right) ^{2}+\frac{Q}{6\left( g^{\prime }\left( p\right) -1\right) ^{2}\left( g^{\prime }\left( p\right) +1\right) }\left( x-p\right) ^{3}+O\left( x-p\right) ^{4}[/tex]
    where
[tex]Q = g^{\prime }\left( p\right) ^{n-2}\left( g^{\prime }\left( p\right) ^{n}-1\right) \left( \left( 3g^{\prime \prime }\left( p\right) ^{2}+\left( g^{\prime }\left( p\right) -1\right) g^{\prime }\left( p\right) g^{\prime \prime \prime }\left( p\right) \right) g^{\prime }\left( p\right) ^{n}+\left( \left( g^{\prime }\left( p\right) -1\right) g^{\prime \prime \prime }\left( p\right) -3g^{\prime \prime }\left( p\right) ^{2}\right) g^{\prime }\left( p\right) \right) [/tex].

    This was a result of getting the first few terms of the solution to Schröder's equation.

    Then I thought there might be another way and there is.

    It's easier to directly compute the derivatives of [tex]g^{n}[/tex] and evaluate at p because [tex]g^{k}(p)=p[/tex]. I get the same first terms as above if I recall that [tex]1+x+...+x^{m-1}=\frac{x^{m}-1}{x-1}[/tex].

    So now the question reduces to this:
    is there a formula for the kth derivative of the nth iterate of g evaluated at p?

    Fix n and let [tex]h\left( x\right) =g^{n}\left( x\right) [/tex]. If we knew the kth derivative of h then we could write out a formula for the series. A secondary goal is to use the geometric sum formula and whatever else to reduce the coefficients of the series for h into a form that does not involve a sum depending on n.

Then we will have the ability to rule the universe, or at least fractionally iterate g. In short, the formula should be something in which we can let n be any real number.
     