
Taylor method for IVPs

  1. Oct 31, 2008 #1
    Consider the IVP:
    [tex]
    \left. \begin{array}{l}
    \frac {dy} {dx} = f(x,y) \\
    y( x_{0} ) = y_{0}
    \end{array} \right\} \mbox{ze IVP :p}
    [/tex]

    Hypothesis:
[tex] f \in C^\infty_{x,y}(D)\; \; / \; \;(x_0,y_0)\in D [/tex]

[Note that this condition automatically satisfies the hypotheses of the Existence and Uniqueness Theorem (i.e., [tex] f \in C_{x,y}(D)[/tex] and [tex]f \in L_y(D)[/tex] ([tex] L [/tex] = Lipschitz in [tex]y[/tex])), hence we know that a unique solution to the IVP does indeed exist, at least in a certain interval centred at [tex] x_0 [/tex], namely [tex] |x-x_0| \leq h [/tex]].

So then, let [tex]y(x)[/tex] be the solution to the IVP (at least in the interval [tex] |x-x_0| \leq h [/tex]). Then we know that in that interval the ODE is satisfied, and so,

    [tex]\frac {dy} {dx} = f(x,y) \; \; \rightarrow \; \; y'(x_0)=f(x_0,y(x_0))=f(x_0,y_0)[/tex]

    [tex]\frac {d^2y} {dx^2} = \frac {d} {dx} f(x,y)= \frac {\partial f} {\partial x} \frac {dx} {dx} \; + \; \frac {\partial f} {\partial y} \frac {dy} {dx} \\
    \indent \rightarrow \; \; y''(x_0)= \frac {\partial f} {\partial x}|_{(x_0,y(x_0))} \; + \; \frac {\partial f} {\partial y}|_{(x_0,y(x_0))}\cdot f(x_0,y_0) [/tex]

[tex]\frac {d^3y} {dx^3} = \frac {d} {dx} \left( \frac {d^2y} {dx^2} \right) = \frac {\partial ^2f} {\partial x^2} \; + \; 2\frac {\partial^2 f} {\partial x \partial y } \frac {dy} {dx} \; + \; \frac {\partial ^2f} {\partial y^2} \left( \frac {dy} {dx} \right)^2 \; + \; \frac {\partial f} {\partial y} \frac {d^2y} {dx^2} \\ = \frac {\partial ^2f} {\partial x^2} \; + \; 2f\cdot \frac {\partial^2 f} {\partial x \partial y } \; + \; f^2 \frac {\partial ^2f} {\partial y^2} \; + \; \frac {\partial f} {\partial y} \left( \frac {\partial f} {\partial x} \; + \; f\cdot \frac {\partial f} {\partial y} \right) \\ \;
\indent \rightarrow \; \; [/tex] evaluate at [tex](x_0,y_0)[/tex] to get [tex]y'''(x_0)[/tex] -wipes brow-


    [Note that for the derivatives of y to exist we need the derivatives of all orders of [tex]f [/tex] to exist, at least in the small region around [tex](x_0, y_0)[/tex], which justifies the hypothesis of the method].
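As a sanity check on that chain-rule expansion of [tex]y'''[/tex], here is a quick symbolic sketch in Python with SymPy. The choice [tex]f(x,y) = y^2[/tex] is mine (not from the thread), picked because the IVP [tex]y' = y^2[/tex] has the known solution [tex]y = 1/(C-x)[/tex], for which [tex]y''' = 6/(C-x)^4 = 6y^4[/tex]:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Concrete right-hand side: y' = y**2 has solution y(x) = 1/(C - x),
# whose third derivative is 6/(C - x)**4 = 6*y**4.
f = y**2

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fxy, fyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)

# y''' along a solution, after substituting y' = f and y'' = f_x + f*f_y:
y3 = fxx + 2*f*fxy + f**2*fyy + fy*(fx + f*fy)

print(sp.simplify(y3))  # 6*y**4, matching the exact solution
```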

    So! We can then construct the Taylor series:

    [tex]y(x_0)+y'(x_0)\cdot (x-x_0) + y''(x_0)\cdot \frac {(x-x_0)^2}{2!} + . . . [/tex]
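In practice one truncates this series and steps forward repeatedly. A minimal order-2 Taylor-method sketch in Python (the test problem [tex]y' = y[/tex], [tex]y(0)=1[/tex] is my own choice, just so there is a known exact solution [tex]e^x[/tex] to compare against):

```python
import math

def taylor2_step(f, fx, fy, x, y, h):
    """One step of the order-2 Taylor method:
    y(x+h) ~ y + h*y' + (h**2/2)*y'',
    with y' = f(x, y) and y'' = f_x + f_y*f from the chain rule above."""
    yp = f(x, y)
    ypp = fx(x, y) + fy(x, y) * yp
    return y + h * yp + 0.5 * h * h * ypp

# Test problem (my choice): y' = y, y(0) = 1, exact solution y(x) = e^x.
f  = lambda x, y: y
fx = lambda x, y: 0.0
fy = lambda x, y: 1.0

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):            # integrate from x = 0 to x = 1
    y = taylor2_step(f, fx, fy, x, y, h)
    x += h

print(y, math.e)               # y is close to e = 2.71828...
```

The global error of this truncation is O(h²), so with h = 0.1 the result agrees with e to about two decimal places.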

    BUT!!

1) How do you know that the Taylor series converges around x0? (Do all Taylor series converge in some interval around x0? :P )
2) Okay, so assuming the Taylor series converges in some interval around x0, how do we know that it is equal to the function [tex]y(x)[/tex], the solution of the IVP? That [tex]y(x) \in C^\infty _x [/tex] does NOT mean that [tex]y(x)[/tex] is analytic! :>
    PALEEZE HELP!! 0:
     
  3. Nov 1, 2008 #2

    HallsofIvy


Yes. You should have learned back in calculus that a power series always has a radius of convergence (which may be 0; but assuming the differential equation is valid in some interval, the power series must converge to that function there).

    True, but the "existence" part of the "existence and uniqueness" theorem assures us there must be an analytic solution to the differential equation and the "uniqueness" part assures us it must be equal to the series derived from the equation.
     
  4. Nov 1, 2008 #3
    HallsofIvy, thank you very much for your help.

Now, about your reply to my first question:
I'm sorry, but I do not quite see your logic :S Okay, all power series have a radius of convergence (which may be 0, as you pointed out). Basically all I've done is construct a power series by determining the derivatives of y(x) (the solution to the IVP, which is unknown to me) through the relationship the ODE provides between the derivatives of y and those of f. In the general case, all I will be able to do is calculate a few terms of this series (or as many as I want), but not identify it with a known function (unless I am lucky :P). And, in general, I will not be able to deduce a formula for the general (nth) term of the series, so from this alone I can't determine its radius of convergence.

    But I think the answer I'm looking for lies in your reply to my second question:

    Okay! let me just paraphrase you to see if I understand:P Thanks to the "existence" part of the "existence and uniqueness" theorem, I know that there is a solution to the IVP in an interval around x0, which is what I've called y(x) and the Taylor series I constructed was using the derivatives of the solution y(x) evaluated at x0.

    How do I know that this solution, y(x), is analytic?

I know that this solution, y(x), is infinitely differentiable at least in an interval around x0, because that was the hypothesis of Taylor's method (well, indirectly, through the smoothness of f and the fact that y(x) satisfies the ODE). But that is not sufficient to ensure that y(x) is analytic!

IF I knew that y(x) was analytic in a certain interval around x0, then I could say that the Taylor series constructed with its derivatives equals the function itself for all x in that interval. So yes! . . . but all I have is a finite number of terms of the series, and I don't know the function y(x) itself. How then can I say that y(x) is analytic? :(

Is this where we make use of the "uniqueness" part of the "existence and uniqueness" theorem? If the series whose terms we calculate with this method is indeed a solution to the IVP in an interval surrounding x0, then it must equal y(x) in that interval, since y(x) is the solution. But how do I know that the series we obtain with this method is indeed a solution to the IVP in a certain interval around x0? :(

    Something just seems to be missing to be able to make deductions :(


Let me try to illustrate my point. For example, suppose a known function y(x) is infinitely differentiable but not analytic around x0. Now suppose we write down (i.e. invent) a first-order ODE that is satisfied by y(x) (we can do this, no?), and suppose that y(x0) = y0.

    Now suppose we construct the Taylor Series of y(x) centred around x0.

But because y(x) is not analytic, it does not equal its Taylor series in any interval around x0. So the series is NOT a solution to the IVP. Yet it is the same series we would have constructed using Taylor's method for solving the IVP, had we not known y(x) explicitly.
    Isn't this scenario possible?
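(The standard example of such a function is g(x) = exp(-1/x²), extended by g(0) = 0: every derivative of g vanishes at 0, so its Taylor series at 0 is identically zero even though g is not. A quick SymPy check of that fact, purely as an illustration of my own:)

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(-1/x**2)   # extended by g(0) = 0; smooth but not analytic at 0

# Every derivative of g tends to 0 as x -> 0, so every Taylor
# coefficient of g at 0 is 0:
for n in range(4):
    print(n, sp.limit(sp.diff(g, x, n), x, 0))   # each limit is 0

# Yet g is not the zero function near 0:
print(g.subs(x, sp.Rational(1, 2)))   # exp(-4), not 0
```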

Because if this is possible, then, in our case, everything is the same except that we do not know y(x). We can construct its Taylor series. But how do we know y(x) is analytic, or how do we know that the series is indeed a solution to the IVP in some small interval surrounding x0? Because if y(x) were not analytic, we'd be in the same scenario as in the example I illustrated, no?

    I think I am missing something :(
     
    Last edited: Nov 1, 2008