How Does Taylor's Method Apply to Initial Value Problems?

In summary: the thread asks whether the Taylor series built term by term from the IVP y' = f(x, y), y(x0) = y0 (with f smooth near (x0, y0)) converges in some interval around the centre x0, and whether it converges to the solution y(x). The reply argues that a power series always has a radius of convergence and that existence and uniqueness let one identify the series with the solution; the follow-up post questions this, since y(x) being C^∞ does not by itself make y(x) analytic.
  • #1
BobbyBear
Consider the IVP:
[tex]
\left. \begin{array}{l}
\frac {dy} {dx} = f(x,y) \\
y( x_{0} ) = y_{0}
\end{array} \right\} \mbox{ze IVP :p}
[/tex]

Hypothesis:
[tex] f(x,y)\in C^\infty_{x,y}(D)\; \; / \; \;(x_0,y_0)\in D [/tex]

[Note that this condition automatically satisfies the hypotheses of the Existence and Uniqueness Theorem (i.e., [tex] f \in C_{x,y}(D)[/tex] and [tex]f \in L_y(D)[/tex], where [tex] L [/tex] denotes Lipschitz continuity in [tex]y[/tex]), hence we know that a unique solution to the IVP does indeed exist, at least on some interval [tex] |x-x_0| \leq h [/tex] centred at [tex]x_0[/tex]].

So then, let [tex]y(x)[/tex] be the solution to the IVP (at least on that interval [tex]|x-x_0| \leq h [/tex]). Then we know that on that interval the ODE is satisfied, and so,

[tex]\frac {dy} {dx} = f(x,y) \; \; \rightarrow \; \; y'(x_0)=f(x_0,y(x_0))=f(x_0,y_0)[/tex]

[tex]\frac {d^2y} {dx^2} = \frac {d} {dx} f(x,y)= \frac {\partial f} {\partial x} \frac {dx} {dx} \; + \; \frac {\partial f} {\partial y} \frac {dy} {dx} \\
\indent \rightarrow \; \; y''(x_0)= \frac {\partial f} {\partial x}|_{(x_0,y(x_0))} \; + \; \frac {\partial f} {\partial y}|_{(x_0,y(x_0))}\cdot f(x_0,y_0) [/tex]

[tex]\frac {d^3y} {dx^3} = \frac {d} {dx} (\frac {d^2y} {dx^2}) = \frac {\partial ^2f} {\partial x^2} \frac {dx} {dx} \; + \; \frac {\partial^2 f} {\partial x \partial y} \frac {dy} {dx} \; + \; \frac {\partial^2 f} {\partial x \partial y } \frac {dy} {dx} \; + \; \frac {\partial ^2f} {\partial y^2}(\frac {dy} {dx})^2 \; + \; \frac {\partial f} {\partial y} \frac {d^2y} {dx^2}\\ = \frac {\partial ^2f} {\partial x^2} \; + \; f\cdot \frac {\partial^2 f} {\partial x \partial y} \; + \; \frac {\partial f} {\partial y} (\frac {\partial f} {\partial x} \; + \; f\cdot \frac {\partial f} {\partial y}) \; + \; f\cdot \frac {\partial^2 f} {\partial x \partial y } \; + \; f^2 \frac {\partial ^2f} {\partial y^2} \\ \;
\indent \rightarrow \; \; [/tex] get [tex]y'''(x_0)[/tex] -wipes brow-


[Note that for the derivatives of y to exist we need the derivatives of all orders of [tex]f [/tex] to exist, at least in the small region around [tex](x_0, y_0)[/tex], which justifies the hypothesis of the method].

So! We can then construct the Taylor series:

[tex]y(x_0)+y'(x_0)\cdot (x-x_0) + y''(x_0)\cdot \frac {(x-x_0)^2}{2!} + . . . [/tex]
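[As an aside: the derivative bookkeeping above can be mechanised symbolically. A minimal sketch in Python using sympy, where the right-hand side f(x, y) = x + y^2 and the data x0 = 0, y0 = 1 are placeholders chosen purely for illustration:]

[code]
# Sketch: generate y'(x0), y''(x0), y'''(x0) by repeatedly taking the total
# derivative d/dx and substituting dy/dx = f(x, y), then assemble the truncated
# Taylor polynomial. The choices f = x + y**2, x0 = 0, y0 = 1 are illustrative only.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
f = x + y(x)**2            # placeholder right-hand side f(x, y)
x0, y0 = 0, 1              # placeholder initial data

derivs = [f]               # derivs[k] is an expression for the (k+1)-th derivative of y
for _ in range(2):         # build expressions for y'' and y'''
    nxt = sp.diff(derivs[-1], x)                  # total derivative in x ...
    nxt = nxt.subs(sp.Derivative(y(x), x), f)     # ... with dy/dx replaced by f(x, y)
    derivs.append(sp.expand(nxt))

# Evaluate the derivatives at (x0, y0) and form the degree-3 Taylor polynomial
vals = [y0] + [d.subs(y(x), y0).subs(x, x0) for d in derivs]
taylor_poly = sum(v * (x - x0)**k / sp.factorial(k) for k, v in enumerate(vals))
print(sp.expand(taylor_poly))
[/code]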

BUT!

1) How do you know that the Taylor series converges around x0 ? (Do all Taylor series converge in some interval around x0? :P )
2) Okay so assuming the Taylor series converges in some interval around x0, how do we know that it is equal to the function [tex]y(x)[/tex] solution of the IVP? That [tex]y(x) \in C^\infty _x [/tex] does NOT mean that [tex]y(x)[/tex] is analytic! :>
PALEEZE HELP! 0:
 
  • #2
BobbyBear said:
1) How do you know that the Taylor series converges around x0 ? (Do all Taylor series converge in some interval around x0? :P )
Yes. You should have learned back in calculus that a power series always has a radius of convergence (which may be 0, but assuming the differential equation is valid in some interval, the power series must converge to that function there).
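[For reference, the fact being invoked: every power series [tex]\sum_{n=0}^{\infty} a_n (x-x_0)^n[/tex] has a radius of convergence [tex]R[/tex], given by the Cauchy–Hadamard formula [tex]1/R = \limsup_{n\to\infty} |a_n|^{1/n}[/tex]; the series converges absolutely for [tex]|x-x_0| < R[/tex] and diverges for [tex]|x-x_0| > R[/tex], with [tex]R[/tex] possibly [tex]0[/tex] or [tex]\infty[/tex].]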

2) Okay so assuming the Taylor series converges in some interval around x0, how do we know that it is equal to the function [tex]y(x)[/tex] solution of the IVP? That [tex]y(x) \in C^\infty _x [/tex] does NOT mean that [tex]y(x)[/tex] is analytic! :>
PALEEZE HELP! 0:
True, but the "existence" part of the "existence and uniqueness" theorem assures us there must be an analytic solution to the differential equation and the "uniqueness" part assures us it must be equal to the series derived from the equation.
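[A reference point for the discussion that follows: the classical analytic existence theorem (Cauchy's method of majorants, the ODE case of the Cauchy–Kovalevskaya theorem) says that if [tex]f[/tex] is analytic in a neighbourhood of [tex](x_0, y_0)[/tex], then the unique solution of the IVP is itself analytic near [tex]x_0[/tex], so its Taylor series does converge to it on some interval. Under the weaker hypothesis that [tex]f[/tex] is merely [tex]C^\infty[/tex], this conclusion is not automatic.]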
 
  • #3
HallsofIvy, thank you very much for your help.

Now, about:
1) How do you know that the Taylor series converges around x0 ? (Do all Taylor series converge in some interval around x0? :P )

Yes. You should have learned back in calculus that a power series always has a radius of convergence (which may be 0, but assuming the differential equation is valid in some interval, the power series must converge to that function there).

I'm sorry, but I do not quite see your logic :S Okay, all power series have a radius of convergence (which may be 0, as you pointed out). Basically, all I've done is construct a power series, based on being able to determine the derivatives of y(x) (the solution to the IVP, which is unknown to me) through the relationship the ODE provides between the derivatives of y and those of f. In the general case, all I will be able to do is calculate a few terms of this series (or as many as I want), but not identify it with a known function (unless I am lucky :P). And, in general, I will not be able to deduce a formula for the general (nth) term of the series, so from the terms alone I cannot determine its radius of convergence.

But I think the answer I'm looking for lies in your reply to my second question:

2) Okay so assuming the Taylor series converges in some interval around x0, how do we know that it is equal to the function [tex] y(x) [/tex] solution of the IVP? That [tex] y(x) \in C^\infty _x [/tex] does NOT mean that [tex] y(x) [/tex] is analytic! :>

True, but the "existence" part of the "existence and uniqueness" theorem assures us there must be an analytic solution to the differential equation and the "uniqueness" part assures us it must be equal to the series derived from the equation.

Okay! Let me just paraphrase you to see if I understand :P Thanks to the "existence" part of the "existence and uniqueness" theorem, I know that there is a solution to the IVP in an interval around x0, which is what I've called y(x), and the Taylor series I constructed uses the derivatives of that solution y(x) evaluated at x0.

How do I know that this solution, y(x), is analytic?

I know that this solution, y(x), is infinitely differentiable at least in an interval around x0, because that is what the hypothesis of Taylor's method gives (indirectly, through the [tex]C^\infty[/tex] smoothness of f and the fact that y(x) satisfies the ODE). But that is not sufficient to ensure that y(x) is analytic!

IF I knew that y(x) were analytic in a certain interval around x0, then I could say that the Taylor series constructed from its derivatives equals the function itself for all x in that interval. So yes! . . . but all I have is a finite number of terms of the series, and I don't know the function y(x) itself. How then can I say that y(x) is analytic? :(

Is this where we make use of the "uniqueness" part of the "existence and uniqueness" theorem? If the series whose terms we calculate with this method is indeed a solution to the IVP in an interval surrounding x0, then it must equal y(x) in that interval, since y(x) is the solution. But! How do I know that the series we are obtaining with this method is indeed a solution to the IVP in a certain interval around x0? :(

Something just seems to be missing to be able to make deductions :(

Let me try to illustrate my point. For example, suppose a known function y(x) is infinitely differentiable but not analytic around x0. Now suppose we write down (i.e. invent) a first-order ODE that is satisfied by y(x) (we can do this, no?), and suppose that y(x0) = y0.

Now suppose we construct the Taylor Series of y(x) centred around x0.

But because y(x) is not analytic, it does not equal its Taylor series in any interval around x0. So the series is NOT a solution to the IVP. Yet it is the same series we would have constructed using Taylor's method for solving the IVP had we not known y(x) explicitly.
Isn't this scenario possible?
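[To make this scenario concrete (a standard example, added purely for illustration, with x0 = 0 and y0 = 0): take
[tex] y(x) = \begin{cases} e^{-1/x^2}, & x \neq 0 \\ 0, & x = 0 \end{cases} [/tex]
and the IVP [tex]y' = g(x), \; y(0) = 0[/tex], where [tex]g(x) = \frac{2}{x^3} e^{-1/x^2}[/tex] for [tex]x \neq 0[/tex] and [tex]g(0) = 0[/tex]. Here [tex]g \in C^\infty(\mathbb{R})[/tex] (though it is not analytic at 0), the IVP has the unique solution y(x) above, yet every derivative of y vanishes at 0, so the Taylor series produced by the method is identically zero and agrees with y(x) only at x = 0.]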

Because if this is possible, then, in our case, everything is the same except that we do not know y(x). We can construct its Taylor series. But how do we know y(x) is analytic, or how do we know that the series is indeed a solution to the IVP in some small interval surrounding x0? Because if y(x) were not analytic, we'd be in the same scenario as in the example I illustrated, no?

I think I am missing something :(
 

What is the Taylor method for IVPs?

The Taylor method for IVPs is a numerical method used to solve initial value problems (IVPs) in differential equations. It involves using a Taylor series expansion to approximate the solution at a specific point in the domain.

What are some advantages of using the Taylor method for IVPs?

One advantage is that it is a very accurate method, especially for problems with smooth solutions. It also allows flexibility in choosing the step size and the order, making it useful for problems with varying solution behaviour. In addition, the same Taylor-expansion idea underlies the derivation of both single-step and multi-step methods.

What are the limitations of the Taylor method for IVPs?

One limitation is that it can be computationally expensive, especially for higher-order methods. It also requires the higher-order derivatives of the solution, obtained by repeatedly differentiating f(x, y), which can be tedious or impractical to compute. Additionally, it may not be stable for certain types of problems.

How do you implement the Taylor method for IVPs?

To implement the Taylor method for IVPs, you first express the derivatives of the solution in terms of f and its partial derivatives, exactly as in the posts above. At each step you evaluate these derivatives at the current point, then advance the solution using the truncated Taylor expansion with a chosen step size and number of terms.
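A minimal sketch of the order-2 version in Python (the callables f, fx, fy for the right-hand side and its partial derivatives, and the function name taylor2, are illustrative choices, not a standard API):

[code]
# Order-2 Taylor step: y_{n+1} = y_n + h*y' + (h^2/2)*y'',
# with y' = f(x, y) and y'' = f_x(x, y) + f_y(x, y) * f(x, y).
def taylor2(f, fx, fy, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        yp = f(x, y)                      # y'  from the ODE
        ypp = fx(x, y) + fy(x, y) * yp    # y'' via the chain rule
        y = y + h * yp + 0.5 * h**2 * ypp
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys
[/code]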

How do you assess the accuracy of the Taylor method for IVPs?

The accuracy of the Taylor method for IVPs can be assessed by comparing the approximate solution with the exact solution, when the latter is known. One distinguishes the local truncation error, the error committed in a single step starting from exact data, from the global truncation error, the accumulated difference between the exact and approximate solutions over the whole interval of integration; for the order-p Taylor method these behave like O(h^(p+1)) and O(h^p) respectively.
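For instance, continuing the hypothetical taylor2 sketch above on the test problem y' = y, y(0) = 1 (exact solution e^x), the global error can be estimated by comparing against the exact solution at the grid points:

[code]
import math

# Test problem y' = y, y(0) = 1, with exact solution exp(x).
f  = lambda x, y: y
fx = lambda x, y: 0.0   # partial derivative of f with respect to x
fy = lambda x, y: 1.0   # partial derivative of f with respect to y

xs, ys = taylor2(f, fx, fy, x0=0.0, y0=1.0, h=0.1, n_steps=10)
errors = [abs(yk - math.exp(xk)) for xk, yk in zip(xs, ys)]
print("max global error on [0, 1]:", max(errors))   # roughly O(h^2) for this method
[/code]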
