An issue with solving an IVP by Taylor Series


Discussion Overview

The discussion revolves around the use of Taylor Series to solve an Initial Value Problem (IVP) defined by a differential equation and an initial condition. Participants explore the implications of analyticity of the function involved and the convergence properties of the Taylor Series, questioning the validity of the method and the relationship between the series and the actual solution of the IVP.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant asserts that if the function f(x,y) is analytic at (x0,y0), then the Taylor Series can be constructed and has a positive radius of convergence.
  • Another participant questions how to ensure that the Taylor Series has a non-zero radius of convergence and whether it represents the actual solution to the IVP within that radius.
  • Concerns are raised about the possibility that the solution to the IVP may not be analytic at x0, despite being infinitely differentiable, which could affect the validity of the Taylor Series representation.
  • A later reply attempts to demonstrate that if the Taylor Series has a positive radius of convergence, it converges to an analytic function u(x) that satisfies the IVP in a neighborhood of x0.
  • This reply also discusses the differentiation of the Taylor Series term by term and how it relates to the original differential equation, suggesting that the series indeed represents the solution to the IVP under certain conditions.

Areas of Agreement / Disagreement

Participants express uncertainty regarding the convergence of the Taylor Series and its relationship to the solution of the IVP. While some propose that the series can represent the solution under specific conditions, others remain skeptical about the implications of analyticity and the uniqueness of the solution.

Contextual Notes

Limitations include the dependence on the analyticity of f(x,y) and the assumptions regarding the radius of convergence of the Taylor Series. The discussion does not resolve whether the solution to the IVP is necessarily analytic at x0.

BobbyBear
Okay so suppose I have the Initial Value Problem:

\left. \begin{array}{l}
\frac{dy}{dx} = f(x,y) \\
y(x_0) = y_0
\end{array} \right\} \; \mbox{IVP}

NB. I am considering only real functions of real variables.

If f(x,y) is analytic at (x0,y0), then we can construct its Taylor Series centered at the point (x0,y0); the series has a positive radius of convergence, and within that radius of convergence the function f(x,y) equals its Taylor Series.

f(x,y) being analytic at (x0,y0) also means that the IVP has a unique solution in a neighbourhood of x0, by the Existence and Uniqueness Theorem (Picard–Lindelöf). Let y(x) be the function that satisfies the IVP in a neighbourhood of x0.

The idea behind the Taylor Series method is to use the differential equation and initial condition to find the Taylor coefficients of y(x):

\frac{dy}{dx} = f(x,y) \;\;\rightarrow\;\; y'(x_0) = f(x_0, y(x_0)) = f(x_0, y_0)

\frac{d^2y}{dx^2} = \frac{d}{dx}\, f(x,y) = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\,\frac{dy}{dx}
\;\;\rightarrow\;\; y''(x_0) = \frac{\partial f}{\partial x}\Big|_{(x_0,y_0)} + \frac{\partial f}{\partial y}\Big|_{(x_0,y_0)} \cdot f(x_0,y_0)

etc.
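As a minimal sketch of this coefficient-generating step, here is the recursion carried out for an illustrative example of my own choosing (not one from the thread): y' = x + y with y(0) = 1, whose exact solution is 2eˣ − x − 1, and whose repeated differentiation is easy by hand since y'' = 1 + y' and y⁽ⁿ⁾ = y⁽ⁿ⁻¹⁾ for n ≥ 3.

```python
# Sketch of the Taylor-coefficient recursion for the illustrative IVP
# y' = x + y, y(0) = 1 (exact solution 2e^x - x - 1).  Differentiating the
# ODE repeatedly: y'' = 1 + y', and y^{(n)} = y^{(n-1)} for n >= 3, so the
# derivatives at x0 = 0 come out of a simple loop.
from fractions import Fraction
from math import factorial

def taylor_coeffs(n):
    x0, y0 = 0, 1
    derivs = [y0]                    # y(x0) = y0
    derivs.append(x0 + y0)           # y'(x0) = f(x0, y0) = x0 + y0
    derivs.append(1 + derivs[1])     # y''(x0) = 1 + y'(x0)
    while len(derivs) < n:
        derivs.append(derivs[-1])    # y^{(n)}(x0) = y^{(n-1)}(x0), n >= 3
    # Taylor coefficients y^{(k)}(x0) / k!
    return [Fraction(d, factorial(k)) for k, d in enumerate(derivs[:n])]
```

Against the exact solution this gives y⁽ⁿ⁾(0) = 2 for every n ≥ 2, i.e. coefficients 1, 1, 1, 1/3, 1/12, … = 2/n! from the quadratic term on.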


Obviously we can do this because if f is analytic, all the partial derivatives of f exist at (x0,y0), and by the relationship given by the ODE (which we know y(x) satisfies at least in a neighbourhood of x0), all the derivatives of y exist at x0, that is,
y(x) \in C^{\infty}(x_0)

Okay, so we can construct the Taylor Series of y(x) at x0:
\sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n

But!
1) How do we know that this series has a non-zero radius of convergence?
2) And secondly, if it does have a positive radius of convergence, how do we know it equals the solution of the IVP within that radius? Within its radius of convergence the series certainly represents some function that is analytic at x0, but how do we know that this function is the function y(x) that we called the solution of the IVP? Maybe the solution of the IVP is not analytic at x0 (even though it is infinitely differentiable there), in which case its Taylor series (the one we constructed) does not represent it on any neighbourhood of x0. After all, the existence and uniqueness theorem neither requires nor states that the solution be analytic.
Um, if we could affirm that the Taylor Series we have constructed is indeed a solution of the IVP within its radius of convergence, then by the uniqueness of the solution, the function it represents must be the solution of the IVP, y(x). But can this be affirmed?
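The worry in point 2 is not hypothetical: the textbook function g(x) = e^(−1/x²) (with g(0) = 0) is C^∞ at 0 with every derivative vanishing there, so its Taylor series at 0 is identically zero and represents g on no neighbourhood of 0. A quick numerical illustration (the names here are my own):

```python
# The classic C-infinity-but-not-analytic function: g(x) = exp(-1/x^2), g(0) = 0.
# Every derivative of g vanishes at 0, so its Taylor series at 0 is the zero
# function -- yet g itself is nonzero at every other point.
from math import exp

def g(x):
    return 0.0 if x == 0 else exp(-1.0 / x**2)

taylor_at_0 = 0.0            # all Taylor coefficients of g at 0 are zero
gap = g(0.5) - taylor_at_0   # exp(-4) ~ 0.018, clearly nonzero
```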

If anyone has any ideas please help. I'm just not happy with this method because, yes, I can compute a number of terms of the Taylor Series to approximate the solution of the IVP, but I have no assurance that the series I am constructing really is the solution of the IVP (nor even that it converges at all for any x near x0).
 
Sometimes I feel I speak in Martian, especially when my posts are long ones :P Did anyone actually get what I was on about? Just curious, as sometimes I myself don't lol :P
 
Yay I think I might be able to (partially) answer my own question xD

-gingerly tries to demonstrate-

Let
T_{y,x_0}(x) = \sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n

be the Taylor series of y(x) that I've constructed using the initial condition and the relationship
given by the o.d.e. If it has a positive radius of convergence R>0 (unfortunately I'll only be able to know
this if I can deduce an expression for the general term of the series), then within its radius of convergence,
it converges to an analytic (at x0) function, let's say u(x):

u(x) = T_{y,x_0}(x) = \sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n = T_{u,x_0}(x) = \sum_{n=0}^{\infty} \frac{u^{(n)}(x_0)}{n!}(x-x_0)^n \ \ \ \ \mbox{for} \ \ |x-x_0| < R


And! And! If f(x,y) is analytic at (x0,y0), then

u(x) = T_{y,x_0}(x) = \sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n

is solution to the I.V.P. in a certain neighbourhood of x0 ! Because, within the Radius of Convergence, you can
differentiate u(x) by differentiating each term of the series and what not :P, so:

u'(x) = \frac{d}{dx}\left(T_{y,x_0}(x)\right) = \frac{dy}{dx}(x_0) + \frac{d^2y}{dx^2}(x_0)\cdot(x-x_0) + \frac{d^3y}{dx^3}(x_0)\cdot\frac{(x-x_0)^2}{2!} + \dots \ \ \ \ \mbox{for} \ \ |x-x_0| < R

and, as f(x,y) is analytic at (x0,y0), then in some neighbourhood of the point (x0,y0), let's say |x-x0|,|y-y0|< R* :

f(x,u) = f(x_0,y_0) + \frac{\partial f}{\partial x}(x_0,y_0)\cdot(x-x_0) + \frac{\partial f}{\partial u}(x_0,y_0)\cdot(u-y_0)
+ \frac{1}{2!}\left[ \frac{\partial^2 f}{\partial x^2}(x_0,y_0)\cdot(x-x_0)^2 + 2\,\frac{\partial^2 f}{\partial x\,\partial u}(x_0,y_0)\cdot(x-x_0)(u-y_0) + \frac{\partial^2 f}{\partial u^2}(x_0,y_0)\cdot(u-y_0)^2 \right] + \dots \ \ \ \ \mbox{for} \ \ |x-x_0|,\,|u-y_0| < R^*

= f(x_0,y_0) + \frac{\partial f}{\partial x}(x_0,y_0)\cdot(x-x_0) + \frac{\partial f}{\partial u}(x_0,y_0)\cdot\left( \frac{du}{dx}(x_0)\cdot(x-x_0) + \frac{1}{2!}\frac{d^2u}{dx^2}(x_0)\cdot(x-x_0)^2 + \dots \right)
+ \frac{1}{2!}\left[ \frac{\partial^2 f}{\partial x^2}(x_0,y_0)\cdot(x-x_0)^2 + 2\,\frac{\partial^2 f}{\partial x\,\partial u}(x_0,y_0)\cdot(x-x_0)\left( \frac{du}{dx}(x_0)\cdot(x-x_0) + \dots \right) + \frac{\partial^2 f}{\partial u^2}(x_0,y_0)\cdot\left( \frac{du}{dx}(x_0)\cdot(x-x_0) + \dots \right)^2 \right] + \dots \ \ \ \ \mbox{for} \ \ |x-x_0| < \min(R, R^*)


= f(x_0,y_0) + \left[ \frac{\partial f}{\partial x}(x_0,y_0) + \frac{\partial f}{\partial u}(x_0,y_0)\cdot\frac{du}{dx}(x_0) \right]\cdot(x-x_0)
+ \frac{1}{2!}\left[ \frac{\partial f}{\partial u}(x_0,y_0)\cdot\frac{d^2u}{dx^2}(x_0) + \frac{\partial^2 f}{\partial x^2}(x_0,y_0) + 2\,\frac{\partial^2 f}{\partial x\,\partial u}(x_0,y_0)\cdot\frac{du}{dx}(x_0) + \frac{\partial^2 f}{\partial u^2}(x_0,y_0)\cdot\left(\frac{du}{dx}(x_0)\right)^2 \right]\cdot(x-x_0)^2 + \dots \ \ \ \ \mbox{for} \ \ |x-x_0| < \min(R, R^*)


= f(x_0,y_0) + \frac{df}{dx}(x_0)\cdot(x-x_0) + \frac{d^2f}{dx^2}(x_0)\cdot\frac{(x-x_0)^2}{2!} + \dots \ \ \ \ \mbox{for} \ \ |x-x_0| < \min(R, R^*)

(which, by construction of the coefficients y^{(n)}(x_0), equals)

= \frac{dy}{dx}(x_0) + \frac{d^2y}{dx^2}(x_0)\cdot(x-x_0) + \frac{d^3y}{dx^3}(x_0)\cdot\frac{(x-x_0)^2}{2!} + \dots \ = \ \frac{d}{dx}\left(T_{y,x_0}(x)\right) \ = \ u'(x)

that is,

T_{y,x_0} (x) \ = \ u(x)

satisfies the o.d.e. for |x-x0| < min(R, R*), and it evidently satisfies the initial condition as well, so it is a solution of the I.V.P. By the uniqueness of the solution, it is the solution of the I.V.P.: u(x) = y(x), which also means that the solution of the I.V.P. is analytic at x0.

So, in summary, if f is analytic at (x0,y0), then provided the Taylor Series of the solution to the I.V.P. y(x) has positive radius of convergence R, it represents the solution to the I.V.P within a neighbourhood of x0, as I have (I hope:P) shown.
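As a quick numerical sanity check of this conclusion, for a case where everything is known in closed form (an illustrative choice of mine, not from the thread): for y' = y, y(0) = 1, the constructed series is Σ xⁿ/n!, and its partial sums do converge to the unique solution eˣ near x0 = 0.

```python
# Sanity check for the illustrative IVP y' = y, y(0) = 1: the Taylor-series
# method yields coefficients y^{(n)}(0)/n! = 1/n!, and the partial sums of
# the series should approach the unique solution e^x near x0 = 0.
from math import exp, factorial

def partial_sum(x, n):
    # first n terms of the constructed series sum_k x^k / k!
    return sum(x**k / factorial(k) for k in range(n))

err = abs(partial_sum(0.5, 15) - exp(0.5))   # remainder of the tail is tiny
```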

I would still like to know whether there is any result affirming that if f is analytic at (x0,y0) (i.e. there exists a neighbourhood R* of (x0,y0) in which the Taylor Series of f represents f), then the solution is analytic too in a certain neighbourhood R of x0. Is there a relationship between R* and R?

The issue of the radius of convergence remains an important aspect, I think, because if there is no guarantee that the Taylor series of the solution actually converges, how can one use a finite number of terms of the series as an approximation to the solution? In general one won't be able to deduce an expression for the general term of the series, so one can't determine its radius of convergence :S
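One concrete data point on the R-versus-R* question (an example I'm supplying, not one from the thread): for y' = y², y(0) = 1, f is entire (R* = ∞), yet the constructed series is Σ xⁿ with radius of convergence only 1, matching the exact solution 1/(1−x), which blows up at x = 1. So R can be strictly smaller than R*. The coefficients satisfy the Cauchy-product recurrence (k+1)·c_{k+1} = Σᵢ cᵢ·c_{k−i}:

```python
# Series coefficients of the illustrative IVP y' = y^2, y(0) = 1, via the
# Cauchy-product recurrence (k+1) c_{k+1} = sum_{i=0}^{k} c_i c_{k-i}.
# They all come out equal to 1, so the series is sum x^k: radius of
# convergence 1, even though f(x, y) = y^2 is entire.
from fractions import Fraction

def coeffs_y_squared(n):
    c = [Fraction(1)]                                   # c_0 = y(0) = 1
    for k in range(n - 1):
        conv = sum(c[i] * c[k - i] for i in range(k + 1))
        c.append(conv / (k + 1))                        # exact rationals
    return c
```

The exact solution 1/(1−x) confirms the finite radius: the solution genuinely ceases to exist at x = 1 even though f is analytic everywhere.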

Any comments/refutations/insights or the like would be welcome xD
Thank you for your time :P
 
John Creighto said:
Wikipedia says that if a function is analytic then it is locally given by a convergent power series.

http://en.wikipedia.org/wiki/Analytic_function

Fanku, John :)
I've always gone by that definition of analyticity (for real functions anyhow, which are what concern me). That is why I equated f(x,u) to its power series (Taylor series) under the assumption that it was analytic xD.
 
