bolbteppa
The explanation below illustrates why I think the method of successive approximations is merely a sneaky way of working with power series when you're not formally allowed to use a Taylor series expansion for a function (i.e. when one doesn't exist, as in proving the existence theorem for ODEs with merely continuous right-hand sides). My question asks someone who sees this as obvious to give a nice explanation of why the method of successive approximations is just the Taylor series argument below in disguise, using either ODEs or the implicit function theorem as a model:
The inverse function theorem is usually proven using contraction mappings, Banach's fixed point theorem and successive approximation. From this the implicit function theorem is derived, and the exact same proof method is used to prove the existence and uniqueness of solutions of ODEs of the form [itex]y' = f(x,y)[/itex] when [itex]f[/itex] is continuous/Lipschitz. I've always found this method extremely unintuitive in practice.
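For reference, the contraction in question (as in the standard proof; I'm stating it from memory, so take the details as a sketch): to solve [itex]f(x) = y[/itex] near a point [itex]a[/itex] where [itex]f'(a)[/itex] is invertible, one iterates
[tex]\phi_y(x) = x + f'(a)^{-1}\left(y - f(x)\right),[/tex]
whose fixed points are exactly the solutions of [itex]f(x) = y[/itex]. Since [itex]\phi_y'(x) = I - f'(a)^{-1}f'(x)[/itex] is small in norm for [itex]x[/itex] near [itex]a[/itex], [itex]\phi_y[/itex] is a contraction on a small ball and Banach's theorem applies.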
Interestingly, the implicit function theorem can be proven for analytic functions (as in classical books) using a really intuitive Taylor series argument, which only amounts to expanding [itex]F(x,y)[/itex] in a Taylor series
[tex]F(x,y) = a_{10}x + b_{01}y + ... = 0[/tex]
solving for [itex]y[/itex],
[tex]y = a^*_{10}x + a^*_{20}x^2 + a^*_{11}xy + ...[/tex]
assuming a solution of the form
[tex]y = c_1x + c_2x^2 + ...[/tex]
subbing it in, solving for the coefficients, then just showing that
[tex]y = c_1x + c_2x^2 + ... = a^*_{10}x + a^*_{20}x^2 + a^*_{11}xy + ... [/tex]
is bounded (majorized) by a double geometric series, which converges within its radius of convergence. This is baby calculus stuff & fully rigorous (at least to me), thus the implicit and inverse function theorems have extremely intuitive proofs when you allow for Taylor series.
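As a minimal worked instance of the coefficient matching described above (extending the notation of the expansion with assumed higher-order coefficients [itex]a_{20}, a_{11}, b_{02}[/itex], and assuming [itex]b_{01} \neq 0[/itex]): substituting [itex]y = c_1x + c_2x^2 + ...[/itex] into [itex]F(x,y) = a_{10}x + b_{01}y + a_{20}x^2 + a_{11}xy + b_{02}y^2 + ... = 0[/itex] and collecting powers of [itex]x[/itex] gives
[tex]x^1: \quad a_{10} + b_{01}c_1 = 0 \implies c_1 = -\frac{a_{10}}{b_{01}},[/tex]
[tex]x^2: \quad a_{20} + a_{11}c_1 + b_{02}c_1^2 + b_{01}c_2 = 0 \implies c_2 = -\frac{a_{20} + a_{11}c_1 + b_{02}c_1^2}{b_{01}},[/tex]
and so on, each [itex]c_n[/itex] determined recursively by the previous ones.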
Amazingly, the *exact same proof* ([url=http://archive.org/stream/differentialequa028961mbp#page/n3/mode/2up]pages 45 - 58[/url] if needed) is used to prove that a solution to an ODE of the form [itex]y' = f(x,y)[/itex] exists when [itex]f[/itex] is analytic.
In other words, the method of successive approximations seems to be merely a surrogate for a power series argument when no power series is available, so there must be a way to view successive approximations as 'the next best thing' to invoking Taylor series. For instance, the Taylor argument can't be invoked in the ODE proof when [itex]f(x,y)[/itex] is merely continuous, so successive approximations are used instead. Would someone mind illustrating why the method of successive approximations is just the above Taylor series argument in disguise, one that works even when [itex]f[/itex] is not analytic but merely continuous? Thanks!
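For what it's worth, the connection is easy to see concretely in the simplest case: for [itex]y' = y,\ y(0)=1[/itex], the Picard iterates [itex]y_{n+1}(x) = 1 + \int_0^x y_n(t)\,dt[/itex] are exactly the Taylor partial sums of [itex]e^x[/itex]. A minimal sketch with sympy (the function name `picard` and this setup are mine, not from the sources above):

```python
# Sketch: Picard iteration y_{n+1}(x) = y0 + integral_0^x f(t, y_n(t)) dt,
# computed symbolically to compare the iterates with Taylor partial sums.
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, y0, steps):
    """Return the successive approximations [y_0, y_1, ..., y_steps]."""
    y = sp.Integer(y0)
    iterates = [y]
    for _ in range(steps):
        # Each step integrates the previous iterate through f.
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, 0, x))
        iterates.append(sp.expand(y))
    return iterates

# For f(x, y) = y the iterates are the Taylor partial sums of exp(x):
# 1, 1 + x, 1 + x + x**2/2, 1 + x + x**2/2 + x**3/6, ...
its = picard(lambda t_, y_: y_, 1, 4)
```

For analytic [itex]f[/itex] each iteration step fixes more and more low-order Taylor coefficients, which is exactly the coefficient-matching argument; the iteration just keeps making sense when [itex]f[/itex] is only continuous.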