Picard's method of successive approximation

matematikuvol
$$ y(t) = y_0 + \int_{t_0}^{t} f(s, y(s)) \, ds. \tag{2} $$

Picard’s method starts with the definition of what it means to be a solution: if you guess that a function
φ(t) is a solution, then you can check your guess by substituting it into the right-hand side of equation (2) and
comparing it to the left-hand side, which is simply φ(t) itself. The new idea is that the process of checking
each guess produces a new guess which, even if it is not the correct solution, is a better approximation
than the one you started with. In this way we obtain an iterative solution, with each new approximation
computed by substituting the previous one into the right-hand side of equation (2). This should be reminiscent of
Newton’s method. In fact, the proof that Picard’s method produces a convergent sequence is similar to the
proof for Newton’s method.
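To see the iteration concretely, here is a minimal sketch (my own illustration, not part of the original post) that carries out the Picard iteration symbolically with sympy for the test problem y' = y, y(0) = 1, whose iterates are the Taylor partial sums of e^t; the function name picard_iterate is mine.

```python
# Minimal sketch: symbolic Picard iteration for y' = f(t, y), y(t0) = y0,
# illustrated on y' = y, y(0) = 1 (an assumed test problem, not from the thread).
import sympy as sp

t, s = sp.symbols('t s')

def picard_iterate(f, y0, t0, n):
    """Return the n-th Picard iterate phi_n(t)."""
    phi = sp.sympify(y0)          # phi_0: the constant initial guess y0
    for _ in range(n):
        # phi_{k+1}(t) = y0 + integral from t0 to t of f(s, phi_k(s)) ds
        phi = sp.expand(y0 + sp.integrate(f(s, phi.subs(t, s)), (s, t0, t)))
    return phi

# For f(t, y) = y the iterates are 1, 1 + t, 1 + t + t**2/2, ... -> exp(t).
for n in range(5):
    print(n, picard_iterate(lambda tt, yy: yy, 1, 0, n))
```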

How can I be sure that the nth approximation is better than the (n-1)th one? Is there some easy way to see this? Is there a case where this is incorrect?
 
The point is that you can establish a uniformly convergent sequence U_n (of functions) that converges to a function U (a differentiable function, in fact). Since the sequence converges uniformly and f is continuous, the limit can be passed through the integral, so
$$ U_n(t) = y_0 + \int_{t_0}^{t} f(s, U_{n-1}(s)) \, ds \;\longrightarrow\; U(t) = y_0 + \int_{t_0}^{t} f(s, U(s)) \, ds. $$
The limit function U is therefore a solution of the differential equation, just by virtue of checking. That is how the approximations work: they converge to the solution, and that is what it means for them to be "better" each time. In other words, pick a large n and || U_n - U || < (some small epsilon). For the existence and uniqueness theorem to work, f must be Lipschitz in X and continuous in t (where f is of the form f(t, X)).
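As a rough numerical check of that last statement (my own sketch, again using the test problem y' = y, y(0) = 1, which is not from the post), one can sample the sup-norm distance between the n-th iterate and the exact solution e^t on [0, 1] and watch it shrink:

```python
# Sketch: estimate || U_n - U || on [0, 1] for y' = y, y(0) = 1, where U = exp(t).
import sympy as sp

t, s = sp.symbols('t s')
exact = sp.exp(t)

phi = sp.Integer(1)                                    # U_0: constant initial guess
for n in range(1, 7):
    phi = 1 + sp.integrate(phi.subs(t, s), (s, 0, t))  # U_n = y0 + integral of f(s, U_{n-1}(s))
    # crude sup-norm estimate by sampling 51 points of [0, 1]
    err = max(abs(float((exact - phi).subs(t, k / 50))) for k in range(51))
    print(n, err)
```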
For example, a differential equation that doesn't satisfy these requirements is X' = X^(1/3) with the initial condition X(0) = 0. The right-hand side is not Lipschitz in X at 0, and two solutions can be found through that point: one by separation of variables, and the other by taking X to be the zero function.
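A quick symbolic check of that example (my own sketch, not from the post) that both candidate solutions satisfy X' = X^(1/3) with X(0) = 0:

```python
# Sketch: verify two distinct solutions of X' = X**(1/3), X(0) = 0 (for t >= 0).
import sympy as sp

t = sp.symbols('t', positive=True)

candidates = [
    sp.Integer(0),                                # the zero function
    (sp.Rational(2, 3) * t) ** sp.Rational(3, 2)  # from separation of variables
]
for X in candidates:
    residual = sp.simplify(sp.diff(X, t) - X ** sp.Rational(1, 3))
    print(X, "-> residual:", residual)            # both residuals simplify to 0
```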
 
And what is X?
 
matematikuvol said:
And what is X?

We can consider a differential equation/system as a function f with a domain D in R^(n+1), so that
$$ f\big(t, y_1(t), y_2(t), \dots, y_n(t)\big) = \big(y_1'(t), y_2'(t), \dots, y_n'(t)\big) = X', \qquad \text{where } X = \big(y_1(t), y_2(t), \dots, y_n(t)\big). $$
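For a concrete instance (my own example, not from the post), the second-order equation y'' + y = 0 becomes a first-order system in this notation by setting X = (y, y'):
$$ X = (y_1, y_2) = (y, y'), \qquad X' = (y_1', y_2') = (y_2, -y_1) = f(t, X). $$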
 
I'm not sure that each one is actually better than the last. Often, sequences converge, but they alternate between getting better and worse as they do so.

Out of laziness, I might be a little vague here, but if nothing else, this will convey the flavor of how I think about it.

I like to think of this as a fixed point problem.

You have a vector field that you want to integrate. You start out with some guess as to what the solution is, maybe a constant flow. Then you integrate all the vectors along that guess (sort of add them up). If the curve obtained by that construction is the curve itself, i.e., a fixed point, it solves the ODE because the tangent vector at each point agrees with the vector field.

Under the condition that the vector field is locally Lipschitz, this process yields a contraction mapping on a space of curves. That is, each application of the map brings two curves closer together by some fixed factor C less than 1 (the distance between two curves here is defined as the maximum distance between them over all time t, i.e., the sup norm). So after one step the curves can differ by no more than C times their original distance, after the next step by no more than C^2 times, after the next by no more than C^3 times, and so on. Add all these up and you get a convergent geometric series. The only way this can happen is for the curves to converge to a fixed point (a fixed point because it is the result of iterating a mapping from an appropriate set of curves to itself). And as we said, a fixed point solves the ODE.
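To spell out the geometric-series step (a standard estimate, written in my own notation): if the Picard map is a contraction with constant C < 1 in the sup norm, then successive iterates satisfy
$$ \| U_{n+1} - U_n \|_\infty \le C \, \| U_n - U_{n-1} \|_\infty \le \cdots \le C^{\,n} \, \| U_1 - U_0 \|_\infty, $$
and hence, for m > n,
$$ \| U_m - U_n \|_\infty \le \sum_{k=n}^{m-1} \| U_{k+1} - U_k \|_\infty \le \frac{C^{\,n}}{1 - C} \, \| U_1 - U_0 \|_\infty \longrightarrow 0 \quad (n \to \infty), $$
so the iterates form a Cauchy sequence in the sup norm and converge to the fixed point.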
 