Picard method of successive approximation

  1. Jan 3, 2012 #1
    [tex]y(t) = y_0 + \int_{t_0}^{t} f(s, y(s))\, ds.[/tex]

    Picard’s method starts with the definition of what it means to be a solution: if you guess that a function
    φ(t) is a solution, then you can check your guess by substituting it into the right-hand side of equation (2) and
    comparing it to the left-hand side, which is simply φ(t) itself. The new idea is that the process of checking
    each guess produces a new guess which, even if it is not the correct solution, is a better approximation
    than the one you started with. In this way we obtain an iterative solution, with each new approximation
    computed from the previous one by the right-hand side of equation (2). This should be reminiscent of
    Newton’s method. In fact, the proof that Picard’s method produces a convergent sequence is similar to the
    proof for Newton’s method.

    How can I be sure that the nth approximation is better than the (n-1)th one? Is there an easy way to see this? Is there a case where this fails?
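
    To make the question concrete, here is a minimal sketch (Python with sympy; the problem y' = y, y(0) = 1 is just an illustration, not taken from the quoted text) of the iteration in equation (2). For this particular equation each iterate turns out to be a partial sum of the series for e^t, so every step genuinely improves on the last:

[code]
# Minimal sketch of Picard iteration for y' = y, y(0) = 1.
# Each iterate y_n(t) = 1 + integral_0^t y_{n-1}(s) ds is the
# degree-n Taylor polynomial of e**t.
import sympy as sp

t, s = sp.symbols('t s')
y = sp.Integer(1)          # y_0(t) = y0 = 1, the constant initial guess
for n in range(1, 5):
    # y_n = y0 + integral_0^t f(s, y_{n-1}(s)) ds, with f(s, y) = y
    y = 1 + sp.integrate(y.subs(t, s), (s, 0, t))
    print(n, sp.expand(y))
# prints 1 + t, then 1 + t + t**2/2, and so on (Taylor partial sums of e**t)
[/code]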
     
  3. Jan 3, 2012 #2
    The point is that you can establish a uniformly convergent sequence of functions U_n that converges to a function U (a differentiable function, in fact). Since the convergence is uniform, the limit can be passed through the integral, and because f is continuous the limit also passes inside f, so taking limits in the recursion gives
    [tex]U_n(t) = y_0 + \int_{t_0}^{t} f(s, U_{n-1}(s))\, ds \quad\longrightarrow\quad U(t) = y_0 + \int_{t_0}^{t} f(s, U(s))\, ds.[/tex]
    The limit function U is then clearly a solution of the differential equation, just by checking it against the integral equation. That is how the approximations work.
    (And that they eventually converge to the solution is what it means for them to be "better" approximations each time. In other words, pick a large enough n, and ||U_n - U|| < epsilon for any small epsilon you like.)
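
    A quick numerical illustration of that convergence (a sketch, assuming the toy problem y' = y, y(0) = 1 on [0, 1], where the exact limit is U(t) = e^t):

[code]
# Sup-norm distance between the Picard iterates U_n and the exact
# solution U(t) = e**t of y' = y, y(0) = 1, measured on a grid over [0, 1].
import numpy as np
import sympy as sp

t, s = sp.symbols('t s')
grid = np.linspace(0.0, 1.0, 501)
U = sp.Integer(1)                                    # U_0 = y0 = 1
for n in range(1, 7):
    U = 1 + sp.integrate(U.subs(t, s), (s, 0, t))    # U_n = 1 + integral_0^t U_{n-1}(s) ds
    U_vals = sp.lambdify(t, U, 'numpy')(grid)
    print(n, np.max(np.abs(U_vals - np.exp(grid))))  # ||U_n - U||_sup shrinks toward 0
[/code]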


    For the existence and uniqueness theorem to work, f must be Lipschitz in X and continuous in t (where f is of the form f(t, X)). (I dropped the "time variable" t above, but it doesn't matter.)
    For example, a differential equation that doesn't satisfy these requirements is X' = X^(1/3) with the initial condition X(0) = 0. This f is not Lipschitz in X near 0, and two solutions can be found through 0: one by separation of variables, and another by simply taking X to be the zero function.
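
    A small sketch checking that non-uniqueness claim symbolically (sympy; the closed form (2t/3)^(3/2) is the separation-of-variables solution):

[code]
# Non-uniqueness when f is not Lipschitz: X' = X**(1/3), X(0) = 0.
# Both the zero function and X(t) = (2t/3)**(3/2) solve it for t >= 0.
import sympy as sp

t = sp.symbols('t', positive=True)
zero_sol = sp.Integer(0)
sep_sol = (sp.Rational(2, 3) * t) ** sp.Rational(3, 2)   # from separation of variables

for X in (zero_sol, sep_sol):
    # X' = X**(1/3) is equivalent to (X')**3 = X for these nonnegative solutions
    print(sp.simplify(sp.diff(X, t) ** 3 - X))           # prints 0 for both
[/code]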
     
    Last edited: Jan 3, 2012
  4. Jan 9, 2012 #3
    And what is [tex]X[/tex]?
     
  5. Jan 10, 2012 #4
    We can consider a differential equation/system as a function f with domain D in R^(n+1), so that
    [tex]f(t, y_1(t), y_2(t), \dots, y_n(t)) = (y_1'(t), y_2'(t), \dots, y_n'(t)) = X'[/tex]
    where [tex]X = (y_1(t), y_2(t), \dots, y_n(t))[/tex].
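
    As a concrete illustration (a sketch; the equation y'' = -y and the use of scipy's solve_ivp are just for demonstration), writing X = (y, y') turns a second-order equation into a first-order system X' = f(t, X):

[code]
# Writing a higher-order equation as a first-order system X' = f(t, X):
# here y'' = -y with X = (y, y'), so f(t, X) = (X[1], -X[0]).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, X):
    return [X[1], -X[0]]          # (y', y'') = (X_2, -X_1)

sol = solve_ivp(f, (0.0, 2 * np.pi), [0.0, 1.0], rtol=1e-8)   # y(0) = 0, y'(0) = 1
print(np.max(np.abs(sol.y[0] - np.sin(sol.t))))               # close to 0: y(t) = sin(t)
[/code]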
     
  6. Jan 12, 2012 #5
    I'm not sure that each one is actually better than the last. Often, sequences converge, but they alternate between getting better and worse as they do so.

    Out of laziness, I might be a little vague here, but if nothing else, this will convey the flavor of how I think about it.

    I like to think of this as a fixed point problem.

    You have a vector field that you want to integrate. You start out with some guess as to what the solution is, maybe a constant flow. Then you integrate all the vectors along that guess (sort of add them up). If the curve obtained by that construction is the curve itself, i.e., a fixed point, it solves the ODE because the tangent vector at each point agrees with the vector field.

    Under the condition that the vector field is locally Lipschitz, this process is a contraction mapping on a space of continuous curves: successive curves get closer together by some fixed factor C < 1 each time (the distance between two curves here is the maximum distance apart over all time t, i.e. the sup norm). So if the first two iterates differ by at most some amount d, the next pair differ by at most Cd, the pair after that by at most C^2 d, and so on. Add these up and you get a convergent geometric series, which forces the iterates to converge to a fixed point (a fixed point because it is the result of iterating a mapping from an appropriate set of curves to itself). And as we said, a fixed point solves the ODE.
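
    A rough numerical illustration of that geometric shrinking (a sketch, again using the toy problem y' = y, y(0) = 1, now on [0, 1/2] so the contraction factor L*T = 0.5 is less than 1):

[code]
# Successive Picard iterates for y' = y, y(0) = 1 on [0, 1/2]:
# the sup-norm gaps ||U_{n+1} - U_n|| shrink at least geometrically,
# with the ratio bounded by L*T = 1 * 0.5.
import numpy as np

t = np.linspace(0.0, 0.5, 2001)
U_prev = np.ones_like(t)                 # U_0 = constant initial guess y0 = 1
gaps = []
for n in range(8):
    # cumulative trapezoid rule for integral_0^t U_n(s) ds
    integral = np.concatenate(([0.0], np.cumsum((U_prev[1:] + U_prev[:-1]) / 2 * np.diff(t))))
    U_next = 1.0 + integral              # U_{n+1} = 1 + integral_0^t U_n(s) ds
    gaps.append(np.max(np.abs(U_next - U_prev)))
    U_prev = U_next

for g_prev, g_next in zip(gaps, gaps[1:]):
    print(g_next / g_prev)               # each ratio stays below 0.5
[/code]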
     