
Is this simple leapfrog method published?

  1. Nov 11, 2011 #1
    Consider an ODE of general form:
    [itex]\frac{d}{dt}y(t) = f(y(t),t)[/itex]
    with an initial condition
    [itex]y(t_0) = y_0 [/itex].
    We create an approximate solution by an initialization step
    [itex] v := f(y_0,t_0) [/itex]
    [itex] t:= t_0 [/itex]
    [itex] y:= y_0 [/itex]
    and iteration of the following modified leapfrog step
    [itex] \tau := h/2 [/itex]
    [itex] t += \tau [/itex]
    [itex] y += \tau v [/itex]
    [itex] v := 2 f(y,t) - v [/itex]
    [itex] y += \tau v [/itex]
    [itex] t += \tau [/itex]
    which increases t by h.
    Thus one has to evaluate the right-hand side of the differential equation only
    once per step. The method turns out to be second order and very robust.
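
    For concreteness, here is a minimal sketch of the scheme in Python (the names leapfrog_init and leapfrog_step are just my own labels for the initialization and iteration steps above, not taken from any library):

[code]
# Minimal sketch of the modified leapfrog scheme described above.
def leapfrog_init(f, y0, t0):
    """Initialization: return the state (t, y, v) with v = f(y0, t0)."""
    return t0, y0, f(y0, t0)

def leapfrog_step(f, t, y, v, h):
    """One step of size h: free motion - kick - free motion."""
    tau = h / 2
    t += tau
    y = y + tau * v          # drift to the midpoint
    v = 2 * f(y, t) - v      # the single evaluation of the right-hand side
    y = y + tau * v          # drift to the end of the step
    t += tau
    return t, y, v

# Example: y' = -y, y(0) = 1, integrated to t = 1 with 100 steps.
f = lambda y, t: -y
t, y, v = leapfrog_init(f, 1.0, 0.0)
for _ in range(100):
    t, y, v = leapfrog_step(f, t, y, v, 0.01)
# y is now close to exp(-1), with an O(h^2) error.
[/code]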

    Has anybody seen this method mentioned in the literature, or is it perhaps even well known in some communities? Please let me know.

    More on the method can be found in
    http://www.ma.utexas.edu/mp_arc/c/08/08-197.pdf
     
    Last edited: Nov 11, 2011
  3. Nov 11, 2011 #2

    AlephZero

    Science Advisor
    Homework Helper

    If you apply it to a standard test equation like
    [itex]y' = ay[/itex], [itex]y(0) = 1[/itex],

    it gives [itex]y(h) = 1 + ah + (ah)^2/2[/itex],

    which, as you say, is second order, but not exactly a "new method". As for "robust", it is only conditionally stable when a < 0.
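
    For reference, tracing one step of your update rules through this test equation (with [itex]v_0 = a y_0[/itex] and [itex]\tau = h/2[/itex]; just a back-of-the-envelope check):

    [tex]
    \begin{aligned}
    y_{1/2} &= y_0 + \tau v_0 = y_0\left(1 + \tfrac{ah}{2}\right),\\
    v_1 &= 2 a\, y_{1/2} - v_0 = a y_0 \left(1 + ah\right),\\
    y_1 &= y_{1/2} + \tau v_1 = y_0\left(1 + ah + \tfrac{(ah)^2}{2}\right).
    \end{aligned}
    [/tex]

    So the amplification factor is [itex]R(z) = 1 + z + z^2/2[/itex] with [itex]z = ah[/itex], and [itex]|R(z)| \le 1[/itex] holds on the real axis only for [itex]-2 \le ah \le 0[/itex], which is what I mean by conditionally stable.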

    It's a neat way to do the calculations, but I don't think it's going to start a revolution in numerical analysis.
     
  4. Nov 13, 2011 #3
    Thanks for your post. Could you please make your statement more explicit?

    The method under consideration is an explicit method that needs just one evaluation of the ODE's right-hand side per step (like Euler) and is second order (unlike Euler, which is first order only).
    From step to step you can change the time-increment (also its sign!) without additional computational burden (in particular, no new initialization step); see the quick check below.
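
    As a quick check with the Python sketch from my first post (again, leapfrog_step is just my own naming), stepping forward by h and then by -h returns exactly to the starting state in exact arithmetic, because the midpoint evaluation is the same in both directions:

[code]
# Reversibility check using the leapfrog_step sketch from post #1.
f = lambda y, t: -y
t, y, v = leapfrog_init(f, 1.0, 0.0)
t, y, v = leapfrog_step(f, t, y, v, 0.1)    # forward step
t, y, v = leapfrog_step(f, t, y, v, -0.1)   # backward step, no re-initialization
# (t, y, v) is again (0.0, 1.0, -1.0) up to round-off.
[/code]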

    Are you aware of any documented method (for example in a book, or in the original literature) that shares this property?

    If so, is this method of comparable simplicity?

    If not, what then is the rationale behind your statement?
     
  5. Nov 13, 2011 #4

    AlephZero

    Science Advisor
    Homework Helper

    I don't know if that specific method is published and/or attributed to anybody, but it's essentially one of a general family of predictor-corrector methods:

    For some constants [itex]\alpha[/itex] and [itex]\beta[/itex], find
    [itex]y_p = y_0 + \alpha h v_0[/itex]
    [itex]v_p = f(y_p,\ t_0 + \beta h)[/itex]
    and then find [itex]y_h[/itex] and [itex]v_h[/itex] by a suitable function of [itex]y_0[/itex], [itex]v_0[/itex], [itex]y_p[/itex], [itex]v_p[/itex], and [itex]h[/itex].
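
    In that template your scheme seems to correspond to [itex]\alpha = \beta = 1/2[/itex] (my reading of it, so take it with a grain of salt):

    [tex]
    \begin{aligned}
    y_p &= y_0 + \tfrac{h}{2} v_0, \qquad v_p = f\!\left(y_p,\ t_0 + \tfrac{h}{2}\right),\\
    v_h &= 2 v_p - v_0, \qquad y_h = y_p + \tfrac{h}{2} v_h = y_0 + h\, v_p ,
    \end{aligned}
    [/tex]

    i.e. a midpoint-type predictor followed by a corrector that reuses the single function evaluation.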

    I can't give you a reference to this off the top of my head, but I would be amazed if I'm the only person who has thought about the problem this way over the last few decades.

    Certainly [itex]\alpha = \beta = 1[/itex] gives a well-known variation on Euler's method, though it's more usual to calculate [itex]v_0[/itex] directly from the ODE at each step rather than find it by finite differences, and doing that gives 2 function evaluations per step rather than 1.

    I'm not very interested in getting into a p*ssing competition about "best" methods, but IMO the time saved by using an unconditionally stable method (and hence longer timesteps) far outweighs the number of function evaluations per step.

    The paper below specializes a general study of solution schemes for 2nd-order ODEs by Thomas & Gladwell to 1st-order ODEs:
    http://www.newcastle.edu.au/Resources/Research%20Centres/CGMM/Publications/Scott%20Sloan/Truncation%20error%20and%20stability%20analysis%20of%20iterative%20and%20non-itreative%20Thomas-Gladwell%20methods%20for%20first-order%20non-linear%20differential%20equations.pdf [Broken]
     
    Last edited by a moderator: May 5, 2017
  6. Nov 14, 2011 #5
    Dear [itex]\aleph_0[/itex],
    I, too, was not interested in any kind of competition about "best" methods. My only desire was to get help in locating the proposed method within the spectrum of published or commonplace methods. Placing it within a family of predictor-corrector methods does not look very natural to me, since the proposed integrator is a product of three natural substeps ('free motion', 'kick', 'free motion') and not of two.

    For the kind of applications I'm primarily interested in, it is most important that conserved quantities, such as total energy and total angular momentum, or the norm of the wavefunction in quantum mechanics, are reproduced in simulations to reasonable accuracy and do not blow up in the long run. From a physical point of view it is nearly evident that such behavior can be achieved only for suitably limited time-steps. The book by Leimkuhler and Reich, 'Simulating Hamiltonian Dynamics', presents a similar view.

    Thanks for your work!
     