# Is this simple leapfrog method published?

Consider an ODE of general form:
$\frac{d}{dt}y(t) = f(y(t),t)$
with an initial condition
$y(t_0) = y_0$.
We create an approximate solution by an initialization step
$v := f(y_0,t_0)$
$t:= t_0$
$y:= y_0$
and iteration of the following modified leapfrog step
$\tau := h/2$
$t += \tau$
$y += \tau v$
$v = 2 f(y,t) -v$
$y += \tau v$
$t += \tau$
which increases t by h.
Thus one has to evaluate the right-hand side of the differential equation only
once per step. The method turns out to be second order and very robust.

Has anybody seen this method mentioned in the literature, or is it perhaps even well known in some communities? Please let me know.

More on the method can be found in
http://www.ma.utexas.edu/mp_arc/c/08/08-197.pdf


AlephZero
If you apply it to a standard test equation like
$y' = ay$, $y(0) = 1$,

it gives $y(h) = 1 + ah + (ah)^2/2$,

which, as you say, is second order, but not exactly a "new method". As for "robust": for $a < 0$ it is only conditionally stable.
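That one-step factor is easy to confirm numerically; a quick sketch (the `step` function below just re-implements the iteration from the first post):

```python
def step(f, y, t, v, h):
    # one step of the scheme from the first post
    tau = h / 2
    t += tau
    y += tau * v
    v = 2 * f(y, t) - v
    y += tau * v
    t += tau
    return y, t, v

a, h = -3.0, 0.1
f = lambda y, t: a * y
y1, t1, v1 = step(f, 1.0, 0.0, f(1.0, 0.0), h)
# y1 should match 1 + a*h + (a*h)**2 / 2 up to roundoff
```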

It's a neat way to do the calculations, but I don't think it's going to start a revolution in numerical analysis.

> ... is ... not exactly a "new method"

Could you be more explicit?

The method under consideration is an explicit method that needs just one evaluation of the ODE's right-hand side per step (like Euler's method), yet is second order (unlike Euler's method, which is only first order).
From step to step you can change the time increment (even its sign!) without any additional computational burden (in particular, no new initialization step).
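The sign-reversal claim can be exercised directly: the step is algebraically its own inverse, so stepping with $h$ and then with $-h$ restores $y$, $t$, and the carried slope $v$ up to roundoff. A sketch (re-implementing the step from the first post; the right-hand side is just an arbitrary nonlinear example):

```python
def step(f, y, t, v, h):
    # one step of the scheme from the first post
    tau = h / 2
    t += tau
    y += tau * v
    v = 2 * f(y, t) - v
    y += tau * v
    t += tau
    return y, t, v

f = lambda y, t: t - y * y            # arbitrary nonlinear test problem
y0, t0 = 2.0, 0.0
v0 = f(y0, t0)
y, t, v = step(f, y0, t0, v0, 0.1)    # forward step
y, t, v = step(f, y, t, v, -0.1)      # backward step, no re-initialization
# (y, t, v) is back at (y0, t0, v0) up to roundoff
```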

Are you aware of any documented method (for example in a textbook, or in the original literature) that shares this property?

If so, is this method of comparable simplicity?

If not, what then is the rationale behind your statement?

AlephZero
I don't know if that specific method is published and/or attributed to anybody, but it's essentially one of a general family of predictor-corrector methods:

For some constants $\alpha$ and $\beta$, find
$y_p = y_0 + \alpha h v_0$
$v_p = f(y_p,\beta h)$
and then find $y_h$ and $v_h$ by a suitable function of $y_0$, $v_0$, $y_p$, $v_p$, and $h$.

I can't give you a reference to this off the top of my head, but I would be amazed if I'm the only person who has thought about the problem this way over the last few decades.

Certainly $\alpha = \beta = 1$ gives a well-known variation on Euler's method, though it's more usual to calculate $v_0$ directly from the ODE at each step rather than find it by finite differences, and doing that gives 2 function evaluations per step rather than 1.

I'm not very interested in getting into a p*ssing competition about "best" methods, but IMO the time saved by using an unconditionally stable method (and hence longer timesteps) far outweighs the number of function evaluations per step.
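The stability trade-off can be quantified with the one-step amplification factor quoted earlier in the thread: on $y' = ay$ the scheme multiplies $y$ by $R(ah) = 1 + ah + (ah)^2/2$, and on the real axis $|R| \le 1$ only for $-2 \le ah \le 0$, so a stiff (large negative $a$) problem forces small steps. A quick check:

```python
def R(z):
    # one-step amplification factor on y' = a*y, with z = a*h
    return 1 + z + z * z / 2

# |R(z)| <= 1 on the real axis exactly for -2 <= z <= 0;
# outside that interval the numerical solution grows.
```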

Dear $\aleph_0$,

This is a specialization to 1st-order ODEs of a general study of solution schemes for 2nd-order ODEs by Thomas & Gladwell: