Is this simple leapfrog method published?

Summary:
The discussion centers on a modified leapfrog method for solving ordinary differential equations (ODEs), which is noted for requiring only one evaluation of the ODE's right-hand side per step and achieving second-order accuracy. Participants question whether this method is published or recognized in the literature, with some suggesting it resembles existing predictor-corrector methods. The method is acknowledged for its simplicity and robustness, though concerns are raised about its conditional stability under certain conditions. Additionally, the importance of conserving physical quantities in simulations is emphasized, highlighting the method's potential applications in fields like quantum mechanics. Overall, while the method is interesting, its novelty and impact on numerical analysis are debated.
eztum
Consider an ODE of general form:
\frac{d}{dt}y(t) = f(y(t),t)
with an initial condition
y(t_0) = y_0.
We create an approximate solution by an initialization step
v := f(y_0,t_0)
t:= t_0
y:= y_0
and iteration of the following modified leapfrog step
\tau := h/2
t += \tau
y += \tau v
v = 2 f(y,t) -v
y += \tau v
t += \tau
which increases t by h.
Thus one has to evaluate the right-hand side of the differential equation only
once per step. The method turns out to be second order and very robust.
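The steps above translate directly into code. Below is a minimal sketch (function names are my own, not from the post): an initialization that stores v = f(y_0,t_0), and a step that drifts half a step, replaces v by 2 f(y,t) - v at the midpoint, and drifts the second half.

```python
# Minimal sketch of the modified leapfrog step described above.
# Function names are my own; the update rule is the one in the post:
# half drift, v -> 2 f(y, t) - v at the midpoint, half drift.

def leapfrog_init(f, y0, t0):
    """Initialization step: store the slope v = f(y0, t0)."""
    return t0, y0, f(y0, t0)

def leapfrog_step(f, state, h):
    """Advance (t, y, v) by one step of size h with a single f-evaluation."""
    t, y, v = state
    tau = h / 2
    t += tau
    y += tau * v           # half step using the stored slope
    v = 2 * f(y, t) - v    # the only evaluation of the right-hand side
    y += tau * v           # half step using the updated slope
    t += tau
    return t, y, v
```

Only the single call to `f(y, t)` inside the step touches the right-hand side, so the cost per step matches Euler's method.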

Has anybody seen this method mentioned in the literature, or is it even well known
in some communities? Please tell me.

More on the method can be found in
http://www.ma.utexas.edu/mp_arc/c/08/08-197.pdf
 
AlephZero
If you apply it to a standard test equation like
y' = ay, \quad y(0) = 1,
it gives
y(h) = 1 + ah + (ah)^2/2,
which, as you say, is second order, but not exactly a "new method". As for "robust", it's only conditionally stable if a < 0.

It's a neat way to do the calculations, but I don't think it's going to start a revolution in numerical analysis.
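A quick numerical check of the two claims above (my own test, not from the thread): on y' = ay a single step reproduces 1 + ah + (ah)^2/2, and halving the step size cuts the global error at a fixed time by roughly a factor of four, as expected for a second-order method.

```python
import math

def step(a, y, v, h):
    """One step of the scheme from the opening post, specialized to y' = a*y."""
    y += (h / 2) * v
    v = 2 * a * y - v
    y += (h / 2) * v
    return y, v

def global_error(a, T, n):
    """Integrate y' = a*y, y(0) = 1, to t = T in n steps; compare with exp(a*T)."""
    y, v = 1.0, a  # initialization: v = f(y0, t0) = a
    for _ in range(n):
        y, v = step(a, y, v, T / n)
    return abs(y - math.exp(a * T))

a, h = -0.8, 0.1
y1, _ = step(a, 1.0, a, h)
z = a * h
single_step_error = abs(y1 - (1 + z + z * z / 2))  # agrees up to roundoff

ratio = global_error(a, 1.0, 50) / global_error(a, 1.0, 100)  # roughly 4
```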
 
Thanks for your post.
Could you please make your statement
AlephZero said:
... is ... not exactly a "new method"
more explicit?

The method under consideration is an explicit method that needs just one evaluation of the ODE's right-hand side per step (like Euler's method) and is second order (unlike Euler's method, which is only first order).
From step to step you can change the time increment (even its sign!) without additional computational burden (in particular, no new initialization step is needed).
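The sign-reversal property is easy to check numerically: one step with -h exactly undoes one step with +h, since both pass through the same intermediate point and midpoint slope, so integrating forward and then backward returns to the initial state up to roundoff. A small sketch of my own (the nonlinear test problem is chosen arbitrarily for illustration):

```python
def step(f, t, y, v, h):
    """One modified leapfrog step: half drift, midpoint 'kick', half drift."""
    tau = h / 2
    y += tau * v
    v = 2 * f(y, t + tau) - v
    y += tau * v
    return t + h, y, v

# A nonlinear test problem: y' = -y^3.
f = lambda y, t: -y ** 3

t, y = 0.0, 1.0
v = f(y, t)                            # initialization step
for _ in range(100):
    t, y, v = step(f, t, y, v, 0.01)   # forward
for _ in range(100):
    t, y, v = step(f, t, y, v, -0.01)  # backward: same state, no re-initialization
# y is back at 1.0 and t at 0.0, up to floating-point roundoff
```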

Are you aware of any documented method (for example, in a textbook or in the original literature) that shares these properties?

If so, is this method of comparable simplicity?

If not, what then is the rationale behind your statement?
 
I don't know whether that specific method is published and/or attributed to anybody, but it's essentially one of a general family of predictor-corrector methods:

For some constants \alpha and \beta, find
y_p = y_0 + \alpha h v_0
v_p = f(y_p,\beta h)
and then find y_h and v_h by a suitable function of y_0, v_0, y_p, v_p, and h.

I can't give you a reference to this off the top of my head, but I would be amazed if I'm the only person who has thought about the problem this way over the last few decades.

Certainly \alpha = \beta = 1 gives a well-known variation on Euler's method, though it is more usual to calculate v_0 directly from the ODE at each step rather than find it by finite differences, and doing that gives two function evaluations per step rather than one.
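For concreteness, the \alpha = \beta = 1 case is the classical Heun (improved Euler) predictor-corrector. The sketch below, my own illustration of the family described above rather than code from the thread, makes the two f-evaluations per step explicit:

```python
def heun_step(f, t, y, h):
    """Improved Euler / Heun: Euler predictor, then average the two slopes."""
    v0 = f(y, t)            # first evaluation, at the current point
    y_p = y + h * v0        # Euler predictor (alpha = 1)
    v_p = f(y_p, t + h)     # second evaluation, at the predicted point (beta = 1)
    return t + h, y + (h / 2) * (v0 + v_p)

# Like the leapfrog variant, this is second order: on y' = a*y one step
# gives 1 + ah + (ah)^2/2, but at the cost of two f-evaluations.
```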

I'm not very interested in getting into a p*ssing competition about "best" methods, but IMO the time saved by using an unconditionally stable method (and hence longer timesteps) far outweighs the number of function evaluations per step.

This is a specialization of a general study of solution schemes for 2nd-order ODEs by Thomas & Gladwell to 1st-order ODEs:
http://www.newcastle.edu.au/Resources/Research%20Centres/CGMM/Publications/Scott%20Sloan/Truncation%20error%20and%20stability%20analysis%20of%20iterative%20and%20non-itreative%20Thomas-Gladwell%20methods%20for%20first-order%20non-linear%20differential%20equations.pdf
 
Dear \aleph_0,
I, too, have no interest in any kind of competition about "best" methods. My only aim was to get help locating the proposed method within the spectrum of published or commonplace methods. Placing it in a family of predictor-corrector methods does not look very natural to me, since the proposed integrator is a composition of three natural substeps ('free motion', 'kick', 'free motion'), not of two.

For the kinds of applications I am primarily interested in, it is most important that conserved quantities, such as total energy and total angular momentum, or the norm of the wave function in quantum mechanics, are preserved in simulations to reasonable accuracy and do not blow up in the long run. From a physical point of view it is almost evident that such behavior can be achieved only for suitably limited time steps. The book 'Simulating Hamiltonian Dynamics' by Leimkuhler and Reich takes a similar view.
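As a sanity check of this point (my own experiment, not taken from the book), applying the scheme to a harmonic oscillator q' = p, p' = -q shows no secular drift of the energy q^2 + p^2 over many periods, provided the time step stays well below the stability limit:

```python
def f(y, t):
    """Harmonic oscillator as a first-order system, y = (q, p)."""
    q, p = y
    return (p, -q)

def step(y, v, t, h):
    """One modified leapfrog step for a vector-valued state."""
    tau = h / 2
    y = tuple(yi + tau * vi for yi, vi in zip(y, v))
    fm = f(y, t + tau)                               # single f-evaluation
    v = tuple(2 * fi - vi for fi, vi in zip(fm, v))
    y = tuple(yi + tau * vi for yi, vi in zip(y, v))
    return y, v, t + h

y, t, h = (1.0, 0.0), 0.0, 0.05
v = f(y, t)                          # initialization step
max_drift = 0.0
for _ in range(5000):                # roughly 40 oscillation periods
    y, v, t = step(y, v, t, h)
    max_drift = max(max_drift, abs(y[0] ** 2 + y[1] ** 2 - 1.0))
# max_drift stays small: the energy oscillates but shows no long-run growth
```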

Thanks for your work!
 
