haruspex said:
I do not see how that is inherently better in general. It merely approximates the average value of the angular velocity over the interval as being the value at the end of the interval instead of that at the start of the interval. A more convincing tweak would be to use the average of those two values, even though that might happen to give an inferior result in the pendulum case, as shown below.
The mean value theorem says that for every smooth function ##f(x)## and every interval ##[a,b]## over which ##f(x)## is defined, there exists some ##c \in [a,b]## such that ##f(b)= f(a) + (b-a)\,f'(c)##. In other words, there exists some magical intermediate point such that the derivative at that point can be used to exactly compute the next value in the sequence ##\{(t_0, f(t_0)),\, (t_1,f(t_1)),\,\cdots\}##. The trick is to find that magical point, or equivalently, to find the magical derivative value.
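As a concrete illustration (my example, not part of the quoted posts): for ##f(x) = x^2## we have ##f(b) - f(a) = (b-a)(a+b) = (b-a)\,f'\!\left(\frac{a+b}{2}\right)##, so the magical point is exactly the midpoint of the interval. Using the derivative at the start of the interval, at the end, or the average of the two are all cheap attempts to approximate that unknown point.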
haruspex said:
The reason the change as you suggest does produce a big improvement in the pendulum case is quite subtle.
It happens that for a pendulum ##\theta, \alpha## always have opposite signs and ##\omega, \dot\alpha## likewise. As a result, each step of the post #1 algorithm overestimates gains in both KE and GPE and underestimates losses. Using the angular velocity at the end of the current step to estimate the angle change switches that around for GPE, so now the two errors tend to cancel.
We should not expect that benefit to carry over to differential equation modelling in general.
It's not subtle because of that sign behavior; it is even more subtle than you think, and, to address your final line, it most definitely does carry over to many other places where a differential equation is integrated numerically. The underlying ODE in this case is a second order ODE. Rewriting a system of ##N## second order ODEs (e.g., one per axis) as a system of ##2N## first order ODEs is possible, but doing so throws out geometry. That a plane pendulum is governed by a second order ODE is one aspect of "geometry". Another aspect of "geometry" is that an undamped plane pendulum can be expressed via a Lagrangian or Hamiltonian. Symplectic Euler is the simplest symplectic integrator, i.e., the simplest scheme that does not throw the geometry out with the bath water. A symplectic integrator conserves a very close analog of energy in situations where energy is conserved.
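For concreteness, here is a minimal Python sketch (my own, not from the thread) of symplectic Euler applied to an undamped plane pendulum; the parameter names and values (g, L, dt, n_steps) are illustrative. The only difference from explicit Euler is that the angle update uses the freshly updated angular velocity.

```python
import math

def symplectic_euler_pendulum(theta0, omega0, dt, n_steps, g=9.81, L=1.0):
    """Semi-implicit (symplectic) Euler for an undamped plane pendulum.

    The angular velocity is updated first, and the *new* velocity is then
    used to advance the angle; that one-line change is what distinguishes
    this from explicit Euler and makes the scheme symplectic.
    """
    theta, omega = theta0, omega0
    history = [(0.0, theta, omega)]
    for i in range(n_steps):
        alpha = -(g / L) * math.sin(theta)  # angular acceleration
        omega += alpha * dt                 # kick: update velocity first
        theta += omega * dt                 # drift: use the updated velocity
        history.append(((i + 1) * dt, theta, omega))
    return history
```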
An even better approach is the leapfrog technique, which can be written as
\begin{aligned}
a_i &= f(t_i, x_i, v_i) \\
v_{i+1/2} &= v_i + \frac12 a_i \Delta t \\
x_{i+1} &= x_i + v_{i+1/2} \Delta t \\
t_{i+1} &= t_i + \Delta t \\
a_{i+1} &= f(t_{i+1}, x_{i+1}, v_{i+1}) \\
v_{i+1} &= v_{i+1/2} + \frac12 a_{i+1} \Delta t
\end{aligned} where ##a_i## is the acceleration at time ##t_i##, calculated by some derivative function ##f(t_i, x_i, v_i)##.
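To make the notation concrete, here is a direct Python transcription of those six updates (a sketch of my own, not from the thread), written for the case where ##f## does not depend on velocity, as with the drag-free pendulum; the velocity-dependent case is addressed in the notes below.

```python
import math

def pendulum_accel(t, theta, omega, g=9.81, L=1.0):
    """Drag-free pendulum: the acceleration depends only on the angle."""
    return -(g / L) * math.sin(theta)

def leapfrog_step(f, t, x, v, dt):
    """One leapfrog (kick-drift-kick) step, following the equations above.

    Assumes f does not depend on v, so a_{i+1} can be evaluated directly;
    v_{i+1/2} is passed only to keep the f(t, x, v) signature.
    """
    a = f(t, x, v)                      # a_i
    v_half = v + 0.5 * a * dt           # v_{i+1/2}
    x_new = x + v_half * dt             # x_{i+1}
    t_new = t + dt                      # t_{i+1}
    a_new = f(t_new, x_new, v_half)     # a_{i+1}
    v_new = v_half + 0.5 * a_new * dt   # v_{i+1}
    return t_new, x_new, v_new
```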
Note that
- If the derivative function ##f(t,x,v)## does not depend on velocity then it only needs to be computed twice in the initial step and then only once per subsequent step, because the penultimate line of the prior step already computed ##a_{i+1}##, which can be reused as the new step's ##a_i##. This is the case for the drag-free pendulum.
- If the derivative function does not depend on velocity or time then energy is a conserved quantity. This is where symplectic integrators shine because their derivations explicitly conserve a quantity that is a very close analog of energy. Even when this assumption is only approximately true (e.g., small perturbations based on time and/or velocity), a symplectic integrator will still tend to perform much better than a comparable non-symplectic integrator. There are plenty of places in physics where energy is close to being a conserved quantity.
- If, on the other hand, the derivative function does depend on velocity, the penultimate step (##a_{i+1} = f(t_{i+1}, x_{i+1}, v_{i+1})##) cannot be computed without an estimate for ##v_{i+1}##. This makes the last two steps implicit. There are many ways to deal with this; one simple approach is sketched below.
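As one example of those "many ways" (a sketch under my own assumptions, not the only option): start from ##v_{i+1/2}## as the guess for ##v_{i+1}## and fixed-point iterate the last two updates a few times. The iteration count here is an illustrative choice.

```python
def leapfrog_step_velocity_dependent(f, t, x, v, dt, n_iter=2):
    """Leapfrog step when f depends on velocity.

    The last two updates are implicit because a_{i+1} needs v_{i+1}.
    Here v_{i+1/2} serves as the initial guess for v_{i+1}, refined by
    a few fixed-point iterations of the final kick.
    """
    a = f(t, x, v)                       # a_i
    v_half = v + 0.5 * a * dt            # v_{i+1/2}
    x_new = x + v_half * dt              # x_{i+1}
    t_new = t + dt                       # t_{i+1}

    v_new = v_half                       # initial guess for v_{i+1}
    for _ in range(n_iter):
        a_new = f(t_new, x_new, v_new)   # a_{i+1} with the current guess
        v_new = v_half + 0.5 * a_new * dt
    return t_new, x_new, v_new
```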