# Derivation of Newton's equation of motion

1. Jan 31, 2012

### physical101

1. The problem statement, all variables and given/known data

Hi there, I have been trying to use the Kalman filter to predict the location of some missing data within a 4D volume. I have coded it and got it to work, but only because of the substantial amount of literature available on the subject. What I am really confused about is the background of the filter.

Basically the filter uses the equations of motion to predict the location of missing objects. The details of the algorithm aren't that important at the moment, mainly because I have got it working.

The problem I have is with the derivation of the equation of motion everyone uses in the literature.

I just can't get my result to match what the paper has published.

2. Relevant equations

The new position, $x_{k+1}$, is given by:

$$x_{k+1} = x_k + \Delta t \frac{dx}{dt} + \frac{1}{2} \Delta t^2 \frac{d^2x}{dt^2}$$

The first derivative of this expression is reported as

$$\frac{dx_{k+1}}{dt} = \frac{dx}{dt} + \Delta t \frac{d^2x}{dt^2}$$
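For concreteness, these two equations are just the constant-acceleration kinematic update, and they can be written as a single matrix step. A minimal sketch, not taken from the paper — the state layout and the numbers are purely illustrative:

```python
import numpy as np

# Constant-acceleration kinematic update written as one matrix step.
# State: [position, velocity, acceleration]; dt is the time step.
dt = 0.1
F = np.array([[1.0, dt,  0.5 * dt**2],   # x_{k+1} = x_k + dt*v_k + (dt^2/2)*a_k
              [0.0, 1.0, dt],            # v_{k+1} = v_k + dt*a_k
              [0.0, 0.0, 1.0]])          # a_{k+1} = a_k (constant acceleration)

state = np.array([2.0, 3.0, 4.0])        # x = 2, v = 3, a = 4
new_state = F @ state
print(new_state)                         # position 2 + 3*0.1 + 0.5*4*0.01 = 2.32
```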

3. The attempt at a solution

I can't see for the life of me how they got this. I have a solution of my own, but it is massive because I assumed:

1) You could split the function up, since it is additive

2) I took the derivative of x

3) Then I used the product rule on the rest of the function

I have obviously got this seriously wrong. The paper is here:

https://extranet.cranfield.ac.uk/stamp/,DanaInfo=ieeexplore.ieee.org+stamp.jsp?tp=&arnumber=1213548

Any help would be great

2. Jan 31, 2012

### lanedance

I can't see that paper as it needs a password.

It looks like just a Taylor series approximation to x, given x_k:
$$x_{k+1} = x(t_{k+1}) \approx x_k + \frac{dx(t)}{dt}\Big|_{t=t_k}(t_{k+1}-t_k) + \frac{1}{2}\frac{d^2x(t)}{dt^2}\Big|_{t=t_k}(t_{k+1}-t_k)^2 + \dots = x_k + \frac{dx(t_k)}{dt}\Delta t + \frac{1}{2}\frac{d^2x(t_k)}{dt^2}\Delta t^2 + \dots$$

Then, with a little rearrangement and truncating at second order:
$$\Delta x_{k+1} = x_{k+1} - x_k \approx \left(\frac{dx(t_k)}{dt} + \frac{1}{2}\frac{d^2x(t_k)}{dt^2}\Delta t\right)\Delta t$$

which should get you pretty close...
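One quick way to convince yourself it really is just the truncated Taylor series: evaluate the second-order step for a trajectory whose derivatives are known in closed form and check the residual. A small check — the choice of x(t) = sin(t) is arbitrary:

```python
import math

# Numerical check of the second-order Taylor step for a known trajectory,
# here x(t) = sin(t), so dx/dt = cos(t) and d2x/dt2 = -sin(t).
t, dt = 0.7, 0.01
x, v, a = math.sin(t), math.cos(t), -math.sin(t)

taylor = x + v * dt + 0.5 * a * dt**2    # x_{k+1} ~ x_k + v*dt + a*dt^2/2
exact = math.sin(t + dt)
print(abs(taylor - exact))               # residual is O(dt^3), well under 1e-6 here
```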

would be interested to see the paper if you have a reference

3. Jan 31, 2012

### Dick

That's not really an 'equation of motion'. It's a predictor-corrector expression for the time evolution. You could just use $x(t+\Delta t)=x(t)+\Delta t v(t)$ where velocity $v=\frac{dx}{dt}$. But you would find that form accumulates error quickly. The reason is that you are using the velocity at time t to predict the whole motion from time $t$ to $t+\Delta t$. It would be much more accurate if you could use an estimate for velocity at the midpoint $t+\frac{\Delta t}{2}$. They do this by estimating $v(t+\Delta t)=v(t)+\Delta t a(t)$ where the acceleration is $a=\frac{d^2 x}{d t^2}$. They then estimate $v(t+\frac{\Delta t}{2})$ by the average $\frac{v(t) + v(t+\Delta t)}{2}$.

That's where your extra term is coming from in the evolution expression. It's not any kind of fundamental change in the equations of motion. It's a numerical analysis trick to improve the accuracy of the estimates.
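The effect of that extra half-acceleration term is easy to see numerically. A minimal sketch comparing the velocity-only Euler step with the corrected step on constant-acceleration free fall, where the corrected step happens to be exact — the numbers are illustrative, not from the paper:

```python
# Compare plain Euler (velocity-only) with the half-acceleration correction
# on constant-acceleration free fall, where the corrected step is exact.
g, dt, steps = 9.8, 0.1, 10                # fall for 1 second in 0.1 s steps

x_euler = x_corr = v = 0.0
for _ in range(steps):
    x_euler += v * dt                      # x_{k+1} = x_k + v_k*dt
    x_corr  += v * dt + 0.5 * g * dt**2    # x_{k+1} = x_k + v_k*dt + (dt^2/2)*a
    v += g * dt                            # v_{k+1} = v_k + a*dt

exact = 0.5 * g * 1.0**2                   # x(t) = g*t^2/2 at t = 1 s
print(exact - x_euler, exact - x_corr)     # Euler lags by 0.49 m; corrected is exact
```

With a varying acceleration the corrected step is no longer exact, but its per-step error drops from O(Δt²) to O(Δt³), which is exactly the midpoint-velocity argument above.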

4. Jan 31, 2012

### lanedance

5. Jan 31, 2012

### Dick

Thanks, lanedance. That sort of heuristic argument is how I learned it. But just saying it works out to the first few terms of the Taylor series also makes a lot of sense; I never quite thought of it that way.

7. Feb 1, 2012

### D H

Staff Emeritus
It's not quite Verlet, not quite leapfrog. It looks like a bit of ad hockery that only works when your state comprises 2N elements, N of which are zeroth derivatives and the other N of which are first derivatives. It also looks like this is not a stable integration technique.

Then again, basic Euler isn't a stable integration, either. The standard formulation for the Kalman filter just uses basic Euler integration for the prediction step. I've never seen a "real" Kalman filter that doesn't resort to ad hockery somewhere.
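For reference, the basic-Euler prediction step in the standard Kalman formulation looks like this. A minimal sketch assuming a 1-D constant-velocity state [position, velocity] — the process-noise covariance Q and all numbers are made up for illustration:

```python
import numpy as np

# Kalman prediction step with basic Euler integration, as in the standard
# constant-velocity formulation: state is [position, velocity].
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])        # Euler state transition: x += v*dt
Q = 0.01 * np.eye(2)              # process noise covariance (assumed value)

x = np.array([0.0, 2.0])          # position 0, velocity 2
P = np.eye(2)                     # state covariance (assumed value)

# Predict: propagate the mean, and propagate + inflate the covariance.
x_pred = F @ x
P_pred = F @ P @ F.T + Q

print(x_pred)                     # [2., 2.]
```

The ad hoc two-derivative transition discussed above would replace F (and widen the state) while leaving the predict step itself unchanged.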