Derivation of Newton's equation of motion

physical101
Homework Statement



Hi there, I have been trying to use the Kalman filter to predict the location of some missing data within a 4D volume. I have coded it and got it to work, but only because of the substantial amount of literature available on the subject. What I am really confused about is the background of the filter.

Basically, the filter uses the equations of motion to predict the location of missing objects. The details of the algorithm aren't that important at the moment, mainly because I have got it working.

The problem I have is with the derivation of the equation of motion that everyone uses in the literature.

I just can't reproduce it; well, not what the paper has published anyway.


Homework Equations



The new position, x_{k+1}, is given by:

x_{k+1} = x_k + \Delta t \frac{dx}{dt} + \frac{1}{2}\Delta t^2 \frac{d^2x}{dt^2}

The first derivative of this function is reported as being

\frac{dx_{k+1}}{dt} = \frac{dx}{dt} + \Delta t \frac{d^2x}{dt^2}



The Attempt at a Solution



I can't see for the life of me how they got this. I have a solution of my own, but it is massive because I assumed:

1) You could split the function up as it is additive

2) I took the derivative of x

3) Then used the product rule on the rest of the function

I have obviously got this seriously wrong. The paper is here:

https://extranet.cranfield.ac.uk/stamp/,DanaInfo=ieeexplore.ieee.org+stamp.jsp?tp=&arnumber=1213548


Any help would be great
 
I can't see that paper as it needs a password.

It looks like just a Taylor series approximation to x, given x_k:
x_{k+1} = x(t_{k+1}) \approx x_k + \frac{dx(t)}{dt}\bigg|_{t=t_k}(t_{k+1}-t_k) + \frac{d^2x(t)}{dt^2}\bigg|_{t=t_k}(t_{k+1}-t_k)^2 + \ldots = x_k + \frac{dx(t_k)}{dt}\Delta t + \frac{d^2x(t_k)}{dt^2}\Delta t^2 + \ldots

Then with a little rearrangement, and taking the approximation
\Delta x_{k+1} = x_{k+1} - x_k \approx \left(\frac{dx(t_k)}{dt} + \frac{d^2x(t_k)}{dt^2}\Delta t\right)\Delta t

which should get you pretty close...

I would be interested to see the paper if you have a reference.
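As a quick numerical sanity check (a Python sketch with a toy trajectory of my own choosing, not anything from the paper), the Taylor step with the 1/2 factor on the second-order term predicts the next sample with O(Δt³) error:

```python
import math

def taylor_predict(x, v, a, dt):
    # Second-order Taylor step: x(t + dt) ~ x + v*dt + (1/2)*a*dt^2,
    # where v = dx/dt and a = d^2x/dt^2 evaluated at time t.
    return x + v * dt + 0.5 * a * dt ** 2

# Toy trajectory x(t) = sin(t), so dx/dt = cos(t) and d^2x/dt^2 = -sin(t).
t, dt = 0.3, 0.01
pred = taylor_predict(math.sin(t), math.cos(t), -math.sin(t), dt)
exact = math.sin(t + dt)
print(abs(pred - exact))  # truncation error is O(dt^3), well under 1e-6 here
```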
 
That's not really an 'equation of motion'. It's a predictor-corrector expression for the time evolution. You could just use x(t+\Delta t)=x(t)+\Delta t v(t) where velocity v=\frac{dx}{dt}. But you would find that form accumulates error quickly. The reason is that you are using the velocity at time t to predict the whole motion from time t to t+\Delta t. It would be much more accurate if you could use an estimate for velocity at the midpoint t+\frac{\Delta t}{2}. They do this by estimating v(t+\Delta t)=v(t)+\Delta t a(t) where the acceleration is a=\frac{d^2 x}{d t^2}. They then estimate v(t+\frac{\Delta t}{2}) by the average \frac{v(t) + v(t+\Delta t)}{2}.

That's where your extra term is coming from in the evolution expression. It's not any kind of fundamental change in the equations of motion. It's a numerical analysis trick to improve the accuracy of the estimates.
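A tiny Python sketch of that argument (made-up numbers, not the paper's algorithm): for constant acceleration, averaging v(t) and v(t+Δt) to estimate the midpoint velocity recovers exactly the (1/2) a Δt² term that plain Euler misses.

```python
def euler_step(x, v, a, dt):
    # Plain Euler: uses the velocity at time t for the whole interval.
    return x + dt * v

def midpoint_step(x, v, a, dt):
    # Estimate v(t + dt) = v(t) + dt*a, then use the average of v(t) and
    # v(t + dt), i.e. an estimate of the velocity at the midpoint t + dt/2.
    v_next = v + dt * a
    return x + dt * 0.5 * (v + v_next)

# For constant acceleration the exact update is x + v*dt + 0.5*a*dt^2.
x, v, a, dt = 0.0, 2.0, 3.0, 0.1
exact = x + v * dt + 0.5 * a * dt ** 2
print(euler_step(x, v, a, dt) - exact)     # Euler misses the 0.5*a*dt^2 term
print(midpoint_step(x, v, a, dt) - exact)  # midpoint recovers it exactly
```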
 
See, I dropped that half as well. Always a pleasure to read your comments, Dick.
 
lanedance said:
See, I dropped that half as well. Always a pleasure to read your comments, Dick.

Thanks, lanedance. That sort of heuristic argument is how I learned it. But just saying it works out to the first few terms of the Taylor series also makes a lot of sense. I never quite thought of it that way, though.
 
Ahhh, I remember this now. I think we used the same sort of approach when I studied the Verlet algorithm. As you can tell, I was not top of my class, lol. I don't think I can post the original paper, but this link covers most things about the Kalman filter:

http://blog.cordiner.net/2011/05/03/object-tracking-using-a-kalman-filter-matlab/
 
It's not quite Verlet, not quite leapfrog. It looks like a bit of ad hockery that only works when your state comprises 2N elements, N of which are zeroth derivatives and the other N of which are first derivatives. It also looks like this is not a stable integration technique.

Then again, basic Euler isn't a stable integration method, either. The standard formulation of the Kalman filter just uses basic Euler integration for the prediction step. I've never seen a "real" Kalman filter that doesn't resort to ad hockery somewhere.
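For reference, the prediction step of that standard formulation can be sketched as follows (Python; the 2-element [position, velocity] state and the numbers are my own illustration, not from any particular paper):

```python
def predict_state(state, dt):
    """One Kalman-style prediction step for a state [position, velocity],
    using the constant-velocity transition matrix F = [[1, dt], [0, 1]].
    This is basic Euler integration of the position."""
    x, v = state
    return [x + dt * v, v]

state = [0.0, 2.0]      # start at x = 0 with velocity v = 2
for _ in range(10):
    state = predict_state(state, 0.1)
print(state)            # position is ~2.0 after integrating v = 2 for t = 1
```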
 