
Derivation of Newton's equation of motion

  Jan 31, 2012 #1
    1. The problem statement, all variables and given/known data

    Hi there, I have been trying to use the Kalman filter to predict the location of some missing data within a 4D volume. I have coded it and got it to work, but only because of the substantial amount of literature that is available on the subject. What I am really confused about is the background of the filter.

    Basically the filter uses the equations of motion to predict the location of missing objects. The details of the algorithm aren't that important at the moment, mainly because I have got it working.

    The problem I have is with the derivation of the equation of motion that everyone uses in the literature.

    I just can't get it right, or at least I can't reproduce what the paper has published.


    2. Relevant equations

    The new position, [itex]x_{k+1}[/itex], is given by:

    [tex]x_{k+1} = x_k + \Delta t\,\frac{dx}{dt} + \frac{1}{2}\Delta t^2\,\frac{d^2x}{dt^2}[/tex]

    The first derivative of this function is reported as being

    [tex]\frac{dx_{k+1}}{dt} = \frac{dx}{dt} + \Delta t\,\frac{d^2x}{dt^2}[/tex]
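
    For reference, I think these two expressions are just the usual constant-acceleration prediction written out component by component; if I collect position and velocity into a state vector and treat acceleration as an input, they seem to combine to (my own rearrangement, not something stated in the paper):

    [tex]\begin{pmatrix} x_{k+1} \\ \dot{x}_{k+1} \end{pmatrix} \approx \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_k \\ \dot{x}_k \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}\Delta t^2 \\ \Delta t \end{pmatrix} \ddot{x}_k[/tex]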



    3. The attempt at a solution

    I can't see for the life of me how they got this. I have a solution of my own, but it is massive, because I assumed:

    1) You could split the function up as it is additive

    2) I took the derivative of x

    3) Then used the product rule on the rest of the function

    I have obviously got this seriously wrong. The paper is here:

    https://extranet.cranfield.ac.uk/stamp/,DanaInfo=ieeexplore.ieee.org+stamp.jsp?tp=&arnumber=1213548


    Any help would be great
     
  Jan 31, 2012 #2

    lanedance

    Homework Helper

    I can't see that paper as it needs a password.

    It looks like just a Taylor series approximation to x, given [itex]x_k[/itex]:
    [tex]x_{k+1} = x(t_{k+1}) \approx x_k + \frac{dx(t)}{dt}\bigg|_{t=t_k}(t_{k+1}-t_k) + \frac{d^2x(t)}{dt^2}\bigg|_{t=t_k}(t_{k+1}-t_k)^2 + \ldots[/tex]

    [tex]= x_k + \frac{dx(t_k)}{dt}\Delta t + \frac{d^2x(t_k)}{dt^2}\Delta t^2 + \ldots[/tex]

    Then with a little rearrangement, and taking the approximation
    [tex]\Delta x_{k+1} = x_{k+1} - x_k \approx \left(\frac{dx(t_k)}{dt} + \frac{d^2x(t_k)}{dt^2}\Delta t\right)\Delta t[/tex]

    which should get you pretty close...
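
    If you want to convince yourself numerically, here is a quick Python sketch (my own toy check, not the paper's algorithm) of the truncated prediction, using the [itex]\tfrac{1}{2}\Delta t^2[/itex] form from your first post:

[code]
import math

def predict(x, v, a, dt):
    """Truncated-Taylor prediction of position and velocity one step ahead."""
    x_new = x + dt * v + 0.5 * dt**2 * a   # x_{k+1} ~ x_k + dt*x' + 0.5*dt^2*x''
    v_new = v + dt * a                     # x'_{k+1} ~ x' + dt*x''
    return x_new, v_new

# Test on x(t) = sin(t), so x' = cos(t) and x'' = -sin(t)
t = 0.3
for dt in (0.1, 0.05, 0.025):
    x_pred, _ = predict(math.sin(t), math.cos(t), -math.sin(t), dt)
    err = abs(x_pred - math.sin(t + dt))
    print(f"dt = {dt:<6} prediction error = {err:.2e}")  # shrinks roughly like dt^3
[/code]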

    Would be interested to see the paper if you have an accessible reference.
     
  Jan 31, 2012 #3

    Dick

    Science Advisor
    Homework Helper

    That's not really an 'equation of motion'. It's a predictor-corrector expression for the time evolution. You could just use [itex]x(t+\Delta t)=x(t)+\Delta t v(t)[/itex] where velocity [itex]v=\frac{dx}{dt}[/itex]. But you would find that form accumulates error quickly. The reason is that you are using the velocity at time t to predict the whole motion from time [itex]t[/itex] to [itex]t+\Delta t[/itex]. It would be much more accurate if you could use an estimate for velocity at the midpoint [itex]t+\frac{\Delta t}{2}[/itex]. They do this by estimating [itex]v(t+\Delta t)=v(t)+\Delta t a(t)[/itex] where the acceleration is [itex]a=\frac{d^2 x}{d t^2}[/itex]. They then estimate [itex]v(t+\frac{\Delta t}{2})[/itex] by the average [itex]\frac{v(t) + v(t+\Delta t)}{2}[/itex].

    That's where your extra term is coming from in the evolution expression. It's not any kind of fundamental change in the equations of motion. It's a numerical analysis trick to improve the accuracy of the estimates.
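
    If it helps to see the difference, here's a rough numerical sketch (my own toy example, not the paper's filter) comparing plain Euler with the half-step form on constant acceleration, where the half-step version is exact:

[code]
# Constant acceleration a, starting from rest at x = 0.
# Exact position after time T is 0.5*a*T^2.
a, T, n = 2.0, 1.0, 100
dt = T / n

x_euler, v_euler = 0.0, 0.0   # plain Euler: x += dt*v(t)
x_half,  v_half  = 0.0, 0.0   # half-step:   x += dt*v(t) + 0.5*dt^2*a(t)
for _ in range(n):
    x_euler += dt * v_euler
    v_euler += dt * a
    x_half  += dt * v_half + 0.5 * dt**2 * a
    v_half  += dt * a

exact = 0.5 * a * T**2
print(x_euler - exact)  # error ~ -0.5*a*T*dt, grows with the step size
print(x_half  - exact)  # zero (up to roundoff) for constant acceleration
[/code]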
     
  Jan 31, 2012 #4

    lanedance

    Homework Helper

    See, I dropped that half as well. Always a pleasure to read your comments, Dick.
     
  Jan 31, 2012 #5

    Dick

    Science Advisor
    Homework Helper

    Thanks, lanedance. That sort of heuristic argument is how I learned it. But just saying it works out to the first few terms of the Taylor series also makes a lot of sense. I never quite thought of it that way, though.
     
  Feb 1, 2012 #7

    D H

    Staff Emeritus
    Science Advisor

    It's not quite Verlet, not quite leapfrog. It looks like a bit of ad hockery that only works when your state comprises 2N elements, N of which are zeroth derivatives and the other N of which are first derivatives. It also looks like this is not a stable integration technique.

    Then again, basic Euler isn't a stable integration technique, either. The standard formulation of the Kalman filter just uses basic Euler integration for the prediction step. I've never seen a "real" Kalman filter that doesn't resort to ad hockery somewhere.
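
    For concreteness, that prediction step usually looks something like this for a one-dimensional position/velocity state (a minimal sketch assuming a constant-velocity model and made-up process noise, not necessarily what the paper does):

[code]
import numpy as np

dt = 0.1
# State is [position, velocity]; basic Euler / constant-velocity transition.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)          # process noise covariance, made up for illustration

def predict(x, P):
    """Kalman prediction: propagate the state estimate and its covariance."""
    x_pred = F @ x            # position += dt * velocity, velocity unchanged
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x = np.array([0.0, 1.0])      # start at x = 0, moving at 1 unit per time unit
P = np.eye(2)
x, P = predict(x, P)
print(x)                      # -> [0.1, 1.0]
[/code]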
     