
How does the Kalman filter calculate derivatives?

  1. Jul 22, 2015 #1
    Suppose we have a Kalman filter and a position sensor, for example GPS. We use the filter to estimate position. However, in all the examples I see, the state vector contains higher derivatives: velocity, acceleration, and sometimes jerk. There is no sensor that measures these values directly, so they must somehow be computed from the GPS readings.

    There is a prediction matrix, but it only tells us how to integrate, not how to differentiate. The sample matrix for my class of filters looks like this:

    \begin{bmatrix} 1 & dt & dt^2/2 \\ 0 & 1 & dt \\ 0 & 0 & 1 \end{bmatrix}

    I know it can update a quantity given its higher derivatives, e.g. position from velocity. But how can it compute velocity from position?

    The only place where the differentiation could happen is the correction step. The prediction matrix is used in the computation of the Kalman gain, so maybe that's how it's done?
    Am I correct?
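    For concreteness, the prediction step with a matrix like the one above can be sketched in Python with NumPy (the time step and state values below are illustrative, not from any particular filter):

    ```python
    import numpy as np

    dt = 0.1  # time step (illustrative value)

    # State-transition (prediction) matrix for state [position, velocity, acceleration]
    F = np.array([[1.0, dt,  dt**2 / 2],
                  [0.0, 1.0, dt       ],
                  [0.0, 0.0, 1.0      ]])

    x = np.array([0.0, 2.0, 0.5])  # position, velocity, acceleration
    x_pred = F @ x                 # predict step: integrates, never differentiates
    ```

    Note how each row only propagates a quantity forward using its own higher derivatives; nothing in this step produces velocity from position, which is exactly the puzzle.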
  3. Jul 22, 2015 #2
    If you have a time series of position coordinates, there are several numerical approaches to computing velocities.

    The simplest is to estimate the velocity as the change in the position vector divided by the time interval between adjacent data points in the series.
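    That first-difference estimate can be sketched in a couple of lines (the timestamps and readings below are made up for illustration):

    ```python
    import numpy as np

    t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])        # timestamps (illustrative)
    pos = np.array([0.0, 0.21, 0.39, 0.62, 0.80])  # noisy position readings

    # Velocity estimate between adjacent samples: delta position over delta time
    vel = np.diff(pos) / np.diff(t)
    ```

    This is noisy (differencing amplifies measurement noise), which is part of why one would rather let a filter do the smoothing.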
  4. Jul 22, 2015 #3
    Your answer is obviously correct, but my question is: how does the Kalman filter do it? I don't see the above-mentioned procedure (taking differences) in any of the examples I've seen. On the contrary, it seems that the KF somehow does it internally. Authors extend the state vector with the derivative, say velocity, and there you go: the velocity is estimated by the filter. I'm asking: how does that happen? Which step of the Kalman filter estimates velocity from position measurements?
  5. Sep 1, 2015 #4
    I finally got it. If anybody's interested:

    At some point, the algorithm calculates the difference between the predicted value and the measured value: the deviation. This deviation then serves as a correction for both the value and its integral.

    It takes advantage of the fact that the sum of many Gaussian random variables with zero mean is also Gaussian with zero mean.

    Let's suppose the system is in a "stable" state, meaning the predicted values are always correct. The measured values carry some Gaussian noise. Then the deviation of the value is also Gaussian noise with zero mean, and the same holds for its running sum (the discrete integral).

    \Delta x(t) = x_{predict}(t) - x_{measure}(t) \\
    \sum_{i=0}^{t} \Delta x(i) = \sum_{i=0}^{t} x_{predict}(i) - \sum_{i=0}^{t} x_{measure}(i) \\
    E(\Delta x(t)) = 0 \\
    E(\sum_{i=0}^{t} \Delta x(i)) = 0
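    The zero-mean claim for both the deviations and their running sum can be checked numerically (sample sizes and seed below are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 10,000 independent realizations of 50 zero-mean Gaussian deviations each
    dx = rng.normal(loc=0.0, scale=1.0, size=(10_000, 50))
    sums = dx.sum(axis=1)  # running sum (discrete integral) of each realization

    # Both the individual deviations and their sums are approximately zero-mean
    mean_dx = dx.mean()
    mean_sum = sums.mean()
    ```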

    Now suppose that at the moment T = t + 1 some disturbance happens to the system, and the predicted and measured values are no longer in agreement.

    T = t + 1 \\
    \Delta x(T) = x_{predict}(T) - x_{measure}(T) \\
    \sum_{i=0}^{T} \Delta x(i) = \sum_{i=0}^{T} x_{predict}(i) - \sum_{i=0}^{T} x_{measure}(i)

    We have:

    \sum_{i=0}^{T} \Delta x(i) = \sum_{i=0}^{t} \Delta x(i) + \Delta x(T) \\
    E(\sum_{i=0}^{T} \Delta x(i)) = E(\sum_{i=0}^{t} \Delta x(i) + \Delta x(T)) \\
    E(\sum_{i=0}^{T} \Delta x(i)) = E(\sum_{i=0}^{t} \Delta x(i)) + E(\Delta x(T)) \\
    E(\sum_{i=0}^{T} \Delta x(i)) = E(\Delta x(T))

    Immediately after the disturbance, the mean deviation of the value and the mean deviation of its integral are equal. That means the deviation of the value is equally good for correcting both the value and its integral. After a while this is no longer strictly true, but it remains a good approximation, especially if the system converges back to the "stable" state.
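    To see the whole mechanism end to end — the velocity estimate emerging even though only position is measured — here is a minimal constant-velocity Kalman filter sketch in Python. All matrices, noise levels, and the simulated trajectory are illustrative choices, not taken from any post above; the key line is the gain update, whose second row maps the position residual into a velocity correction:

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])      # prediction: integrates velocity into position
    H = np.array([[1.0, 0.0]])      # measurement model: position only
    Q = 1e-4 * np.eye(2)            # process noise covariance (illustrative)
    R = np.array([[0.25]])          # measurement noise covariance (illustrative)

    x = np.array([0.0, 0.0])        # initial state: [position, velocity]
    P = np.eye(2)                   # initial state covariance

    rng = np.random.default_rng(1)
    true_vel = 1.5
    for k in range(1, 200):
        z = true_vel * k * dt + rng.normal(0.0, 0.5)  # noisy position measurement

        # Predict step: only integration, via F
        x = F @ x
        P = F @ P @ F.T + Q

        # Correct step: the position residual corrects BOTH state components
        y = z - H @ x                       # deviation (innovation)
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain, shape (2, 1)
        x = x + (K @ y).ravel()             # K's second row updates the velocity
        P = (np.eye(2) - K @ H) @ P
    ```

    After the loop, `x[1]` has converged near the true velocity of 1.5, even though no velocity was ever measured: the cross-covariance between position and velocity in `P` is what routes the position deviation into the velocity estimate, which is the "internal differentiation" the thread asks about.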