How does the Kalman filter calculate derivatives?


Discussion Overview

The discussion revolves around the operation of the Kalman filter, specifically how it estimates derivatives such as velocity from position measurements. Participants explore the theoretical underpinnings and practical implications of the filter's mechanics, including its prediction and correction steps.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant questions how the Kalman filter computes higher derivatives like speed from position data, noting that there are no direct sensors for these derivatives.
  • Another participant suggests that numerical methods can estimate velocity from a time series of position data by calculating differences over time intervals.
  • A participant expresses confusion about the absence of explicit difference calculations in Kalman filter examples, seeking clarification on how the filter internally estimates velocity based on position measurements.
  • One participant proposes that the algorithm corrects both the predicted value and its integral by calculating the deviation between predicted and measured values, leveraging properties of Gaussian distributions.
  • The same participant elaborates on the implications of disturbances in the system and how deviations can be used for corrections in both the value and its integral over time.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the specific mechanism by which the Kalman filter estimates derivatives from position measurements. Multiple viewpoints and interpretations of the filter's operation are presented, indicating ongoing debate and exploration of the topic.

Contextual Notes

The discussion includes assumptions about the stability of the system and the nature of Gaussian noise, which are not fully resolved. The mathematical steps involved in the correction process are also not exhaustively detailed.

haael
Suppose we have a Kalman filter. We have a position sensor, for example GPS. We use the filter to estimate position. However, in all the examples I see, the state vector contains higher derivatives: speed, acceleration, and sometimes jerk. There is no sensor that measures these values directly, so they must somehow be computed from the GPS readings.

There is a prediction matrix, but it only tells us how to integrate, not differentiate. The sample matrix for my class of filters looks like this:

\begin{pmatrix} 1 & dt & dt^2/2 \\ 0 & 1 & dt \\ 0 & 0 & 1 \end{pmatrix}

I know it can update a parameter given its higher derivatives, e.g. position from speed. But how can it compute speed from position?

The only place where the differentiation may happen is the correction step. The prediction matrix is used in the computation of Kalman gain so maybe that's how it's done?
Am I correct?
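A minimal numeric sketch (my own, with assumed noise values, not taken from any particular example) shows where the velocity estimate comes from. The prediction matrix F only integrates, but the covariance P couples position and velocity, so the velocity row of the Kalman gain is nonzero and the position residual corrects the velocity in the correction step:

```python
# 1-D constant-velocity Kalman filter measuring position only.
# State is [x, v]; covariance P is the symmetric 2x2 [[p00, p01], [p01, p11]].
dt = 1.0
q, R = 1e-4, 0.25               # process / measurement noise (assumed values)

x, v = 0.0, 0.0                 # initial position and (unknown) velocity
p00, p01, p11 = 1.0, 0.0, 1.0   # initial covariance

true_v = 2.0
for k in range(1, 50):
    z = true_v * k * dt         # noiseless position measurement of a moving target

    # predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
    x = x + v * dt
    p00 = p00 + 2 * dt * p01 + dt * dt * p11 + q
    p01 = p01 + dt * p11
    p11 = p11 + q

    # correct: gain K = P H^T / S with H = [1, 0]
    S = p00 + R
    K0, K1 = p00 / S, p01 / S
    resid = z - x               # a position deviation only...
    x += K0 * resid             # ...yet it updates position
    v += K1 * resid             # ...and velocity, through K1 = p01 / S
    p11 = p11 - K1 * p01        # P <- (I - K H) P
    p01 = (1 - K0) * p01
    p00 = (1 - K0) * p00

print(v)                        # velocity estimate, learned from position alone
```

No explicit differencing appears anywhere; the velocity correction happens entirely through the gain term `K1 = p01 / S`.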
 
If you have a time series of position coordinates, there are several numerical approaches to computing velocities.

The simplest is to estimate the velocity as the change in the position vector divided by the time interval between adjacent data points in the series.
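In code, the simplest version is one line (the sample data here is made up for illustration):

```python
# Finite-difference velocity estimate from a position time series.
positions = [0.0, 1.1, 2.0, 3.2, 3.9]  # hypothetical samples, metres
dt = 1.0                               # sampling interval, seconds

# v[i] is approximately (x[i+1] - x[i]) / dt
velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
print(velocities)
```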
 
Your answer seems obviously correct, but my question is: how does the Kalman filter do it? I don't see the above-mentioned procedure (calculating differences) in any of the examples I've seen. On the contrary, it seems the KF somehow does it internally. Authors extend the state vector with the derivative, say velocity, and there you go: the velocity is estimated by the filter. I'm asking: how does that happen? Which step of the Kalman filter estimates velocity based on position measurements?
 
I finally got it. If anybody's interested:

At some point the algorithm calculates the difference between the predicted value and the measured value: the deviation. This then serves as a correction for both the value and its integral.

It takes advantage of the fact that the sum of many independent Gaussian random variables with zero mean is itself Gaussian with zero mean.

Let's suppose the system is in a "stable" state, meaning the predicted values are always correct. The measured values carry Gaussian noise, so the deviation of the value will also be Gaussian noise with zero mean. The same holds for its integral.

\begin{align*}
\Delta x(t) &= x_{predict}(t) - x_{measure}(t) \\
\sum_{i=0}^{t} \Delta x(i) &= \sum_{i=0}^{t} x_{predict}(i) - \sum_{i=0}^{t} x_{measure}(i) \\
E(\Delta x(t)) &= 0 \\
E\!\left(\sum_{i=0}^{t} \Delta x(i)\right) &= 0
\end{align*}

Now suppose that at time T = t + 1 some disturbance hits the system, so the predicted and measured values no longer agree.

\begin{align*}
T &= t + 1 \\
\Delta x(T) &= x_{predict}(T) - x_{measure}(T) \\
\sum_{i=0}^{T} \Delta x(i) &= \sum_{i=0}^{T} x_{predict}(i) - \sum_{i=0}^{T} x_{measure}(i)
\end{align*}

We have:

\begin{align*}
\sum_{i=0}^{T} \Delta x(i) &= \sum_{i=0}^{t} \Delta x(i) + \Delta x(T) \\
E\!\left(\sum_{i=0}^{T} \Delta x(i)\right) &= E\!\left(\sum_{i=0}^{t} \Delta x(i) + \Delta x(T)\right) \\
&= E\!\left(\sum_{i=0}^{t} \Delta x(i)\right) + E(\Delta x(T)) \\
&= E(\Delta x(T))
\end{align*}

Immediately after the disturbance, the mean deviations of the value and of its integral are equal. That means the deviation of the value is equally good for correcting both the value and its integral. After a while this is no longer strictly true, but it remains a good approximation, especially if the system converges back to the "stable" state.
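A toy fixed-gain tracker (an alpha-beta filter, a simplified cousin of the Kalman filter; the gains here are hand-picked assumptions) makes the mechanism concrete: a single position deviation corrects both the position estimate and its derivative, and the velocity estimate recovers after a sudden disturbance.

```python
# Alpha-beta filter: one position residual corrects value AND derivative.
alpha, beta = 0.85, 0.5   # fixed gains, chosen by hand (assumption)
dt = 1.0
x_est, v_est = 0.0, 0.0

# True motion: at rest, then a sudden velocity disturbance at t = 5.
def true_position(t):
    return 0.0 if t < 5 else 3.0 * (t - 5)

for t in range(1, 20):
    # predict by integrating, exactly as the prediction matrix does
    x_pred = x_est + v_est * dt
    # residual: measured minus predicted (the negative of the deviation above;
    # either sign convention works if the correction sign matches)
    resid = true_position(t) - x_pred
    # the same residual corrects the value and its derivative
    x_est = x_pred + alpha * resid
    v_est = v_est + (beta / dt) * resid

print(v_est)   # converges toward the true velocity 3.0
```

A full Kalman filter does the same thing, except the two gains are recomputed each step from the covariance instead of being fixed.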
 
