
Data fusion using extended Kalman filter

  1. Jul 10, 2012 #1
    I am presently working on fusion of redundant sensor data, basically trying to put together data from an IMU, a gyro, and odometry.
    My question is: can the accuracy of the final state estimate be greater than the accuracy of the most accurate sensor used in the EKF?

    Thanks in advance for the help!
  3. Jul 11, 2012 #2
    I'm not sure I understand the logic of your question. Are you asking whether the accuracy of the estimate from the filtered aggregate data would be greater than that of the most accurate signal (by your definition) within the aggregated (or "fused") data?
  4. Jul 11, 2012 #3
    Yes, that is exactly my question. The project I am working on has a few redundant sensors, and I was wondering whether it would be worth running an EKF.
  5. Jul 11, 2012 #4
    Are you planning on using multiple sensors or just one? If it's just one, then I would simply run the most accurate one. The fact that you're using the extended filter indicates you have a non-linear situation. These filters are very sensitive to the initial state: if that state is an outlier on the Gaussian curve (for either the state-transition or measurement noise), your model may fail. Given you already have an accurate sensor (a good signal-to-noise ratio), you won't get any improvement by aggregating it with noisier sensors, unless I'm missing something here.
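    That said, in the idealized linear-Gaussian case, the optimal combination of two unbiased measurements (inverse-variance weighting) never has higher variance than the better sensor alone; whether that carries over to a non-linear EKF depends on how well the linearization holds. A minimal sketch (the `fuse` helper and the variance numbers are illustrative, not from any library):

```python
import numpy as np

# Linear-Gaussian sanity check (not the full EKF case): fuse two
# unbiased measurements of the same quantity by inverse-variance
# weighting, the optimal linear combination.
def fuse(z1, var1, z2, var2):
    """Return the fused estimate and its variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Example: an accurate sensor (variance 0.01) fused with a noisy
# one (variance 0.25).
z, var = fuse(1.02, 0.01, 0.90, 0.25)
print(var)  # 1/(100 + 4), slightly below the best sensor's 0.01
```

    The improvement is tiny when one sensor dominates, which is consistent with the point above: a much noisier second sensor buys you very little.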
    Last edited: Jul 12, 2012
  6. Jul 12, 2012 #5
    I am running multiple sensors. My sensors and their outputs are as follows:

    1. IMU - linear velocity, yaw velocity and pitch velocity
    2. Gyro - yaw velocity
    3. Odometry - linear velocity and yaw velocity

    My IMU and gyro are quite accurate. As of now I am using linear velocity from the odometry, yaw velocity from the gyro, and pitch velocity from the IMU.

    I was wondering if I could put all three together to get a more accurate result, so that even if one of the sensors fails, the others can compensate.
    Also, my sensors operate asynchronously.

    My initial plan was to build two EKFs (one for the IMU data and one for the gyro data) and run them in parallel (depending on which type of sensor data comes in, the corresponding EKF is executed), using the odometry data in the prediction stage.

    But now I'm not sure that doing this is the best idea.

    Any suggestion regarding this?
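    For concreteness, the asynchronous-measurement idea above can also be collapsed into a single filter: predict to each measurement's timestamp, then apply that sensor's own measurement model. This is only a sketch (the state layout, measurement matrices, and all noise values are made up for illustration; a real IMU/odometry model would be non-linear):

```python
import numpy as np

# Single-filter sketch: state x = [linear velocity, yaw rate, pitch rate].
# Each asynchronous measurement triggers a predict to its timestamp,
# then an update with that sensor's H and R. Noise values are illustrative.
H = {
    "imu":  np.eye(3),                        # v, yaw rate, pitch rate
    "gyro": np.array([[0.0, 1.0, 0.0]]),      # yaw rate only
    "odom": np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]]),      # v and yaw rate
}
R = {"imu": np.eye(3) * 0.01,
     "gyro": np.array([[0.001]]),
     "odom": np.eye(2) * 0.05}
Q_rate = np.eye(3) * 0.1                      # process noise per second

x = np.zeros(3)
P = np.eye(3)
t_last = 0.0

def step(sensor, z, t):
    """Predict to time t (random-walk model, F = I), then update with z."""
    global x, P, t_last
    dt = t - t_last
    t_last = t
    P = P + Q_rate * dt                       # random-walk predict
    Hs, Rs = H[sensor], R[sensor]
    y = z - Hs @ x                            # innovation
    S = Hs @ P @ Hs.T + Rs
    K = P @ Hs.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ Hs) @ P

# Measurements arrive out of lock-step, each tagged with its own time:
step("odom", np.array([1.0, 0.10]), 0.02)
step("gyro", np.array([0.12]), 0.03)
step("imu",  np.array([1.05, 0.11, 0.01]), 0.05)
```

    The key point is that nothing forces the sensors onto a common clock: whichever measurement arrives next simply drives the next predict/update cycle.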

  7. Jul 12, 2012 #6
    I really can't speak to the technical issues of your situation. This is a math forum, not an engineering forum. From a mathematical perspective, the Kalman filter (usually) requires a specification of the desired final state. It's not clear to me what this is, but if it's your best sensor, you're already there, and your problem would seem to be bringing other sensors to that point if you need multiple sensors.

    A simple way to look at the filtering process is the so called Pontryagin Minimum Principle for minimizing the integral:

    [tex] J(\mu) = \int_{0}^{T} g(x, \mu) dt [/tex] where [itex]0\leq t \leq T [/itex]

    and where [itex]x(t)[/itex] is the system state which is connected to the desired state (control input) [itex]\mu(t)[/itex] by:

    [tex]\frac{dx}{dt}=f(x,\mu); x(0)=x_0[/tex]
    Last edited: Jul 12, 2012
  8. Jul 12, 2012 #7
    Thanks for the reply! I'll try implementing it and post the results later.
  9. Jul 12, 2012 #8
    You're welcome.
  10. Jul 16, 2012 #9
    There are two separate ideas here. Sensor failure can be detected without resorting to a Kalman filter. These sensors are not grossly different: merely calibrating them at an initial time under two extreme conditions of motion, pitch, or yaw, so that you can scale them to match numerically, allows you to detect failure by simply noting when the values differ by more than an arbitrary threshold that you choose. Determine a sigma from continual measurements such that 68% of the data falls within it, then just pick a confidence interval to detect the sensor giving unreasonable data.

    The remaining problem is to decide *which* sensor is giving the false data after a failure is detected. This can be something as simple as noting which sensor abruptly changed its value or, since failures are typically of the whole sensor at once, noticing which sensor is the locus of multiple discrepancies.
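    The detect-and-isolate idea above might be sketched like this (the `flag_outliers` helper, threshold, and readings are all illustrative; the sigma would come from your calibration history):

```python
import numpy as np

# Illustrative failure isolation for redundant, pre-scaled yaw-rate
# readings: flag any sensor that disagrees with the median of all
# readings by more than k sigma, where sigma comes from calibration.
def flag_outliers(readings, sigma, k=3.0):
    """readings: dict of sensor name -> scaled reading. Returns flagged names."""
    values = np.array(list(readings.values()))
    median = np.median(values)
    return [name for name, v in readings.items()
            if abs(v - median) > k * sigma]

# Two sensors agree within noise; the gyro has failed high.
print(flag_outliers({"imu": 0.101, "gyro": 0.520, "odom": 0.098},
                    sigma=0.01))  # ['gyro']
```

    With three redundant readings, the median automatically sides with the agreeing majority, so the abruptly changed sensor is the one that gets flagged.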

    As to improving the accuracy, a Kalman filter may be of use, but I haven't enough experience with them to settle the issue. I have generally found other ways that are computationally simpler to implement.

    But, as another idea: being able to skew the sampling times of your sensors in a random manner, or physically injecting noise into the signal-converting electronics, may also allow you to improve the accuracy of the measurements. Typical conversion processes truncate data at a certain bit depth (a floor operation); by generating electrical/signal noise equivalent to about half the least significant bit's analog magnitude, one can detect sub-LSB data by re-sampling and averaging. This is very useful for low-frequency signals, which in your case means when the device is moving "slowly", which is also when drift becomes quite appreciable.
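    The dither-and-average effect is easy to simulate (the signal level and sample count below are made up; note that with a floor-type converter, one full LSB of *positive* uniform dither makes the average unbiased, whereas symmetric dither would leave a half-LSB offset to subtract):

```python
import numpy as np

rng = np.random.default_rng(0)

# A quantizer that floors to 1-LSB steps cannot resolve a constant
# 0.3-LSB signal, but adding uniform dither before quantizing and
# averaging many samples recovers the sub-LSB value.
LSB = 1.0
true_signal = 0.3 * LSB
N = 10000

plain = np.floor(np.full(N, true_signal) / LSB) * LSB     # every sample is 0.0
noise = rng.uniform(0.0, LSB, N)                          # one LSB of dither
dithered = np.floor((true_signal + noise) / LSB) * LSB

print(plain.mean())     # 0.0 -- the 0.3 LSB is lost entirely
print(dithered.mean())  # close to 0.3 after averaging
```

    The averaged dithered samples converge on the true sub-LSB level, at the cost of effective bandwidth, which is why this works best for the slowly varying signals mentioned above.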

    Also, this technique is useful for reducing aliasing in sampling (e.g., near the Nyquist frequency) by randomly shifting the exact time each sample is taken by a small fraction of the sampling interval.

    Best wishes!