Extended Kalman Filter based SLAM for orientation

SUMMARY

The discussion focuses on implementing an Extended Kalman Filter (EKF) for Simultaneous Localization and Mapping (SLAM), restricted to orientation and angular rates and using a quaternion-based approach. The initial implementation produced inverted innovations for orientation states in one 180° range, driving the predicted state in the wrong direction. The cause was a missing derivative of the quaternion conjugation in the measurement Jacobian; once that factor was added, the filter behaved correctly. The MATLAB code snippets illustrate the measurement model and the Jacobians needed for the EKF update.

PREREQUISITES
  • Understanding of Extended Kalman Filter (EKF) principles
  • Familiarity with quaternion mathematics and their application in 3D rotations
  • Proficiency in MATLAB programming, particularly for implementing algorithms
  • Knowledge of SLAM concepts and their relevance in robotics
NEXT STEPS
  • Study quaternion algebra and its applications in 3D transformations
  • Learn about EKF tuning and optimization techniques for improved performance
  • Explore advanced SLAM algorithms beyond EKF, such as GraphSLAM
  • Investigate the implementation of sensor fusion techniques in SLAM systems
USEFUL FOR

Robotics engineers, algorithm developers, and researchers working on SLAM implementations or those interested in enhancing orientation estimation in robotic systems.

slamo
Dear all,

I'm trying to implement an Extended Kalman Filter based SLAM for orientation and angular rates.
To simplify things I discarded the estimation of position and linear velocities. I'm using a quaternion-based approach, with the translation vectors of 3D points/landmarks as measurements.

For tests I'm only using one landmark/measurement. Unfortunately, the filter is not behaving as expected. If the orientation state (rotation only around the z-axis) is within the range 90°–270°, or multiples thereof, everything works just fine. If the orientation is in the other 180° range, I get an innovation inverse to the correct one: the predicted state rotates in the wrong direction. The problem seems to be in the correction part of the filter; I'm pretty sure the prediction part has nothing to do with it, and the innovation vectors are also correct.
Changing the direction of the simulated rotation, or moving the landmark to another side of the robot, makes no difference.

Here are some of the formulas and MATLAB code I use:

Code:
% Predict measurement -> measurement model h
%
% Calculate translation in world coordinates (feature-pos)
dx = x(fpos);
dy = x(fpos+1);
dz = x(fpos+2);
%
% Get rotation matrix (world -> camera, via the conjugate quaternion)
R = quat2RotMat( quatConjungate(x(1:4)) );
%
zp = R * [dx; dy; dz];
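%
% zp is the predicted measurement: the landmark position expressed
% in camera coordinates; the innovation is then v = z - zp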


% Calculate H_clm for this landmark (H_c 0 ... 0 H_lm 0 ... 0 ) -> h
% derived with respect to the state
%
% components of the conjugate quaternion
qr = x(1);
qx = -x(2);
qy = -x(3);
qz = -x(4);
%
%
% dht_dqc
%
dht_dqrc = 2* [ qr -qz  qy;
                qz  qr  -qx;
               -qy  qx  qr ] * [dx; dy; dz];
%
dht_dqxc = 2* [ qx  qy  qz;
                qy -qx  -qr;
                qz  qr  -qx ] * [dx; dy; dz];
%
dht_dqyc = 2* [-qy  qx  qr;
                qx  qy  qz;
               -qr  qz  -qy ] * [dx; dy; dz];
%          
dht_dqzc = 2* [-qz -qr  qx;
                qr -qz  qy;
                qx  qy  qz ] * [dx; dy; dz];
%
% dht_dyi
%
dht_dyi = R;

% Part of the camera Hc
%
Hcy(:,1:7) = [dht_dqrc dht_dqxc dht_dqyc dht_dqzc zeros(3)];

% Part of the landmark Hy
%
Hcy(:,fpos:fpos+Nxf-1) = dht_dyi;

with

Code:
function R = quat2RotMat(q_in)
% Rotation matrix from a scalar-first quaternion [qr; qx; qy; qz]
qzqz = q_in(4)^2;
qyqy = q_in(3)^2;
qxqx = q_in(2)^2;
qrqr = q_in(1)^2;
qrqz = q_in(1)*q_in(4);
qxqy = q_in(2)*q_in(3);
qrqy = q_in(1)*q_in(3);
qxqz = q_in(2)*q_in(4);
qyqz = q_in(3)*q_in(4);
qrqx = q_in(1)*q_in(2);

R = [ qrqr+qxqx-qyqy-qzqz -2*qrqz+2*qxqy      2*qrqy+2*qxqz;
      2*qxqy+2*qrqz       qrqr-qxqx+qyqy-qzqz -2*qrqx+2*qyqz;
      -2*qrqy+2*qxqz      2*qyqz+2*qrqx       qrqr-qxqx-qyqy+qzqz ];
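
The helper quatConjungate is not quoted in the thread; consistent with the sign flips in the Jacobian code above (qx = -x(2), and so on), a minimal sketch for this scalar-first convention would be:

Code:
function q_out = quatConjungate(q_in)
% Conjugate of a scalar-first quaternion [qr; qx; qy; qz]:
% keep the scalar part, negate the vector part
q_out = [q_in(1); -q_in(2); -q_in(3); -q_in(4)];

As a quick sanity check of quat2RotMat, a unit quaternion for a 90° rotation about z should map the x-axis onto the y-axis:

Code:
q = [cosd(45); 0; 0; sind(45)];  % 90 degree rotation about z
R = quat2RotMat(q);
disp(R * [1; 0; 0]);             % expected: approximately [0; 1; 0]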

The EKF update is:

Code:
% filter gain (R here is the measurement noise covariance,
% not the rotation matrix of the snippets above)
K = PX * H' * inv(H*PX*H' + R);
 
% updated state and covariance
XX = XX + K * v;
PX = (eye(size(PX,1)) - K*H ) * PX;
 
% Normalize the quaternion
%
XX(1:4) = XX(1:4) / norm(XX(1:4));
Jnorm = eye(size(XX,1));
Jnorm(1:4,1:4) = normJac(XX(1:4));
PX = Jnorm * PX * Jnorm';
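
normJac is not quoted in the thread either; it is the Jacobian of the normalization map q -> q/norm(q), so a minimal sketch would be:

Code:
function J = normJac(q)
% Jacobian of q -> q/norm(q):
% d/dq (q/n) = eye(4)/n - q*q'/n^3, with n = norm(q)
n = norm(q);
J = (eye(4) - (q*q')/n^2) / n;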


Is there something I'm missing? I'm really stuck on this problem, so any help would be welcome.
 
Ok, I found it:

Code:
% dht_dqc
%
dht_dqc = zeros(3,4);
%
dht_dqrc = 2* [ qr -qz  qy;
                qz  qr  -qx;
               -qy  qx  qr ] * [dx; dy; dz];
%
dht_dqxc = 2* [ qx  qy  qz;
                qy -qx  -qr;
                qz  qr  -qx ] * [dx; dy; dz];
%
dht_dqyc = 2* [-qy  qx  qr;
                qx  qy  qz;
               -qr  qz  -qy ] * [dx; dy; dz];
%          
dht_dqzc = 2* [-qz -qr  qx;
                qr -qz  qy;
                qx  qy  qz ] * [dx; dy; dz];
%
dht_dqc = [dht_dqrc dht_dqxc dht_dqyc dht_dqzc] * diag([1 -1 -1 -1]);
%
%
% dht_dyi
%
dht_dyi = Rcw;   % Rcw: the world-to-camera rotation matrix (R above)

% Part of the camera Hc
%
Hcy(:,1:13) = [dht_dtc dht_dqc zeros(3) zeros(3)];

The derivative of the conjugation process was missing. Now everything works fine :)
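
For clarity, the trailing diag([1 -1 -1 -1]) factor is the chain rule at work: the measurement model rotates by the conjugate quaternion ##q_c = \operatorname{diag}(1,-1,-1,-1)\,q##, so
$$\frac{\partial h}{\partial q} = \frac{\partial h}{\partial q_c}\,\frac{\partial q_c}{\partial q} = \frac{\partial h}{\partial q_c}\,\operatorname{diag}(1,-1,-1,-1).$$
Without that factor, the vector-part columns of H carry the wrong sign, which is consistent with the filter correcting in the wrong direction.

A finite-difference check of H catches this kind of sign error quickly; a sketch, assuming a hypothetical helper predictMeas(x) that evaluates the measurement model h for a given state:

Code:
% Finite-difference check of the analytic Jacobian;
% predictMeas(x) is a hypothetical wrapper around the
% measurement model h from the first snippet
eps_fd = 1e-6;
h0 = predictMeas(x);
Hnum = zeros(3, numel(x));
for i = 1:numel(x)
    xp = x;
    xp(i) = xp(i) + eps_fd;
    Hnum(:,i) = (predictMeas(xp) - h0) / eps_fd;
end
disp(max(max(abs(Hnum - Hcy))))  % largest discrepancy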
 
