Need help with multisensor fusion for a mobile robot
I'm a beginner at ROS and I have some questions regarding multisensor fusion. I am currently working on a robot (Pioneer P3-AT) with a monocular camera and a Hokuyo LRF, without an a priori map. However, I am unclear about how to apply the proposed algorithms like EKF, Bayesian filtering, etc. I've read many journal papers on feature extraction, where vertical edges are extracted from the camera image and a segmentation technique is applied to the LRF scans.
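To make my question concrete, here is my rough mental model of the fusion step, written as a toy 1D Kalman-style update in Python (the function and variable names are my own, not from any library, and the noise values are made up):

```python
# Toy 1D fusion of two noisy Gaussian estimates of the same quantity,
# e.g. a range to a feature from the camera pipeline and from the LRF.
def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two Gaussian estimates (mean, variance) of one quantity."""
    # Fused variance: inverse variances (precisions) add.
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    # Fused mean: variance-weighted average, pulled toward the
    # more precise (lower-variance) sensor.
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

# Example: camera says 2.0 m (sigma = 0.5 m), LRF says 1.8 m (sigma = 0.05 m).
mean, var = fuse(2.0, 0.5 ** 2, 1.8, 0.05 ** 2)
print(mean, var)  # fused estimate lands close to the LRF reading
```

Is this weighted-average idea basically what EKF-based fusion does per state variable, just generalized to a full state vector with a motion model? Or am I misunderstanding it?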
It'll be awesome if I can get some explanation about which algorithms to use and whether there are any libraries available. Also, is it possible to obtain the fused data and then use the available libraries (e.g. GMapping, GridSLAM, OctoMap) to create a map, localize, and navigate?