Collaborative SLAM

Heterogeneous multi-robot SLAM tackles the problem of combining information from multiple sensors of different types, carried by multiple robots, to build a map and localize the agents within it. Although multi-robot SLAM and heterogeneous-sensor SLAM (on a single robot) have each seen substantial progress, their combination is relatively unexplored and introduces new challenges. This project studies the combined problem and explores a new method for merging maps from multiple robots, possibly built from heterogeneous sensor data. The stack leverages a global alignment module and a map-merging module based on the Generalized-ICP (GICP) algorithm to combine point clouds from different robots into a common global map. We implement the system on real robots equipped with LiDAR sensors and monocular cameras. When tested with LiDAR data, the system produces promising results, and we state the assumptions under which it should also work well with visual data.

Access full report here 🔗

Some takeaways:

● Designed and developed a four-step algorithm for merging heterogeneous sensor maps across multiple robots: sharing point clouds, feature extraction and matching, global transform computation, and map merging

● Implemented the GICP algorithm in C++ to merge LiDAR point clouds from two robots into a master map for multi-robot SLAM

Here are some results: