SLAM Subsystem

Spring Semester Implementation Details

Simulation

Milestone For SVD
We are currently using the Real-Time Appearance-Based Mapping (RTAB-Map) algorithm for semi-dense mapping. RTAB-Map was chosen for its versatility in sensor combinations and its ability to take odometry as an input. Initially, the robot’s current state was estimated by fusing the wheel odometry with the filtered IMU data in an extended Kalman filter. This approach was later dropped because wheel slip produced erroneous wheel odometry in both simulation and hardware. Currently, odometry is generated by the ICP algorithm, which registers the current point cloud against the original map cloud and uses the IMU data as an initial estimate. The point clouds are preprocessed for better registration: the point clouds from the depth camera and the 3D lidar are fused by a custom node to better register objects close to the robot, and we perform RGB-to-depth registration using RTAB-Map’s rgbd_sync nodelet. A few parameters in the RTAB-Map configuration were crucial in getting the desired performance (they are also collected in the sketch after this list):
  • Icp/PointToPlane set to true (the other option is point-to-point)
  • ICP voxel size: 0.2 m (affects runtime/publish rate)
  • Grid/RayTracing set to true (improves fill-in of the 2D grid map)
  • NeighborLinkRefining set to true (optimizes the graph based on loop closures)
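As a concrete reference, the minimal sketch below collects these settings as they could be pushed onto the ROS parameter server from Python before the rtabmap node starts. The exact parameter keys (e.g. Icp/VoxelSize for the voxel size, RGBD/NeighborLinkRefining for neighbor link refining) and the rtabmap namespace are assumptions based on RTAB-Map's naming conventions; in practice we set the equivalent values in the launch file.

```python
# Hedged sketch: the RTAB-Map settings discussed above, pushed onto the parameter
# server before the rtabmap node starts. The exact keys and the "rtabmap"
# namespace are assumptions; the equivalent values normally live in the launch file.
import rospy

rtabmap_params = {
    "Icp/PointToPlane":          "true",  # point-to-plane ICP (the alternative is point-to-point)
    "Icp/VoxelSize":             "0.2",   # 0.2 m voxel filter; trades accuracy for runtime/publish rate
    "Grid/RayTracing":           "true",  # ray tracing fills in free space in the 2D grid map
    "RGBD/NeighborLinkRefining": "true",  # refine neighbor links when optimizing the graph
}

if __name__ == "__main__":
    rospy.init_node("rtabmap_param_loader")
    for key, value in rtabmap_params.items():
        rospy.set_param("rtabmap/" + key, value)
```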
The current major challenge for the localization system is a discontinuity in the pose estimates. Because the current pose is produced by an iterative registration algorithm, there is no constraint enforcing that it remain continuous with respect to the previous pose, so the robot’s estimated pose jumps by small amounts on the map. Although the localization accuracy is within limits, this discontinuity is a concern for the planner; a simple way to quantify the jumps is sketched below.
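To make the effect measurable, the sketch below monitors consecutive pose estimates and warns when the translation between them exceeds a threshold. The odometry topic name (rtabmap/odom) and the 5 cm threshold are assumptions chosen for illustration.

```python
# Hedged sketch: flag pose discontinuities by checking the translation between
# consecutive odometry estimates. Topic name and threshold are assumptions.
import math
import rospy
from nav_msgs.msg import Odometry

JUMP_THRESHOLD = 0.05  # metres; tune to what the planner can tolerate
last_position = None

def odom_callback(msg):
    global last_position
    p = msg.pose.pose.position
    if last_position is not None:
        jump = math.sqrt((p.x - last_position.x) ** 2 +
                         (p.y - last_position.y) ** 2 +
                         (p.z - last_position.z) ** 2)
        if jump > JUMP_THRESHOLD:
            rospy.logwarn("Pose jumped %.3f m between consecutive estimates", jump)
    last_position = p

if __name__ == "__main__":
    rospy.init_node("pose_jump_monitor")
    rospy.Subscriber("rtabmap/odom", Odometry, odom_callback)
    rospy.spin()
```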
Prior Work

The algorithm we are using is Real-Time Appearance-Based Mapping (RTAB-Map), primarily because our system uses a sensor suite similar to the one this algorithm utilizes for SLAM (i.e., a laser scanner, a stereo camera, and a wheel encoder), and because it provides a semi-dense map, which fulfills our sponsor requirement. To overcome drift, we have set up the SLAM stack to map using point clouds and to localize using loop closures from the camera’s RGB-D images. We fuse the LiDAR and RealSense point clouds to observe the environment. Crucial parameters we modified include setting “subscribe_rgbd” to true, “subscribe_scan_cloud” to true, and “Grid/RayTracing” to true; a sketch of these settings follows.
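For reference, the short sketch below shows how these flags might be set from Python. Here, subscribe_rgbd and subscribe_scan_cloud are node-level ROS parameters that tell the rtabmap node which inputs to subscribe to, while Grid/RayTracing is an RTAB-Map library parameter. The rtabmap namespace is an assumption, and in our stack these values are actually set in the launch file.

```python
# Hedged sketch: the rtabmap node input flags named above. The namespace is an
# assumption; in our stack these are set in the launch file rather than in code.
import rospy

if __name__ == "__main__":
    rospy.init_node("rtabmap_io_config")
    rospy.set_param("rtabmap/subscribe_rgbd", True)        # consume the synchronized rgbd_image topic
    rospy.set_param("rtabmap/subscribe_scan_cloud", True)  # consume the fused 3D point cloud
    rospy.set_param("rtabmap/Grid/RayTracing", "true")     # fill in free space in the 2D grid map
```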

Two aspects of the sensor fusion are worth highlighting. First, the RGB and depth images are synchronized with the rtabmap_ros/rgbd_sync nodelet provided by RTAB-Map; the synchronized pair is republished as rgbd_image and later fed into RTAB-Map. Second, we wrote a new node that combines the depth point cloud from the front camera with the point cloud from the 3D lidar in order to make full use of both sensors. The combined point cloud message is republished on the ROS network, and RTAB-Map down-projects it into a grid map. A minimal sketch of such a fusion node is given below.
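As a concrete picture of the second point, here is a minimal sketch of such a fusion node, assuming rospy and approximate time synchronization. The input and output topic names are assumptions, and the real node may also carry color or intensity fields and transform the clouds into a common frame before merging them.

```python
#!/usr/bin/env python
# Hedged sketch of a point-cloud fusion node like the one described above.
# Topic names are assumptions; both clouds are assumed to already be expressed
# in (or statically transformed into) a common frame.
import rospy
import message_filters
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

class CloudFuser(object):
    def __init__(self):
        self.pub = rospy.Publisher("fused_cloud", PointCloud2, queue_size=1)
        cam_sub = message_filters.Subscriber("camera/depth/points", PointCloud2)
        lidar_sub = message_filters.Subscriber("velodyne_points", PointCloud2)
        # Approximate time sync, since the camera and lidar are not triggered together.
        sync = message_filters.ApproximateTimeSynchronizer(
            [cam_sub, lidar_sub], queue_size=5, slop=0.1)
        sync.registerCallback(self.callback)

    def callback(self, cam_cloud, lidar_cloud):
        # Concatenate the XYZ points from both clouds and republish as one cloud.
        points = list(pc2.read_points(cam_cloud, field_names=("x", "y", "z"), skip_nans=True))
        points += list(pc2.read_points(lidar_cloud, field_names=("x", "y", "z"), skip_nans=True))
        header = lidar_cloud.header  # keep the lidar stamp/frame for the fused cloud
        self.pub.publish(pc2.create_cloud_xyz32(header, points))

if __name__ == "__main__":
    rospy.init_node("cloud_fusion")
    CloudFuser()
    rospy.spin()
```

The fused cloud can then be remapped to the rtabmap node’s scan_cloud input so that RTAB-Map down-projects it into the 2D grid map.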

Below are maps we generated using the configurations described above. The obstacles are the cars already parked in the parking lot.

Hardware

We have tested our current SLAM subsystem stack on hardware with our custom PCB integrated. Since we have no access to campus this semester, the test was done in Subbu’s house: we tele-operated the Husky around the first floor while running ICP odometry as well as RTAB-Map mapping.

2D grid map