M3EDS: A Multi-sensor, Multi-platform, Multi-scenario Dataset for SLAM Systems
This dataset offers three main contributions. First, the sensor suite includes two pairs of stereo event cameras with different resolutions (640×480 and 346×260), one infrared camera, four RGB cameras, two visual-inertial sensors (VI-sensors), four mechanical LiDARs and one solid-state LiDAR, three inertial measurement units (IMUs), and two Global Navigation Satellite System / Inertial Navigation System units with real-time kinematic corrections (GNSS/INS-RTK). All sensors are calibrated and hardware-synchronized, and their data are recorded simultaneously. Second, we collected 55 sequences from multiple platforms, including a handheld device, an unmanned ground vehicle, a quadruped robot, a car, and a drone, under various challenging scenarios such as extreme illumination, aggressive motion, low texture, and urban driving. Ground-truth trajectories were captured by a motion capture system. Third, we comprehensively evaluated state-of-the-art SLAM approaches on the dataset and identified their limitations. M3EDS is available at https://neufs-ma.github.io/M3DSS/.
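As a rough illustration of how estimated trajectories can be compared against the ground truth, the sketch below computes the Absolute Trajectory Error (ATE) after a closed-form rigid alignment. It is a minimal example, not the dataset's official evaluation tooling; the array shapes, synthetic data, and function names are assumptions for demonstration only.

```python
# Minimal sketch of ATE evaluation (not the authors' released code).
# Assumes the estimated and ground-truth trajectories are already
# time-associated and given as Nx3 arrays of positions.
import numpy as np

def align_rigid(est, gt):
    """Closed-form (Umeyama-style) SE(3) alignment of est onto gt, both Nx3."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_gt).T @ (est - mu_est) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # reflection correction
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(est, gt):
    """Root-mean-square Absolute Trajectory Error after rigid alignment."""
    R, t = align_rigid(est, gt)
    err = gt - (est @ R.T + t)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

# Synthetic stand-in for one sequence: a smooth ground-truth path plus noise.
gt = np.cumsum(np.random.randn(1000, 3) * 0.01, axis=0)
est = gt + np.random.randn(1000, 3) * 0.02
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```

In practice, published benchmarks usually report this metric (often via tools such as evo) per sequence and per platform, which is the kind of per-scenario breakdown the evaluation above refers to.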