arXiv | IEEE Xplore | Website | Video
This repository is the official implementation of the paper:
Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences
Kürsat Petek*, Niclas Vödisch*, Johannes Meyer, Daniele Cattaneo, Abhinav Valada, and Wolfram Burgard.
*Equal contribution.
IEEE Robotics and Automation Letters, vol. 9, issue 11, pp. 9978-9985, November 2024.
If you find our work useful, please consider citing our paper:
@article{petek2024mdpcalib,
author={Petek, Kürsat and Vödisch, Niclas and Meyer, Johannes and Cattaneo, Daniele and Valada, Abhinav and Burgard, Wolfram},
journal={IEEE Robotics and Automation Letters},
title={Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences},
year={2024},
volume={9},
number={11},
pages={9978-9985}
}
Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent the camera-LiDAR calibration as a graph optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms including a self-driving perception car, a quadruped robot, and a UAV.
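For intuition, the two types of constraints can be written in a simplified form (the exact cost terms, parameterization, and weighting used by MDPCalib are described in the paper). Let $T_{CL}$ denote the extrinsic transform from the LiDAR to the camera frame with rotation $R_{CL}$ and translation $t_{CL}$, let $T^{C}_{i}$ and $T^{L}_{i}$ be the relative camera and LiDAR motions estimated by odometry, and let $(u_j, p^{L}_{j})$ be a predicted 2D-pixel-to-3D-point correspondence. Then, roughly,

$$
T^{C}_{i} \, T_{CL} \approx T_{CL} \, T^{L}_{i},
\qquad
\pi\!\left(K \left(R_{CL}\, p^{L}_{j} + t_{CL}\right)\right) \approx u_{j},
$$

where $K$ is the camera intrinsic matrix and $\pi(\cdot)$ is the perspective projection. The calibration is recovered by minimizing the summed residuals of both constraint types in a graph optimization.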
⚠️ Note: As of now, we have only released the coarse calibration. We will upload the refinement step upon the official public release of CMRNext.
Tested with Docker version 27.2.1 and Docker Compose version v2.29.2.
- To build the image, run `docker compose build` in the root of this repository.
- To prepare for using GUIs in the container, run `xhost +local:docker`.
- Start a container and mount the rosbags: `docker compose run -v PATH_TO_DATA:/data -it mdpcalib`
- Connect to a running container: `docker compose exec -it mdpcalib bash`
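For convenience, the full first-time workflow might look like the following shell session; `PATH_TO_DATA` is a placeholder for the host directory containing your rosbags:

```bash
# Build the Docker image (run from the root of this repository).
docker compose build

# Allow containers to open GUI windows (e.g., rviz) on the host X server.
xhost +local:docker

# Start a container and mount the local rosbag directory at /data.
docker compose run -v PATH_TO_DATA:/data -it mdpcalib

# From another terminal: attach to the already running container.
docker compose exec -it mdpcalib bash
```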
We used multiple githooks during the development of this code. You can set them up with the following steps (a combined setup sketch is shown below):
- Create a venv or conda environment. Make sure to activate it before committing.
- Install the requirements: `pip install -r requirements.txt`
- [Not yet released] Install the CMRNet requirements for pylint to work: `pip install -r src/CMRNet/requirements.txt`
- Install the pre-commit githook scripts: `pre-commit install`
Python formatter (yapf, isort) settings can be set in pyproject.toml. C++ formatter (ClangFormat) settings are in .clang-format.
This will automatically run the pre-commit hooks on every commit. You can skip this using the `--no-verify` flag, e.g., `git commit -m "Update node" --no-verify`.
To run the githooks on all files independently of a commit, use `pre-commit run --all-files`.
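As a combined example, a setup based on a plain Python venv (the environment name and creation command below are just one option and not prescribed by this repository) could look like:

```bash
# Create and activate a virtual environment (any venv or conda environment works).
python3 -m venv .venv
source .venv/bin/activate

# Install the Python requirements used by the hooks.
pip install -r requirements.txt
# [Not yet released] pip install -r src/CMRNet/requirements.txt

# Register the pre-commit githook scripts.
pre-commit install

# Optionally, run all hooks once over the entire repository.
pre-commit run --all-files
```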
In the public release of our MDPCalib, we provide instructions for running camera-LiDAR calibration on the KITTI dataset.
- Install the provided `kitti2bag` package from within the package directory: `pip install -e .`
- Download the raw "synced+rectified" and "calibration" data for an odometry sequence. The mapping is available here.
  - E.g., for sequence 00, download the residential data `2011_10_03_drive_0027`.
- Unzip the files.
- Then run: `kitti2bag -t 2011_10_03 -r 0027 raw_synced`
  - This will result in a rosbag: `kitti_2011_10_03_drive_0027_synced.bag`
- For the following instructions, we will assume that the rosbag is located at `/data/kitti/raw/`.
  - The folder can be changed in the launchers play_bag_kitti_left.launch and play_bag_kitti_right.launch.
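Putting these steps together for sequence 00, a possible session looks like the following; the working directory and archive names are assumptions based on the standard KITTI raw downloads, so adjust them to your setup:

```bash
# Assumed location for the KITTI raw data inside the container (adjust as needed).
cd /data/kitti/raw

# Unzip the "synced+rectified" drive and the matching calibration archive
# downloaded from the KITTI website.
unzip 2011_10_03_drive_0027_sync.zip
unzip 2011_10_03_calib.zip

# Convert the raw drive into a rosbag (kitti2bag must be installed, see above).
kitti2bag -t 2011_10_03 -r 0027 raw_synced

# Result: kitti_2011_10_03_drive_0027_synced.bag in the current directory.
```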
To change the configuration settings, please consult the config file. Note that this is also where we specify the name of the experiment, which creates an output folder under `/root/catkin_ws/src/mdpcalib/experiments`.
Please execute the following steps in separate terminals:
- Start a roscore:
roscore
- [Optional] Start rviz:
roscd pose_synchronizer; rviz -d rviz/combined.rviz
- [Not yet released] Launch CMRNext:
roslaunch cmrnet cmrnet_kitti.launch
- Launch the optimizer:
roslaunch optimization_utils optimizer.launch
- Launch the data synchronizer:
roslaunch pose_synchronizer pose_synchronizer_fastlo_kitti.launch
- Wait until the ORB vocabulary has been loaded.
- Play the data (left camera):
roslaunch pose_synchronizer play_bag_kitti_left.launch
- Alternatively (right camera):
roslaunch pose_synchronizer play_bag_kitti_right.launch
- [Optional] Once the fine calibration is finished, you can stop playing the data (Ctrl + C).
For academic usage, the code is released under the GPLv3 license. Components of other works are released under their original license. For any commercial purpose, please contact the authors.
In our work and experiments, we have used components from many other works. We thank the authors for open-sourcing their code. In no specific order, we list source repositories:
- CMRNext: https://github.com/robot-learning-freiburg/CMRNext (not released)
- ORB SLAM3 ROS Wrapper: https://github.com/thien94/orb_slam3_ros_wrapper
- kitti2bag: https://github.com/tomas789/kitti2bag
- FAST-LO: https://github.com/hku-mars/LiDAR_IMU_Init
This work was funded by the German Research Foundation (DFG) Emmy Noether Program grant No 468878300 and an academic grant from NVIDIA.