
MDPCalib

arXiv | IEEE Xplore | Website | Video

This repository is the official implementation of the paper:

Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences

Kürsat Petek*, Niclas Vödisch*, Johannes Meyer, Daniele Cattaneo, Abhinav Valada, and Wolfram Burgard.
*Equal contribution.

IEEE Robotics and Automation Letters, vol. 9, issue 11, pp. 9978-9985, November 2024.

Overview of MDPCalib approach

If you find our work useful, please consider citing our paper:

@article{petek2024mdpcalib,
  author={Petek, Kürsat and Vödisch, Niclas and Meyer, Johannes and Cattaneo, Daniele and Valada, Abhinav and Burgard, Wolfram},
  journal={IEEE Robotics and Automation Letters},
  title={Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences},
  year={2024},
  volume={9},
  number={11},
  pages={9978-9985}
}

📔 Abstract

Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent the camera-LiDAR calibration as a graph optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms including a self-driving perception car, a quadruped robot, and a UAV.

👩‍💻 Code

⚠️ Note: As of now, we have only released the coarse calibration. We will upload the refinement step upon the official public release of CMRNext.

💻 Development

Docker 🐋

Tested with Docker version 27.2.1 and Docker Compose version v2.29.2.

  • To build the image, run docker compose build in the root of this repository.
  • Allow GUI applications inside the container to access the host's X server: xhost +local:docker
  • Start container and mount rosbags: docker compose run -v PATH_TO_DATA:/data -it mdpcalib
  • Connect to a running container: docker compose exec -it mdpcalib bash
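
For convenience, the commands above can also be run in sequence; this is merely a consolidated restatement of the list, where PATH_TO_DATA is a placeholder for the host directory containing your rosbags.

  # Build the image (from the root of this repository).
  docker compose build

  # Allow containers to open GUI windows on the host's X server.
  xhost +local:docker

  # Start the container and mount the rosbag directory to /data.
  docker compose run -v PATH_TO_DATA:/data -it mdpcalib

  # From another terminal: attach to the already running container.
  docker compose exec -it mdpcalib bash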

Githooks ✅

We used multiple githooks during the development of this code. You can set them up with the following steps:

  1. Create a venv or conda environment. Make sure to activate it before committing.
  2. Install requirements: pip install -r requirements.txt
  3. [Not yet released] Install CMRNet requirements for pylint to work: pip install -r src/CMRNet/requirements.txt
  4. Install pre-commit githook scripts: pre-commit install

Python formatter (yapf, isort) settings can be set in pyproject.toml. C++ formatter (ClangFormat) settings are in .clang-format.
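
To apply the formatters manually instead of through the hooks, an invocation along these lines should work; the target paths are assumptions, as the repository layout is not spelled out here.

  # Assumed manual invocation; adjust the paths to the actual source directories.
  yapf --in-place --recursive .
  isort .
  clang-format -i path/to/source.cpp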

This will automatically run the pre-commit hooks on every commit. You can skip them using the --no-verify flag, e.g., git commit -m "Update node" --no-verify. To run the githooks on all files independently of committing, use pre-commit run --all-files.
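
As a concrete example of the two commands mentioned above:

  # Skip the pre-commit hooks for a single commit.
  git commit -m "Update node" --no-verify

  # Run all configured hooks over the entire repository without committing.
  pre-commit run --all-files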

💾 Data

In the public release of MDPCalib, we provide instructions for running camera-LiDAR calibration on the KITTI dataset.

Generating a KITTI rosbag 🐈
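
The repository's own conversion instructions are not reproduced in this excerpt. As a rough illustration only, the third-party kitti2bag tool can convert a KITTI raw drive into a rosbag; whether this matches the bag layout MDPCalib expects is an assumption.

  # Illustrative sketch using the community kitti2bag tool (an assumption, not necessarily
  # the conversion used by MDPCalib). Run inside the directory containing the KITTI raw data.
  pip install kitti2bag
  kitti2bag -t 2011_09_26 -r 0002 raw_synced .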

🏃 Running the calibration

To change the configuration settings, please consult the config file. Note that it also specifies the experiment name, which determines the output folder created under /root/catkin_ws/src/mdpcalib/experiments.

Please execute the following steps in separate terminals:

  1. Start a roscore: roscore
  2. [Optional] Start rviz: roscd pose_synchronizer; rviz -d rviz/combined.rviz
  3. [Not yet released] Launch CMRNext: roslaunch cmrnet cmrnet_kitti.launch
  4. Launch the optimizer: roslaunch optimization_utils optimizer.launch
  5. Launch the data synchronizer: roslaunch pose_synchronizer pose_synchronizer_fastlo_kitti.launch
  6. Wait until the ORB vocabulary has been loaded.
  7. Play the data (left camera): roslaunch pose_synchronizer play_bag_kitti_left.launch
    • Alternatively (right camera): roslaunch pose_synchronizer play_bag_kitti_right.launch
  8. [Optional] Once the fine calibration has finished, you can stop playing the data (Ctrl + C).
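
For reference, the commands from the steps above in one place, each intended for its own terminal:

  # Terminal 1: ROS master.
  roscore

  # Terminal 2 (optional): visualization.
  roscd pose_synchronizer; rviz -d rviz/combined.rviz

  # Terminal 3 (not yet released): CMRNext.
  roslaunch cmrnet cmrnet_kitti.launch

  # Terminal 4: optimizer.
  roslaunch optimization_utils optimizer.launch

  # Terminal 5: data synchronizer (wait until the ORB vocabulary has been loaded).
  roslaunch pose_synchronizer pose_synchronizer_fastlo_kitti.launch

  # Terminal 6: play the data (left camera; use play_bag_kitti_right.launch for the right camera).
  roslaunch pose_synchronizer play_bag_kitti_left.launch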

👩‍⚖️ License

For academic usage, the code is released under the GPLv3 license. Components of other works are released under their original license. For any commercial purpose, please contact the authors.

🙏 Acknowledgment

In our work and experiments, we have used components from many other works. We thank the authors for open-sourcing their code. In no specific order, we list source repositories:

This work was funded by the German Research Foundation (DFG) Emmy Noether Program grant No 468878300 and an academic grant from NVIDIA.

DFG logo
