EdgeRegNet: Edge Feature-based Multi-modal Registration Network between Images and LiDAR Point Clouds
This repository is the official implementation of [EdgeRegNet: Edge Feature-based Multi-modal Registration Network between Images and LiDAR Point Clouds]
You can set up the Python environment using the following command:
conda create -n regis2D_3D python==3.8.5 -y
conda activate regis2D_3D
pip install numpy pillow opencv-python scipy pandas matplotlib
If you need a GPU for training and testing, install the PyTorch build that matches your CUDA drivers:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
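Before training, it is worth confirming that the installed PyTorch build can actually see a CUDA device. A minimal check (the helper name `cuda_available` is ours, not part of this repository):

```python
def cuda_available() -> bool:
    """Return True only if torch imports cleanly and sees a CUDA device."""
    try:
        import torch  # absent or CPU-only builds are handled below
    except ImportError:
        return False
    return torch.cuda.is_available()

print(cuda_available())  # False means training will fall back to CPU
```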
Before the evaluation, you should download the KITTI and nuScenes datasets, or use our preprocessed dataset from this link, which is much more convenient. In addition, the pretrained model also needs to be downloaded here and saved in the ./ck folder. After that, run test.py to start the evaluation.
python test.py
After this process, KITTI.csv will be created in the root folder, containing the experiment results.
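The result file can be inspected with the standard library alone. A sketch for averaging one metric column; the column name `rot_error` is a hypothetical placeholder, since the exact schema of KITTI.csv is not documented here:

```python
import csv
import io
import statistics

def summarize_results(csv_text: str, column: str) -> float:
    """Mean of one numeric column from the evaluation CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    values = [float(row[column]) for row in reader]
    return statistics.mean(values)

# Dummy data standing in for KITTI.csv; real headers may differ.
sample = "frame,rot_error\n0,1.5\n1,2.5\n"
print(summarize_results(sample, "rot_error"))  # → 2.0
```

For the real file, replace the dummy string with `open("KITTI.csv").read()` and the column name with one that actually appears in its header.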
The details of training and testing will be provided after the paper is accepted.
MIT License
Copyright (c)