The only required software is Docker. Each SLAM method comes with its own Docker container, making setup straightforward. We recommend using VSCode with the Docker extension for an enhanced development experience. Additionally, we provide a Docker container with tools for evaluation.
When running the Dockerfiles, the first step is to navigate to the directory where the dataset is stored, as it will be mounted inside the Docker container.
The setup has been tested on Ubuntu 20.04 and 22.04 with CUDA 11 and 12, using NVIDIA GPUs including the RTX 4090, A5000, and A6000.
Coming soon.
Each method is available as a Docker container. Before starting a container, change into the directory where the dataset is stored, as it will be mounted inside the container.
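As a minimal sketch of the mounting step (the image name `rover-orb-slam3` and the `/data` mount point are placeholders, not names from this repository; use the tag you built from the method's Dockerfile), the invocation is composed into a shell variable here so the snippet stays self-contained:

```shell
# Run this from the dataset directory, so "$PWD" points at the data to mount.
IMAGE="rover-orb-slam3"      # placeholder image tag
DATASET_DIR="$PWD"

# GPU access plus the dataset mounted read-write at /data inside the container.
DOCKER_CMD="docker run -it --rm --gpus all -v ${DATASET_DIR}:/data ${IMAGE}"
echo "${DOCKER_CMD}"
```

In practice you would run the composed `docker run` command directly instead of echoing it.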
ORB-SLAM3
We are using our fork of the ORB-SLAM3 ROS Wrapper implementation.
To launch the application:

```shell
roslaunch orb_slam3_ros <launch_file> \
    do_bag:=<do_bag> bag:=<bag> \
    do_save_traj:=<do_save_traj> \
    traj_file_name:=<traj_file_name> \
    do_lc:=<enable_loop_closing>
```
- `launch_file`: Specifies the launch file to use. Choices include:
  - `rover_mono_d435i.launch`: launches monocular mode.
  - `rover_rgbd_d435i.launch`: launches RGBD mode.
- `do_bag`: (Optional) Specifies whether to replay a bag (`true` or `false`).
- `bag`: (Optional) Specifies the path to the rosbag file.
- `do_save_traj`: (Optional) Specifies whether to save the predicted trajectory (`true` or `false`).
- `traj_file_name`: (Optional) Specifies the file path where the estimated trajectory should be saved.
- `do_lc`: (Optional) Specifies whether to enable loop closing (`true` or `false`).
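Putting the options together, a filled-in invocation might look like the sketch below. The bag and trajectory paths are placeholders; only the launch file name and argument names come from the list above. The command is composed into a shell variable so the snippet stays runnable without ROS installed:

```shell
# Filled-in example: RGBD mode, replaying a bag, saving the trajectory,
# with loop closing enabled. Both paths are placeholders.
BAG="/data/garden_small/2023-08-18/d435i.bag"
TRAJ="/data/results/rover_rgbd.txt"
CMD="roslaunch orb_slam3_ros rover_rgbd_d435i.launch \
do_bag:=true bag:=${BAG} \
do_save_traj:=true traj_file_name:=${TRAJ} \
do_lc:=true"
echo "$CMD"
```

In practice you would run the `roslaunch` command directly inside the container.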
DROID-SLAM
We are using our fork of the official DROID-SLAM implementation.
Example to run the application and evaluation:

```shell
python evaluation_scripts/test_rover_d435i.py \
    --data_path /garden_small/2023-08-18 \
    --ground_truth_path /garden_small/2023-08-18/ground_truth.txt \
    --output_path ./rover_trajectories
```
- `data_path`: Specifies the base directory of the dataset sequence.
- `ground_truth_path`: Path to the ground-truth file for the selected dataset sequence.
- `output_path`: Directory where the resulting trajectories will be stored.
To test DROID-SLAM in RGBD mode (camera D435i), add the flag `--depth`; for stereo mode (camera T265), add `--stereo`.
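For instance, the RGBD variant is the monocular example above plus the `--depth` flag. The command is composed into a shell variable here so the sketch stays self-contained:

```shell
# Same paths as the example above, with --depth appended for RGBD mode.
CMD="python evaluation_scripts/test_rover_d435i.py \
--data_path /garden_small/2023-08-18 \
--ground_truth_path /garden_small/2023-08-18/ground_truth.txt \
--output_path ./rover_trajectories \
--depth"
echo "$CMD"
```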
DPV-SLAM
We are using our fork of the official DPVO / DPV-SLAM implementation.
Note: The container currently does not support visualization.
Example to run the application and evaluation:

```shell
python evaluate_rover.py \
    --base_data_path /garden_small/2023-08-18 \
    --ground_truth_path /garden_small/2023-08-18/ground_truth.txt \
    --output_path ./rover_trajectories \
    --cameras d435i \
    --trials 5 \
    --opts LOOP_CLOSURE True
```
- `base_data_path`: Specifies the base directory of the dataset sequence.
- `ground_truth_path`: Path to the ground-truth file for the selected dataset sequence.
- `output_path`: Directory where the resulting trajectories will be stored.
- `cameras`: List of cameras to be used for the evaluation. Choices: `d435i`, `t265`, or `pi_cam`.
- `trials`: The number of trials to execute for the evaluation.
- `opts`: Specifies additional optional arguments.

To enable loop closing for DPV-SLAM, the argument `--opts LOOP_CLOSURE True` has to be set.
Orbeez-SLAM
We are using our fork of the official Orbeez-SLAM implementation.
Example to run the application in monocular mode:

```shell
./build/mono_rover \
    ./Vocabulary/ORBvoc.txt \
    ./configs/Monocular/ROVER/d435i.yaml \
    "/path/to/data/d435i" \
    "/output/dir"
```
Example to run the application in RGBD mode:

```shell
./build/rgbd_rover \
    ./Vocabulary/ORBvoc.txt \
    ./configs/RGB-D/ROVER/d435i.yaml \
    "/path/to/data/d435i" \
    "/path/to/data/associations.txt" \
    "/output/dir"
```
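The layout of `associations.txt` is not specified here. ORB-SLAM-style RGBD pipelines commonly use the TUM RGB-D association format, where each line pairs an RGB frame with a depth frame by timestamp: `rgb_timestamp rgb_path depth_timestamp depth_path`. As an illustrative assumption (the timestamps and file names below are made up, not taken from the dataset):

```
1692345678.1234 rgb/1692345678.1234.png 1692345678.1236 depth/1692345678.1236.png
1692345678.1567 rgb/1692345678.1567.png 1692345678.1569 depth/1692345678.1569.png
```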
Additionally, you can have a look at the script `run_rover.sh`, which runs all of the experiments.
GO-SLAM
We are using our fork of the official GO-SLAM implementation.
Example to run the application (the mode is selected via `--mode`):

```shell
python run_rover.py <config> \
    --device <device> \
    --input_folder /path/to/input_folder \
    --output /path/to/output_folder \
    --mode <mode> \
    --only_tracking
```
- `config`: Path to the configuration file that contains the settings for the SLAM system. For the ROVER dataset, use `configs/ROVER/d435i.yaml`.
- `device`: Specifies the computing device to run the script on. Default is `cuda:0`, meaning the first GPU.
- `input_folder`: The path to the input folder containing data.
- `output`: The path where the results will be stored.
- `mode`: The SLAM mode to use. Choose from `mono`, `rgbd`, or `stereo`.
- `only_tracking`: If set, only tracking will be triggered, without mapping.
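A filled-in sketch, using the ROVER config named above with RGBD mode on the first GPU (the input and output paths are placeholders; the command is composed into a shell variable so the snippet stays self-contained):

```shell
# Placeholders: /data/garden_small/2023-08-18/d435i and /data/results/go_slam.
CMD="python run_rover.py configs/ROVER/d435i.yaml \
--device cuda:0 \
--input_folder /data/garden_small/2023-08-18/d435i \
--output /data/results/go_slam \
--mode rgbd \
--only_tracking"
echo "$CMD"
```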
GlORIE-SLAM
We are using our fork of the official GlORIE-SLAM implementation.
Example to run the application:

```shell
python run.py <config> \
    --input_dir /path/to/input_folder \
    --output_dir /path/to/output_folder \
    --only_tracking
```
- `config`: Path to the configuration file that contains the settings for the SLAM system. For the ROVER dataset, use `configs/ROVER/d435i.yaml`.
- `input_dir`: The path to the input folder containing data.
- `output_dir`: The path where the results will be stored.
- `only_tracking`: If set, only tracking will be triggered, without mapping.
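A filled-in sketch with the ROVER config named above (input and output paths are placeholders; the command is composed into a shell variable so the snippet stays self-contained):

```shell
# Placeholders: /data/garden_small/2023-08-18/d435i and /data/results/glorie_slam.
CMD="python run.py configs/ROVER/d435i.yaml \
--input_dir /data/garden_small/2023-08-18/d435i \
--output_dir /data/results/glorie_slam \
--only_tracking"
echo "$CMD"
```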
Additionally, you can have a look at the script `run_rover_all.sh`, which runs all of the experiments.
Co-SLAM
We are using our fork of the official Co-SLAM implementation.
Example to run the application:

```shell
python coslam_rover.py \
    --config /path/to/config.yaml \
    --input_folder /path/to/input_folder \
    --output /path/to/output_folder
```
- `config`: Path to the configuration file that contains the settings for the SLAM system. For the ROVER dataset, use `configs/ROVER/d435i.yaml`.
- `input_folder`: The path to the input folder containing data.
- `output`: The path where the results will be stored.
Additionally, you can have a look at the script `run_rover_all.sh`, which runs all of the experiments.
MonoGS
We are using our fork of the official MonoGS implementation.
Note: The container currently does not support visualization.
Example to run the application:

```shell
python run_slam_rover.py \
    --config /path/to/config.yaml \
    --data_path /path/to/input_folder \
    --output_path /path/to/output_folder \
    --eval
```
- `config`: Path to the configuration file that contains the settings for the SLAM system. For the ROVER dataset, use `configs/[mono/rgbd]/ROVER/d435i.yaml`.
- `data_path`: The path to the input folder containing data.
- `output_path`: The path where the results will be stored.
- `eval`: Enables evaluation of the results.
Additionally, you can have a look at the script `run_rover_all.sh`, which runs all of the experiments.
Photo-SLAM
We are using our fork of the official Photo-SLAM implementation.
Note: The container currently does not support visualization.
Example to run the application in monocular mode:

```shell
./bin/rover_mono \
    ./ORB-SLAM3/Vocabulary/ORBvoc.txt \
    ./cfg/ORB_SLAM3/Monocular/ROVER/d435i.yaml \
    ./cfg/gaussian_mapper/Monocular/ROVER/rover_mono.yaml \
    "/path/to/data/d435i" \
    "/output/dir" \
    no_viewer
```

The trailing `no_viewer` argument is optional and disables the viewer.
Example to run the application in RGBD mode:

```shell
./bin/rover_rgbd \
    ./ORB-SLAM3/Vocabulary/ORBvoc.txt \
    ./cfg/ORB_SLAM3/RGB-D/ROVER/d435i.yaml \
    ./cfg/gaussian_mapper/RGB-D/ROVER/rover_rgbd.yaml \
    "/path/to/data/d435i" \
    "/path/to/data/associations.txt" \
    "/output/dir" \
    no_viewer
```

The trailing `no_viewer` argument is optional and disables the viewer.
Additionally, you can have a look at the script `scripts/rover_all.sh`, which runs all of the experiments.
Splat-SLAM
We are using our fork of the official Splat-SLAM implementation.
Example to run the application:

```shell
python run.py <config> \
    --input_dir /path/to/input_folder \
    --output_dir /path/to/output_folder \
    --only_tracking
```
- `config`: Path to the configuration file that contains the settings for the SLAM system. For the ROVER dataset, use `configs/ROVER/d435i.yaml`.
- `input_dir`: The path to the input folder containing data.
- `output_dir`: The path where the results will be stored.
- `only_tracking`: If set, only tracking will be triggered, without mapping.
Additionally, you can have a look at the script `run_rover_all.sh`, which runs all of the experiments.
GS-ICP-SLAM
Coming soon.
If you find our work useful, please consider citing:
```bibtex
@article{schmidt2024nerfgsbenchmark,
  title={NeRF and Gaussian Splatting SLAM in the Wild},
  author={Fabian Schmidt and Markus Enzweiler and Abhinav Valada},
  year={2024},
  eprint={2412.03263},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2412.03263},
}
```