This repository provides a ROS2 package that performs stereo visual simultaneous localization and mapping (VSLAM) and estimates stereo visual inertial odometry using the Isaac Elbrus GPU-accelerated library. It takes in a time-synced stereo pair of grayscale images, along with the respective camera intrinsics, and publishes the current pose of the camera relative to its starting pose.
Elbrus is based on two core technologies: Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM).
Visual SLAM is a method for estimating a camera position relative to its start position. The method is iterative: at each iteration, it considers two consecutive input frames (stereo pairs). In both frames, it finds a set of keypoints. Matching the keypoints between these two sets makes it possible to estimate the translation and relative rotation of the camera between frames.
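The sketch below illustrates this keypoint-matching step conceptually with OpenCV on a single (monocular) image stream. It is not the Elbrus implementation: Elbrus is GPU-accelerated and uses the stereo pair to recover metric scale, whereas a monocular sketch recovers translation only up to scale. The function and variable names here are our own.

```python
# Conceptual illustration only -- not the Elbrus implementation.
import cv2
import numpy as np


def estimate_relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between two consecutive frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match keypoint descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the camera motion from the matched keypoints (RANSAC rejects outliers).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```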
Simultaneous Localization and Mapping is a method built on top of the VO predictions. It aims to improve the quality of VO estimations by leveraging the knowledge of previously seen parts of a trajectory. It detects if the current scene was seen in the past (i.e. a loop in camera movement) and runs an additional optimization procedure to tune previously obtained poses.
Along with visual data, Elbrus can optionally use Inertial Measurement Unit (IMU) measurements. It automatically falls back to the IMU when VO is unable to estimate a pose, for example, in poor lighting or when a long, featureless surface is in front of the camera. Elbrus delivers real-time tracking performance: more than 60 FPS at VGA resolution. On the KITTI benchmark, the algorithm achieves a localization drift of about 1% and an orientation error of 0.003 degrees per meter of motion. Elbrus provides robust tracking in a variety of environments and use cases: indoor, outdoor, aerial, HMD, automotive, and robotics.
To learn more about Elbrus SLAM, click here.
Update 2022-06-30: Support for ROS2 Humble
This package is designed and tested to be compatible with ROS2 Humble running on Jetson or an x86_64 system with an NVIDIA GPU.
| Platform | Hardware | Software | Notes |
| --- | --- | --- | --- |
| Jetson | Jetson Orin, Jetson Xavier | JetPack 5.0.1 DP | For best performance, ensure that power settings are configured appropriately. |
| x86_64 | NVIDIA GPU | Ubuntu 20.04+, CUDA 11.6.1+ | |
To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following these steps. This will streamline your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.
Note: All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.
This section describes the coordinate frames that are involved in the `VisualSlamNode`. The frames discussed below are oriented as follows:
- `input_base_frame`: The name of the frame used to calculate the transformation between the baselink and the left camera. The default value is empty (`''`), which means the value of `base_frame_` will be used. If `input_base_frame_` and `base_frame_` are both empty, the left camera is assumed to be in the robot's center.
- `input_left_camera_frame`: The frame associated with the left eye of the stereo camera. Note that this is not the same as the optical frame. The default value is empty (`''`), which means the left camera is in the robot's center and `left_pose_right` will be calculated from the CameraInfo message.
- `input_right_camera_frame`: The frame associated with the right eye of the stereo camera. Note that this is not the same as the optical frame. The default value is empty (`''`), which means the left and right cameras have identity rotation and are horizontally aligned, so `left_pose_right` will be calculated from CameraInfo.
- `input_imu_frame`: The frame associated with the IMU sensor (if available). It is used to calculate `left_pose_imu`.
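If these frames are not already present in your TF tree, they can be published with `static_transform_publisher`. The launch sketch below is a minimal example, not part of this package: the offsets and the frame names `base_link` and `camera_left` are placeholders for your robot's actual extrinsics.

```python
# Minimal sketch: publish a static base_link -> camera_left transform so that
# input_base_frame / input_left_camera_frame can be resolved from TF.
# The offsets below are placeholders -- use your robot's measured extrinsics.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    base_to_left_camera = Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        arguments=['--x', '0.1', '--y', '0.0', '--z', '0.3',
                   '--roll', '0.0', '--pitch', '0.0', '--yaw', '0.0',
                   '--frame-id', 'base_link',
                   '--child-frame-id', 'camera_left'])

    return LaunchDescription([base_to_left_camera])
```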
1. Set up your development environment by following the instructions here.

2. Clone this repository and its dependencies under `~/workspaces/isaac_ros-dev/src`:

    ```bash
    cd ~/workspaces/isaac_ros-dev/src
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    ```

3. Pull down a ROS Bag of sample data:

    ```bash
    cd ~/workspaces/isaac_ros-dev/src/isaac_ros_visual_slam && \
      git lfs pull -X "" -I isaac_ros_visual_slam/test/test_cases/rosbags/
    ```

4. Launch the Docker container using the `run_dev.sh` script:

    ```bash
    cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    ```

5. Inside the container, build and source the workspace:

    ```bash
    cd /workspaces/isaac_ros-dev && \
      colcon build --symlink-install && \
      source install/setup.bash
    ```

6. (Optional) Run tests to verify complete and correct installation:

    ```bash
    colcon test --executor sequential
    ```

7. Run the following launch file in the current terminal:

    ```bash
    ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam.launch.py
    ```

8. In a second terminal inside the Docker container, prepare RViz to display the output:

    ```bash
    source /workspaces/isaac_ros-dev/install/setup.bash && \
      rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/default.cfg.rviz
    ```

9. In another terminal inside the Docker container, play the following ROS bag to start the demo:

    ```bash
    source /workspaces/isaac_ros-dev/install/setup.bash && \
      ros2 bag play src/isaac_ros_visual_slam/isaac_ros_visual_slam/test/test_cases/rosbags/small_pol_test/
    ```
RViz should start displaying the point clouds and poses, as shown below:
To continue your exploration, check out the following suggested examples:
To customize your development environment, reference this guide.
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam.launch.py
```
| ROS Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `denoise_input_images` | `bool` | `false` | If enabled, input images are denoised. This can help when images are noisy because of low-light conditions. |
| `rectified_images` | `bool` | `true` | Flag to mark whether the incoming images are rectified or raw. |
| `enable_imu` | `bool` | `false` | If enabled, IMU data is used. |
| `enable_debug_mode` | `bool` | `false` | If enabled, a debug dump (image frames, timestamps, and camera info) is saved to disk at the path indicated by `debug_dump_path`. |
| `debug_dump_path` | `std::string` | `/tmp/elbrus` | The path to the directory in which to store the debug dump data. |
| `input_base_frame` | `std::string` | `""` | Name of the frame (baselink) used to calculate the transformation between the baselink and the left camera. The default is empty, which means the value of `base_frame` will be used. If `input_base_frame` and `base_frame` are both empty, the left camera is assumed to be in the robot's center. |
| `input_left_camera_frame` | `std::string` | `""` | The name of the left camera frame. The default value is empty, which means the left camera is in the robot's center and `left_pose_right` will be calculated from CameraInfo. |
| `input_right_camera_frame` | `std::string` | `""` | The name of the right camera frame. The default value is empty, which means the left and right cameras have identity rotation and are horizontally aligned. `left_pose_right` will be calculated from CameraInfo. |
| `input_imu_frame` | `std::string` | `imu` | Defines the name of the IMU frame used to calculate `left_camera_pose_imu`. |
| `gravitational_force` | `std::vector<double>` | `{0.0, 0, -9.8}` | The initial gravity vector defined in the odometry frame. If the IMU sensor is not parallel to the floor, update all the axes with appropriate values. |
| `publish_tf` | `bool` | `true` | If enabled, the node will publish the output frame hierarchy to the TF tree. |
| `map_frame` | `std::string` | `map` | The frame name associated with the map origin. |
| `odom_frame` | `std::string` | `odom` | The frame name associated with the odometry origin. |
| `base_frame` | `std::string` | `base_link` | The frame name associated with the robot. |
| `enable_observations_view` | `bool` | `false` | If enabled, the 2D feature pointcloud will be available for visualization. |
| `enable_landmarks_view` | `bool` | `false` | If enabled, the landmark pointcloud will be available for visualization. |
| `enable_slam_visualization` | `bool` | `false` | Main flag to enable or disable visualization. |
| `enable_localization_n_mapping` | `bool` | `true` | If enabled, SLAM mode is on. If disabled, only visual odometry is on. |
| `path_max_size` | `int` | `1024` | The maximum size of the buffer for pose trail visualization. |
| ROS Topic | Interface | Description |
| --- | --- | --- |
| `/stereo_camera/left/image` | `sensor_msgs/Image` | The image from the left eye of the stereo camera in grayscale. |
| `/stereo_camera/left/camera_info` | `sensor_msgs/CameraInfo` | CameraInfo from the left eye of the stereo camera. |
| `/stereo_camera/right/image` | `sensor_msgs/Image` | The image from the right eye of the stereo camera in grayscale. |
| `/stereo_camera/right/camera_info` | `sensor_msgs/CameraInfo` | CameraInfo from the right eye of the stereo camera. |
| `visual_slam/imu` | `sensor_msgs/Imu` | Sensor data from the IMU (optional). |
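Camera drivers rarely publish on exactly these topic names, so the subscriptions are usually remapped at launch. The sketch below shows one way to do this; the `/my_camera/...` names on the right-hand side are placeholders for the topics your driver actually publishes, and the plugin name should be checked against the repository's launch files.

```python
# Minimal sketch: remap the node's expected input topics to the topics a camera
# driver actually publishes. The '/my_camera/...' names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    visual_slam_node = ComposableNode(
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        name='visual_slam_node',
        remappings=[
            ('/stereo_camera/left/image', '/my_camera/left/image_rect'),
            ('/stereo_camera/left/camera_info', '/my_camera/left/camera_info'),
            ('/stereo_camera/right/image', '/my_camera/right/image_rect'),
            ('/stereo_camera/right/camera_info', '/my_camera/right/camera_info'),
            ('visual_slam/imu', '/my_camera/imu'),
        ])

    container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[visual_slam_node],
        output='screen')

    return LaunchDescription([container])
```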
| ROS Topic | Interface | Description |
| --- | --- | --- |
| `visual_slam/tracking/odometry` | `nav_msgs/Odometry` | Odometry messages for the `base_link`. |
| `visual_slam/tracking/vo_pose_covariance` | `geometry_msgs/PoseWithCovarianceStamped` | Current pose with covariance of the `base_link`. |
| `visual_slam/tracking/vo_pose` | `geometry_msgs/PoseStamped` | Current pose of the `base_link`. |
| `visual_slam/tracking/slam_path` | `nav_msgs/Path` | Trail of poses generated by SLAM. |
| `visual_slam/tracking/vo_path` | `nav_msgs/Path` | Trail of poses generated by pure visual odometry. |
| `visual_slam/status` | `isaac_ros_visual_slam_interfaces/VisualSlamStatus` | Status message for diagnostics. |
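A minimal rclpy listener for the odometry output is sketched below. The node name is arbitrary, and the topic may need a namespace prefix depending on how the node is launched.

```python
# Minimal sketch: subscribe to the visual SLAM odometry output and log the position.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomListener(Node):
    def __init__(self):
        super().__init__('vslam_odom_listener')
        self.create_subscription(
            Odometry, 'visual_slam/tracking/odometry', self.callback, 10)

    def callback(self, msg):
        p = msg.pose.pose.position
        self.get_logger().info(f'base_link at x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')


def main():
    rclpy.init()
    rclpy.spin(OdomListener())


if __name__ == '__main__':
    main()
```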
| ROS Service | Interface | Description |
| --- | --- | --- |
| `visual_slam/reset` | `isaac_ros_visual_slam_interfaces/Reset` | A service to reset the node. |
| `visual_slam/get_all_poses` | `isaac_ros_visual_slam_interfaces/GetAllPoses` | A service to get the series of poses for the path traversed. |
| `visual_slam/set_odometry_pose` | `isaac_ros_visual_slam_interfaces/SetOdometryPose` | A service to set the pose of the odometry frame. |
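The reset service can be called from a small rclpy client like the one sketched below. It assumes the `Reset` request carries no fields; check the definition with `ros2 interface show isaac_ros_visual_slam_interfaces/srv/Reset` before relying on it.

```python
# Minimal sketch: call visual_slam/reset from rclpy.
# Assumes the Reset request has no fields -- verify with `ros2 interface show`.
import rclpy
from rclpy.node import Node
from isaac_ros_visual_slam_interfaces.srv import Reset


def main():
    rclpy.init()
    node = Node('vslam_reset_client')
    client = node.create_client(Reset, 'visual_slam/reset')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('visual_slam/reset service not available')
        return
    future = client.call_async(Reset.Request())
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'Reset response: {future.result()}')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```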
| ROS Action | Interface | Description |
| --- | --- | --- |
| `visual_slam/save_map` | `isaac_ros_visual_slam_interfaces/SaveMap` | An action to save the landmarks and pose graph into a map and onto the disk. |
| `visual_slam/load_map_and_localize` | `isaac_ros_visual_slam_interfaces/LoadMapAndLocalize` | An action to load the map from the disk and localize within it given a prior pose. |
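Saving a map can be triggered from a small rclpy action client, sketched below. The goal field name (`map_url`) and the output path are assumptions; confirm the goal definition with `ros2 interface show isaac_ros_visual_slam_interfaces/action/SaveMap`.

```python
# Minimal sketch: trigger the save_map action from rclpy.
# The goal field name `map_url` is an assumption -- verify with `ros2 interface show`.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from isaac_ros_visual_slam_interfaces.action import SaveMap


def main():
    rclpy.init()
    node = Node('vslam_save_map_client')
    client = ActionClient(node, SaveMap, 'visual_slam/save_map')
    client.wait_for_server()

    goal = SaveMap.Goal()
    goal.map_url = '/tmp/vslam_map'  # hypothetical output location
    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    accepted = future.result().accepted
    node.get_logger().info('SaveMap goal accepted' if accepted else 'SaveMap goal rejected')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```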
For solutions to problems with Isaac ROS, please check here.
- If RViz is not showing the poses, check the Fixed Frame value.
- If you are seeing `Tracker is lost.` messages frequently, it could be caused by the following issues:
  - Fast motion causing motion blur in the frames.
  - Low-lighting conditions.
  - The wrong `camerainfo` is being published.
- For better performance:
  - Increase the capture framerate from the camera to yield a better tracking result.
  - If input images are noisy, you can use the `denoise_input_images` flag in the node.
| Date | Changes |
| --- | --- |
| 2022-06-30 | Support for ROS2 Humble |
| 2022-03-17 | Documentation update for new features |
| 2022-03-11 | Renamed to `isaac_ros_visual_slam` |
| 2021-11-15 | Isaac Sim HIL documentation update |
| 2021-10-20 | Initial release |