Visual-Odometry-Review

SLAM is mainly divided into two parts: the front end and the back end. The front end is visual odometry (VO), which roughly estimates the camera's motion from adjacent images and provides a good initial value for the back end. VO implementations fall into two categories according to whether features are extracted: feature-point-based methods, and direct methods that use no feature points. Feature-point-based VO is stable and relatively insensitive to illumination changes and dynamic objects.
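The feature-based branch described above rests on epipolar geometry: matched points in two adjacent frames constrain the relative camera motion (R, t) through the essential matrix E = [t]x R, via x2ᵀ E x1 = 0 for normalized image coordinates. The sketch below is illustrative only (synthetic correspondences, hypothetical names, NumPy): it builds matches from a known motion and checks that the epipolar constraint vanishes.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Known relative motion: a small rotation about the y-axis plus a translation.
theta = 0.05
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, 0.0, 1.0])

# Essential matrix for this motion.
E = skew(t) @ R

# Synthetic 3D landmarks in front of the first camera (z between 4 and 8).
rng = np.random.default_rng(0)
P = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))

# Project into both views: camera 1 at the origin, camera 2 moved by (R, t).
x1 = P / P[:, 2:3]                 # normalized coordinates, view 1
P2 = (R @ P.T).T + t
x2 = P2 / P2[:, 2:3]               # normalized coordinates, view 2

# Epipolar constraint x2^T E x1 should be (numerically) zero for every match.
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
print(residuals.max())             # a value near machine epsilon
```

In a real pipeline (e.g. ORB-SLAM or LIBVISO2), the correspondences come from matched features, E is estimated from them (typically with RANSAC), and (R, t) is recovered by decomposing E; the synthetic setup here only demonstrates the constraint itself.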

=======================================================================

GitHub: https://github.com/MichaelBeechan

CSDN: https://blog.csdn.net/u011344545

=======================================================================

OF-VO: Robust and Efficient Stereo Visual Odometry Using Points and Feature Optical Flow

Code:https://github.com/MichaelBeechan/MyStereoLibviso2

SLAMBook

Paper: 14 Lectures on Visual SLAM: From Theory to Practice

Code:https://github.com/gaoxiang12/slambook

SVO: Fast Semi-Direct Monocular Visual Odometry

Paper:http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf

Code:https://github.com/uzh-rpg/rpg_svo

Robust Odometry Estimation for RGB-D Cameras

Real-Time Visual Odometry from Dense RGB-D Images

Paper:http://www.cs.nuim.ie/research/vision/data/icra2013/Whelan13icra.pdf

Code:https://github.com/tum-vision/dvo

Parallel Tracking and Mapping for Small AR Workspaces

Paper:https://cse.sc.edu/~yiannisr/774/2015/ptam.pdf

http://www.robots.ox.ac.uk/ActiveVision/Papers/klein_murray_ismar2007/klein_murray_ismar2007.pdf

Code:https://github.com/Oxford-PTAM/PTAM-GPL

ORBSLAM

Code1:https://github.com/raulmur/ORB_SLAM2

Code2:https://github.com/raulmur/ORB_SLAM

A ROS Implementation of the Mono-Slam Algorithm

Paper:https://www.researchgate.net/publication/269200654_A_ROS_Implementation_of_the_Mono-Slam_Algorithm

Code:https://github.com/rrg-polito/mono-slam

DTAM: Dense tracking and mapping in real-time

Paper:https://ieeexplore.ieee.org/document/6126513

Code:https://github.com/anuranbaka/OpenDTAM

LSD-SLAM: Large-Scale Direct Monocular SLAM

Paper:http://pdfs.semanticscholar.org/c13c/b6dfd26a1b545d50d05b52c99eb87b1c82b2.pdf

https://vision.in.tum.de/research/vslam/lsdslam

Code:https://github.com/tum-vision/lsd_slam

PaoPaoRobot

Code:https://github.com/PaoPaoRobot

ygz-slam

Code:https://github.com/PaoPaoRobot/ygz-slam

https://github.com/gaoxiang12/ygz-stereo-inertial

https://github.com/gaoxiang12/ORB-YGZ-SLAM

https://www.ctolib.com/generalized-intelligence-GAAS.html#5-ygz-slam

MYNT-EYE

Code:https://github.com/slightech

Kintinuous

Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion

Deformation-based Loop Closure for Large Scale Dense RGB-D SLAM

Robust Real-Time Visual Odometry for Dense RGB-D Mapping

Kintinuous: Spatially Extended KinectFusion

A method and system for mapping an environment

Code:https://github.com/mp3guy/Kintinuous

ElasticFusion

ElasticFusion: Dense SLAM Without A Pose Graph

ElasticFusion: Real-Time Dense SLAM and Light Source Estimation

Paper: http://www.thomaswhelan.ie/Whelan16ijrr.pdf

http://thomaswhelan.ie/Whelan15rss.pdf

Code:https://github.com/mp3guy/ElasticFusion

MSCKF_VIO: Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight

Paper:https://arxiv.org/abs/1712.00036

Code:https://github.com/KumarRobotics/msckf_vio

LIBVISO2: C++ Library for Visual Odometry 2

Paper:http://www.cvlibs.net/software/libviso/

Code:https://github.com/srv/viso2

Stereo Visual SLAM for Mobile Robots Navigation

A constant-time SLAM back-end in the continuum between global mapping and submapping: application to visual stereo SLAM

Paper:http://mapir.uma.es/famoreno/papers/thesis/FAMD_thesis.pdf

Code:https://github.com/famoreno/stereo-vo

Combining Edge Images and Depth Maps for Robust Visual Odometry

Robust Edge-based Visual Odometry using Machine-Learned Edges (REVO)

Paper:https://graz.pure.elsevier.com/

Code:https://github.com/fabianschenk/REVO

HKUST Aerial Robotics Group

VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator

Paper:https://arxiv.org/pdf/1708.03852.pdf

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Mono

VINS-Fusion: Online Temporal Calibration for Monocular Visual-Inertial Systems

Paper:https://arxiv.org/pdf/1808.00692.pdf

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

Monocular Visual-Inertial State Estimation for Mobile Augmented Reality

Paper:https://ieeexplore.ieee.org/document/8115400

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Mobile

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

Paper:https://arxiv.org/abs/1701.08376

Code:https://github.com/HTLife/VINet

Computer Vision Group, Department of Informatics, Technical University of Munich (TUM)

DSO: Direct Sparse Odometry

Code:https://github.com/JingeTu/StereoDSO

Visual-Inertial DSO: https://vision.in.tum.de/research/vslam/vi-dso

DSO with Loop-closure and Sim(3) pose graph optimization: https://vision.in.tum.de/research/vslam/ldso

Stereo odometry based on careful feature selection and tracking

Paper:https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7324219

Code:https://github.com/Mayankm96/Stereo-Odometry-SOFT

OKVIS: Open Keyframe-based Visual-Inertial SLAM

Code:https://github.com/gaoxiang12/okvis

Trifo-VIO: Robust and Efficient Stereo Visual Inertial Odometry using Points and Lines

Paper:https://arxiv.org/pdf/1803.02403.pdf

Code:https://github.com/UMiNS/Trifocal-tensor-VIO

PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

Paper:https://www.mdpi.com/1424-8220/18/4/1159/html

Overview of visual inertial navigation

A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives:

https://ieeexplore.ieee.org/document/5423178

https://www.mdpi.com/2218-6581/7/3/45
