Artificial Intelligence | A Survey of Open-Source SLAM and Visual Odometry (SLAM, Visual Odometry)

A Survey of Visual Odometry

  • 引言
    • Visual Odometry or VSLAM
          • OF-VO:Robust and Efficient Stereo Visual Odometry Using Points and Feature Optical Flow
          • SLAMBook
          • SVO: Fast Semi-Direct Monocular Visual Odometry
          • Robust Odometry Estimation for RGB-D Cameras
          • Parallel Tracking and Mapping for Small AR Workspaces
          • ORBSLAM
          • A ROS Implementation of the Mono-Slam Algorithm
          • DTAM: Dense tracking and mapping in real-time
          • LSD-SLAM: Large-Scale Direct Monocular SLAM
          • RGBD-Odometry (Visual Odometry based RGB-D images)
          • Py-MVO: Monocular Visual Odometry using Python
          • Stereo-Odometry-SOFT
          • monoVO-python
          • DVO:Robust Odometry Estimation for RGB-D Cameras
          • Dense Visual Odometry and SLAM (dvo_slam)
          • REVO:Robust Edge-based Visual Odometry
          • xivo
          • PaoPaoRobot
          • ygz-slam
          • RTAB MAP
          • MYNT-EYE
          • Kintinuous
          • ElasticFusion
          • Co-Fusion:Real-time Segmentation, Tracking and Fusion of Multiple Objects
    • Visual Inertial Odometry or VIO-SLAM
          • R-VIO:Robocentric Visual-Inertial Odometry
          • Kimera-VIO: Open-Source Visual Inertial Odometry
          • ADVIO: An Authentic Dataset for Visual-Inertial Odometry
          • MSCKF_VIO:Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
          • LIBVISO2: C++ Library for Visual Odometry 2
          • Stereo Visual SLAM for Mobile Robots Navigation
          • Combining Edge Images and Depth Maps for Robust Visual Odometry
          • HKUST Aerial Robotics Group
          • VINS-Fusion:Online Temporal Calibration for Monocular Visual-Inertial Systems
          • Monocular Visual-Inertial State Estimation for Mobile Augmented Reality
          • Computer Vision Group, TUM (Department of Informatics, Technical University of Munich)
          • Visual-Inertial DSO
          • Stereo odometry based on careful feature selection and tracking
          • OKVIS: Open Keyframe-based Visual-Inertial SLAM
          • Trifo-VIO: Robust and Efficient Stereo Visual Inertial Odometry using Points and Lines
          • PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
          • Overview of visual inertial navigation
    • CNN-Based (Net VO or Net VSLAM)
          • VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem
          • DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
          • UnDeepVO - Implementation of Monocular Visual Odometry through Unsupervised Deep Learning
          • (ESP-VO) End-to-End, Sequence-to-Sequence Probabilistic Visual Odometry through Deep Neural Networks
    • Lidar Visual Odometry
          • Lidar-Monocular Visual Odometry
          • RGBD and LIDAR
          • cartographer

#########################################
github:https://github.com/MichaelBeechan
CSDN:https://blog.csdn.net/u011344545
欢迎star/fork:https://github.com/MichaelBeechan/Visual-Odometry-Review
#########################################

This is a survey blog post on currently open-source SLAM and visual odometry (VO) projects.

Introduction

SLAM is typically divided into two parts: the front end and the back end. The front end is the visual odometry (VO), which coarsely estimates the camera motion from adjacent images and provides a good initial value for the back end. VO implementations fall into two categories according to whether features are extracted: feature-point-based methods and direct methods, which operate without feature points. Feature-based VO is stable and relatively insensitive to illumination changes and dynamic objects.
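Whatever the front end (feature-based or direct), its output is the same: a relative rigid-body transform between adjacent frames, and the trajectory is obtained by chaining these transforms, which the back end then refines. A minimal NumPy sketch of that pose-composition step (the per-frame relative motions here are made-up placeholders, not real estimates from images):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def accumulate(relative_poses):
    """Chain per-frame relative transforms into world-frame camera poses."""
    T_world = np.eye(4)
    trajectory = [T_world.copy()]
    for T_rel in relative_poses:
        T_world = T_world @ T_rel  # compose: new pose = old pose * relative motion
        trajectory.append(T_world.copy())
    return trajectory

# Four identical "turn left 90 degrees, then move forward 1 m" steps close a square loop.
step = se3(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))
traj = accumulate([step] * 4)  # traj[-1] is back at the origin
```

In a real system each `T_rel` would come from feature matching plus essential-matrix decomposition (or from direct image alignment), and drift in this chained estimate is exactly what the back end and loop closure correct.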

Visual Odometry or VSLAM

OF-VO:Robust and Efficient Stereo Visual Odometry Using Points and Feature Optical Flow

Code:https://github.com/MichaelBeechan/MyStereoLibviso2

SLAMBook

Paper:14 Lectures on Visual SLAM: From Theory to Practice

Code:https://github.com/gaoxiang12/slambook

SVO: Fast Semi-Direct Monocular Visual Odometry

Paper:http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf

Video: http://youtu.be/2YnIMfw6bJY

Code:https://github.com/uzh-rpg/rpg_svo

Robust Odometry Estimation for RGB-D Cameras

Real-Time Visual Odometry from Dense RGB-D Images
Paper:http://www.cs.nuim.ie/research/vision/data/icra2013/Whelan13icra.pdf

Code:https://github.com/tum-vision/dvo

Parallel Tracking and Mapping for Small AR Workspaces

Paper:https://cse.sc.edu/~yiannisr/774/2015/ptam.pdf

http://www.robots.ox.ac.uk/ActiveVision/Papers/klein_murray_ismar2007/klein_murray_ismar2007.pdf

Code:https://github.com/Oxford-PTAM/PTAM-GPL

ORBSLAM

Code1:https://github.com/raulmur/ORB_SLAM2

Code2:https://github.com/raulmur/ORB_SLAM

A ROS Implementation of the Mono-Slam Algorithm

Paper:https://www.researchgate.net/publication/269200654_A_ROS_Implementation_of_the_Mono-Slam_Algorithm

Code:https://github.com/rrg-polito/mono-slam

DTAM: Dense tracking and mapping in real-time

Paper:https://ieeexplore.ieee.org/document/6126513

Code:https://github.com/anuranbaka/OpenDTAM

LSD-SLAM: Large-Scale Direct Monocular SLAM

Paper:http://pdfs.semanticscholar.org/c13c/b6dfd26a1b545d50d05b52c99eb87b1c82b2.pdf

https://vision.in.tum.de/research/vslam/lsdslam

Code:https://github.com/tum-vision/lsd_slam

RGBD-Odometry (Visual Odometry based RGB-D images)

Real-Time Visual Odometry from Dense RGB-D Images
Code:https://github.com/tzutalin/OpenCV-RgbdOdometry

Paper:http://www.computer.org/csdl/proceedings/iccvw/2011/0063/00/06130321.pdf

Py-MVO: Monocular Visual Odometry using Python

Code:https://github.com/Transportation-Inspection/visual_odometry

Video:https://www.youtube.com/watch?v=E8JK19TmTL4&feature=youtu.be

Stereo-Odometry-SOFT

MATLAB Implementation of Visual Odometry using SOFT algorithm

Code:https://github.com/Mayankm96/Stereo-Odometry-SOFT

Paper:https://ieeexplore.ieee.org/document/7324219

monoVO-python

Code1:https://github.com/uoip/monoVO-python

Code2:https://github.com/yueying/LearningVO

DVO:Robust Odometry Estimation for RGB-D Cameras

Code:https://github.com/tum-vision/dvo

https://vision.in.tum.de/data/software/dvo

Paper:https://www.researchgate.net/publication/221430091_Real-time_visual_odometry_from_dense_RGB-D_images

Dense Visual Odometry and SLAM (dvo_slam)

Code:https://github.com/tum-vision/dvo_slam

https://vision.in.tum.de/data/software/dvo

Paper:https://www.researchgate.net/publication/261353146_Dense_visual_SLAM_for_RGB-D_cameras

REVO:Robust Edge-based Visual Odometry

Combining Edge Images and Depth Maps for Robust Visual Odometry
Robust Edge-based Visual Odometry using Machine-Learned Edges
Code:https://github.com/fabianschenk/REVO

Paper:https://graz.pure.elsevier.com/

xivo

X Inertial-aided Visual Odometry

Code:https://github.com/ucla-vision/xivo

Paper:XIVO: X Inertial-aided Visual Odometry and Sparse Mapping

PaoPaoRobot

Code:https://github.com/PaoPaoRobot

ygz-slam

Code:https://github.com/PaoPaoRobot/ygz-slam

https://github.com/gaoxiang12/ygz-stereo-inertial

https://github.com/gaoxiang12/ORB-YGZ-SLAM

https://www.ctolib.com/generalized-intelligence-GAAS.html#5-ygz-slam

RTAB MAP

RTAB MAP - Real-Time Appearance-Based Mapping. Available on ROS
Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM, 2014
Appearance-Based Loop Closure Detection for Online Large-Scale and Long-Term Operation, 2013
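The loop closure in the papers above is appearance-based: each image is summarized by a global descriptor (a bag-of-visual-words histogram in RTAB-Map), compared against previously visited places, and a sufficiently high similarity score raises a loop-closure hypothesis. A toy NumPy sketch of just the scoring step, with made-up 4-bin histograms standing in for real visual-word descriptors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor histograms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_loop(query, database, threshold=0.9, exclude_recent=2):
    """Return the index of the best-matching past place if its score clears
    the threshold, else None. The most recent frames are excluded because
    consecutive frames always look alike."""
    best_idx, best_score = None, threshold
    for i, desc in enumerate(database[: len(database) - exclude_recent]):
        score = cosine_similarity(query, desc)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Made-up word histograms for four visited places; the query revisits place 0.
db = [np.array(h, float) for h in
      [[9, 1, 0, 0], [0, 8, 2, 0], [0, 1, 9, 1], [1, 0, 2, 8]]]
query = np.array([8.5, 1.2, 0.0, 0.1])
loop = detect_loop(query, db)  # index of the recognized place, here 0
```

A real system would use a vocabulary tree and Bayesian filtering over hypotheses rather than a flat threshold, but the recognize-then-close structure is the same.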

MYNT-EYE

Code:https://github.com/slightech

Kintinuous

Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion
Deformation-based Loop Closure for Large Scale Dense RGB-D SLAM
Robust Real-Time Visual Odometry for Dense RGB-D Mapping
Kintinuous: Spatially Extended KinectFusion
A method and system for mapping an environment
Code:https://github.com/mp3guy/Kintinuous

ElasticFusion

ElasticFusion: Dense SLAM Without A Pose Graph
ElasticFusion: Real-Time Dense SLAM and Light Source Estimation
Paper:http://www.thomaswhelan.ie/Whelan16ijrr.pdf http://thomaswhelan.ie/Whelan15rss.pdf

Code:https://github.com/mp3guy/ElasticFusion

Co-Fusion:Real-time Segmentation, Tracking and Fusion of Multiple Objects

Paper:http://visual.cs.ucl.ac.uk/pubs/cofusion/index.html

Visual Inertial Odometry or VIO-SLAM

R-VIO:Robocentric Visual-Inertial Odometry

(R-VIO is a robocentric visual-inertial odometry algorithm that formulates state estimation with respect to a moving local frame rather than a fixed global frame.)

Code:https://github.com/rpng/R-VIO

Paper:https://arxiv.org/abs/1805.04031

Kimera-VIO: Open-Source Visual Inertial Odometry

(Kimera-VIO is a Visual Inertial Odometry pipeline for accurate State Estimation from Stereo + IMU data.)

Code:https://github.com/MIT-SPARK/Kimera-VIO

Paper:https://arxiv.org/abs/1910.02490

Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping

ADVIO: An Authentic Dataset for Visual-Inertial Odometry

Code:https://github.com/AaltoVision/ADVIO

Paper:https://arxiv.org/abs/1807.09828

Data:https://zenodo.org/record/1476931#.XgCvYVIza00

MSCKF_VIO:Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight

Paper:https://arxiv.org/abs/1712.00036

Code:https://github.com/KumarRobotics/msckf_vio

LIBVISO2: C++ Library for Visual Odometry 2

Paper:http://www.cvlibs.net/software/libviso/

Code:https://github.com/srv/viso2

Stereo Visual SLAM for Mobile Robots Navigation

A constant-time SLAM back-end in the continuum between global mapping and submapping: application to visual stereo SLAM
Paper:http://mapir.uma.es/famoreno/papers/thesis/FAMD_thesis.pdf

Code:https://github.com/famoreno/stereo-vo

Combining Edge Images and Depth Maps for Robust Visual Odometry

Robust Edge-based Visual Odometry using Machine-Learned Edges(REVO)
Paper:https://graz.pure.elsevier.com/

Code:https://github.com/fabianschenk/REVO

HKUST Aerial Robotics Group

VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
Paper:https://arxiv.org/pdf/1708.03852.pdf

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Mono

VINS-Fusion:Online Temporal Calibration for Monocular Visual-Inertial Systems

Paper:https://arxiv.org/pdf/1808.00692.pdf

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

Monocular Visual-Inertial State Estimation for Mobile Augmented Reality

Paper:https://ieeexplore.ieee.org/document/8115400

Code:https://github.com/HKUST-Aerial-Robotics/VINS-Mobile

Computer Vision Group, TUM (Department of Informatics, Technical University of Munich)

DSO: Direct Sparse Odometry
Code:https://github.com/JingeTu/StereoDSO

Visual-Inertial DSO:https://vision.in.tum.de/research/vslam/vi-dso

DVSO:https://vision.in.tum.de/research/vslam/dvso
DSO with Loop-closure and Sim(3) pose graph optimization:https://vision.in.tum.de/research/vslam/ldso

Stereo odometry based on careful feature selection and tracking

Paper:https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7324219

Code:https://github.com/Mayankm96/Stereo-Odometry-SOFT

OKVIS: Open Keyframe-based Visual-Inertial SLAM

Code:https://github.com/gaoxiang12/okvis

Trifo-VIO: Robust and Efficient Stereo Visual Inertial Odometry using Points and Lines

Paper:https://arxiv.org/pdf/1803.02403.pdf

Code:https://github.com/UMiNS/Trifocal-tensor-VIO

PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

Paper:https://www.mdpi.com/1424-8220/18/4/1159/html

Overview of visual inertial navigation

A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives:
https://ieeexplore.ieee.org/document/5423178

https://www.mdpi.com/2218-6581/7/3/45

CNN-Based (Net VO or Net VSLAM)

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

Paper:https://arxiv.org/abs/1701.08376

Code:https://github.com/HTLife/VINet

DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks

Code:https://github.com/ildoonet/deepvo

https://github.com/sladebot/deepvo

https://github.com/themightyoarfish/deepVO

https://github.com/fshamshirdar/DeepVO (pytorch)

Paper:http://www.cs.ox.ac.uk/files/9026/DeepVO.pdf

UnDeepVO - Implementation of Monocular Visual Odometry through Unsupervised Deep Learning

Code:https://github.com/drmaj/UnDeepVO

Paper:UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning

(ESP-VO) End-to-End, Sequence-to-Sequence Probabilistic Visual Odometry through Deep Neural Networks


Lidar Visual Odometry

Lidar-Monocular Visual Odometry

Code:https://github.com/johannes-graeter/limo

Paper:https://arxiv.org/pdf/1807.07524.pdf

RGBD and LIDAR

Google’s Cartographer. Available on ROS

Other open-source projects:

DynaSLAM: A SLAM system robust in dynamic environments for monocular, stereo and RGB-D setups

openvslam: A Versatile Visual SLAM Framework

cartographer

Code:https://github.com/googlecartographer/cartographer

Paper:https://google-cartographer.readthedocs.io/en/latest/
