This article is adapted from a WeChat public account post by 吴桐wutong.
Overview of open-source code
| Name | Sensor type | Integration type | Filter method | Notes |
|---|---|---|---|---|
| RTKLIB | G | - | KF | GAMP, rtklibexplorer; https://www.rtklib.com/ |
| GPSTK | G | - | KF | https://github.com/SGL-UT/GPSTk |
| BNC | G | - | KF | ppp_wizard |
| KF_GINS | G, I | loose | KF | OB_GINS; https://github.com/i2Nav-WHU/KF-GINS/blob/main/README_CN.md |
| PSINS | G, I | tight | KF | http://www.psins.org.cn |
| OB_GINS | G, I | loose | graph optimization | https://github.com/i2Nav-WHU/OB_GINS |
| igNav | G, I | tight | graph optimization | rtklib; https://github.com/Erensu/ignav |
| LOAM | 3D LiDAR | - | - | https://github.com/laboshinl/loam_velodyne; https://github.com/RobustFieldAutonomyLab/LeGO-LOAM |
| LIO-SAM | G, I, 3D LiDAR | - | graph optimization | https://github.com/smilefacehh/LIO-SAM-DetailedNote |
| SVO | MV | image only | - | https://github.com/uzh-rpg/rpg_svo; https://rpg.ifi.uzh.ch/svo_pro.html |
| LSD-SLAM | MV, SV | image only | KF | https://github.com/tum-vision/lsd_slam; https://github.com/apesIITM/lsd_slam_stereo |
| ORB-SLAM | MV | image only | feature extraction | https://github.com/raulmur/ORB_SLAM |
| ORB-SLAM2 | MV, SV, RGB-D | image only | Bundle Adjustment | https://github.com/electech6/ORB_SLAM2_detailed_comments; https://github.com/raulmur/ORB_SLAM2 |
| ORB-SLAM3 | MV, SV, RGB-D, I | tight VI | graph optimization | https://github.com/electech6/ORB_SLAM3_detailed_comments; https://github.com/UZ-SLAMLab/ORB_SLAM3 |
| VINS-Mono | MV, I | tight VI | graph optimization | https://github.com/HKUST-Aerial-Robotics/VINS-Mono |
| VINS-Fusion | G, MV, SV, I | loose G, tight VI | graph optimization | https://github.com/HKUST-Aerial-Robotics/VINS-Fusion |
| GVINS | G, MV, SV, I | tight G, tight VI | graph optimization | https://github.com/HKUST-Aerial-Robotics/GVINS |
| IC_GVINS | G, V, I | tight G, tight VI, loose GI | graph optimization | https://github.com/i2Nav-WHU/IC-GVINS |
| InGVIO | G, V, I | tight | Invariant EKF | https://github.com/ChangwuLiu/InGVIO |
| OpenVINS | MV, SV, I | tight VI | MSCKF | https://github.com/rpng/open_vins; https://docs.openvins.com/ |

(G: GNSS; I: IMU; MV: monocular camera; SV: stereo camera; V: camera; LiDAR: 2D/3D laser scanner)
In the first half of the table, the integration type is defined with respect to GNSS: if the raw GNSS observations are used, the integration is classified as tight. The "graph optimization" entries in the filter-method column are, at their core, (nonlinear) least squares, although graph optimization is more than plain least squares. In addition, tight coupling for visual-inertial odometry (VIO) means that raw visual features and IMU measurements are fused jointly in a single estimator, rather than fusing independently computed pose estimates.
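As a minimal sketch of what that means in practice (generic notation, not tied to any particular project above), the graph-optimization back ends in the table all solve a nonlinear least-squares problem of the form

$$
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \sum_{k} \big\| r_k(\mathcal{X}, z_k) \big\|^{2}_{\Sigma_k^{-1}},
$$

where each residual $r_k$ compares a measurement $z_k$ (an IMU preintegration term, a visual reprojection, a GNSS pseudorange, a loop closure, or a prior) with the states $\mathcal{X}$ it constrains, weighted by its covariance $\Sigma_k$. What goes beyond plain least squares is the robust kernels, the marginalization of old states, and the sparse solvers that exploit the graph structure.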
Since the author is not very familiar with CV and SLAM, the classification of some of the VSLAM open-source projects may not be entirely accurate; comments and corrections from peers are welcome.
Detailed introductions of each project follow.
An Open Source Program Package for GNSS Positioning
RTKLIB is an open source program package for standard and precise positioning with GNSS (global navigation satellite system). RTKLIB consists of a portable program library and several APs (application programs) utilizing the library.
GPSTk has been renamed GNSSTk and split on GitHub into two projects, GNSSTK (libraries) and GNSSTK-APPS (applications). The primary goals of the GPSTk project are to:
The GPSTk core library provides many of the models and algorithms found in GNSS textbooks and classic papers, such as solving for the user position or estimating atmospheric refraction. Common data formats such as RINEX are also supported.
The GPSTk library provides several categories of functions:
- GPS time: conversions between time representations such as MJD, GPS week and seconds of week, and many others (see the sketch after this list).
- Ephemeris calculations: satellite position and clock interpolation for both broadcast and precise ephemerides.
- Atmospheric delay models: including ionospheric and tropospheric models.
- Position solutions: including an implementation of a receiver autonomous integrity monitoring (RAIM) algorithm.
- Mathematics: including matrix and vector implementations, as well as interpolation and numerical integration.
- GNSS data structures: observation data structures indexed by epoch, satellite, source, and observation type. Appropriate processing classes are also provided, including a complete Precise Point Positioning (PPP) processing chain.
- Application framework: including command-line option handling, interactive help, and file-system utilities.
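As a small illustration of the kind of time conversion such a library provides, here is a plain C++ sketch (this is not the GPSTk/GNSSTk API; the function name is made up for this example):

```cpp
// A small sketch of a GPS-week/seconds-of-week to MJD conversion
// (plain C++, not the GPSTk/GNSSTk API). GPS time starts at the epoch
// 1980-01-06 00:00:00, which corresponds to MJD 44244.
#include <cstdio>

// Convert GPS week and seconds of week to Modified Julian Date (MJD) in the
// continuous GPS time scale (leap seconds are deliberately ignored here).
double gpsToMjd(int week, double secondsOfWeek) {
    const double kGpsEpochMjd = 44244.0;  // MJD of 1980-01-06
    return kGpsEpochMjd + 7.0 * week + secondsOfWeek / 86400.0;
}

int main() {
    // Example: GPS week 2238, 388800 s into the week (Thursday, 12:00 GPS time).
    std::printf("MJD = %.6f\n", gpsToMjd(2238, 388800.0));
    return 0;
}
```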
BNC is software built specifically for real-time positioning. Many people use it as the basis for further development of real-time precise point positioning software.
The BKG Ntrip Client (BNC) is an Open Source multi-stream client program designed for a variety of real-time GNSS applications. It was primarily designed for receiving data streams from any Ntrip-supporting broadcaster. The program handles the HTTP communication and transfers received GNSS data to a serial or IP port, feeding networking software or a DGPS/RTK application. It can compute a real-time Precise Point Positioning (PPP) solution from RTCM streams or RINEX files. In recent years, BNC has been enriched with RINEX quality-check and editing functions. You can run BNC with a GUI as well as in batch-processing mode.
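To make the "HTTP communication" part concrete, here is a minimal sketch of the Ntrip (version 1) request a client such as BNC sends to a caster. The host, mountpoint, and credentials are placeholders, and error handling is reduced to the bare minimum; this is an assumption-laden illustration, not BNC code.

```cpp
// Minimal Ntrip v1 client sketch (POSIX sockets): connect to a caster,
// request a mountpoint, and print the beginning of the reply.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    const char *host = "caster.example.com";  // hypothetical Ntrip caster
    const char *port = "2101";                // registered Ntrip port
    addrinfo hints{}, *res = nullptr;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
    freeaddrinfo(res);

    // Ntrip v1 is a plain HTTP GET on the mountpoint with Basic authentication.
    const std::string request =
        "GET /MOUNTPOINT HTTP/1.0\r\n"
        "User-Agent: NTRIP minimal-example/1.0\r\n"
        "Authorization: Basic dXNlcjpwYXNz\r\n"   // base64("user:pass")
        "\r\n";
    send(fd, request.data(), request.size(), 0);

    // The caster answers "ICY 200 OK" and then streams binary RTCM messages,
    // which a client like BNC decodes, logs, or forwards to an RTK/PPP engine.
    char buf[1024];
    const ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) { buf[n] = '\0'; std::printf("%s\n", buf); }
    close(fd);
    return 0;
}
```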
The PSINS (Precise Strapdown Inertial Navigation System) website mainly presents the algorithm principles and software implementation of high-precision strapdown inertial navigation and its integrated navigation systems. It is developed by Prof. Yan of the Inertial Technology group, School of Automation, Northwestern Polytechnical University, and the core Matlab & C++ code is fully open source. The site also provides a wealth of raw inertial data and related learning materials, with the stated aim of "making professional, practical strapdown INS algorithms no longer a problem". The author does his best to ensure the correctness, completeness, and reliability of the code and data, but makes no promise that they will always work when ported into production products.
The PSINS toolbox is mainly used for data processing and algorithm verification/development of strapdown inertial navigation systems.
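For a flavor of what such a toolbox computes, here is a minimal sketch of a single strapdown attitude-update step (illustrative only, not PSINS code; the function name is made up, and Earth rotation, transport rate, and coning compensation are omitted):

```cpp
// One strapdown attitude-update step using a gyro angular increment.
#include <Eigen/Dense>

// q_nb: body-to-navigation attitude quaternion; dtheta: gyro angular increment
// (rad) accumulated over one IMU sampling interval, expressed in the body frame.
Eigen::Quaterniond attitudeUpdate(const Eigen::Quaterniond &q_nb,
                                  const Eigen::Vector3d &dtheta) {
    const double angle = dtheta.norm();
    Eigen::Quaterniond dq;
    if (angle > 1e-12) {
        dq = Eigen::Quaterniond(Eigen::AngleAxisd(angle, dtheta / angle));
    } else {
        // Small-angle approximation: dq ~ [1, dtheta/2]
        dq = Eigen::Quaterniond(1.0, 0.5 * dtheta.x(), 0.5 * dtheta.y(),
                                0.5 * dtheta.z());
    }
    // Right-multiplication: the increment is a rotation of the body frame.
    return (q_nb * dq).normalized();
}
```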
All three projects are open-source multi-sensor fusion codebases from the i2Nav (multi-source intelligent navigation) group at Wuhan University. Somewhat regrettably, all of them fuse GNSS data in loosely coupled form. KF_GINS and OB_GINS implement EKF-based and graph-optimization-based processing respectively; they share the same inputs and outputs, so the two codebases can be read side by side and used to cross-check each other. IC-GVINS, because it adds vision, likewise needs to run on ROS, but its code still has a lot in common with OB_GINS.
KF-GINS is GNSS/INS integrated navigation software based on an extended Kalman filter. It implements the classic integration of GNSS positions with IMU data; the algorithm follows the lecture notes of Prof. Niu Xiaoji and Dr. Chen Qijin's course "Principles of Inertial Navigation and GNSS/INS Integrated Navigation" and serves as companion material for that course. The software is written in C++ and uses CMake to manage the project. The main features of KF-GINS are:
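A rough sketch of the loose-coupling GNSS position update performed by this kind of error-state EKF is given below (illustrative only, not KF-GINS source code; the assumed state ordering, with position errors in the first three slots of a 15-dimensional error state, is just for this example):

```cpp
// Loose-coupling EKF update: compare the GNSS position fix against the
// INS-predicted position and correct the error state.
#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::Vector3d;
using Eigen::VectorXd;

// dx: error-state correction (output); P: error-state covariance (updated in place)
void gnssPositionUpdate(VectorXd &dx, MatrixXd &P,
                        const Vector3d &gnss_pos, const Vector3d &ins_pos,
                        const Eigen::Matrix3d &R_gnss) {
    const int n = static_cast<int>(P.rows());            // e.g. 15 error states
    MatrixXd H = MatrixXd::Zero(3, n);
    H.block<3, 3>(0, 0) = Eigen::Matrix3d::Identity();   // observe position errors

    const Vector3d innovation = gnss_pos - ins_pos;      // GNSS fix minus INS prediction
    const MatrixXd S = H * P * H.transpose() + R_gnss;
    const MatrixXd K = P * H.transpose() * S.inverse();

    dx = K * innovation;                                 // fed back into the INS states
    P = (MatrixXd::Identity(n, n) - K * H) * P;
}
```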
Optimization-Based GNSS/INS Integrated Navigation System
We open-source OB_GINS, an optimization-based GNSS/INS integrated navigation system. The main features of OB_GINS are as follows:
ignav is a tightly coupled GNSS/INS algorithm built on top of rtklib. For readers who are already familiar with rtklib and want to extend their skills into GNSS/INS integrated navigation, ignav is a good choice. It implements essentially all of the common integration algorithms, including but not limited to tight coupling with RTK, odometer aiding, and non-holonomic constraints. Moreover, ignav's coding style follows rtklib's, and it comes with thorough comments and documentation, which makes it very friendly to beginners.
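For reference, the non-holonomic constraint (NHC) mentioned above is usually introduced as a pseudo-measurement on the body-frame velocity of a land vehicle (generic notation, not ignav's exact formulation):

$$
\mathbf{v}^{b} = \mathbf{C}_{n}^{b}\,\mathbf{v}^{n}, \qquad
z_{\mathrm{NHC}} = \begin{bmatrix} v^{b}_{y} \\ v^{b}_{z} \end{bmatrix} \approx \mathbf{0},
$$

i.e., with the body x-axis pointing forward, the vehicle is assumed not to slip sideways or leave the ground, so the lateral and vertical body-frame velocity components are observed as zero.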
GitHub description:
IGNAV is an INS/GNSS integrated navigation algorithm library developed on the basis of RTKLIB and written in C. Its main functions include:
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
A real-time lidar-inertial odometry package. We strongly recommend the users read this document thoroughly and test the package with the provided dataset first.
Laser Odometry and Mapping (Loam) is a realtime method for state estimation and mapping using a 3D lidar.
LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
This repository contains code for a lightweight and ground-optimized lidar odometry and mapping (LeGO-LOAM) system for ROS-compatible UGVs. The system takes in point clouds from a Velodyne VLP-16 lidar (placed horizontally) and optional IMU data as inputs. It outputs 6D pose estimation in real time.
An updated lidar-inertial odometry package, LIO-SAM, has been open-sourced and is available for testing.
Semi-direct Visual Odometry
SVO: Fast Semi-Direct Monocular Visual Odometry
C. Forster, M. Pizzoli, D. Scaramuzza, SVO: Fast Semi-Direct Monocular Visual Odometry, IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.
We propose a semi-direct monocular visual odometry algorithm that is more precise, more robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need for costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which yields sub-pixel accuracy at high frame rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, resulting in fewer outliers and more reliable points. Precise, high-frame-rate motion estimation brings increased robustness in scenes with little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state estimation in GPS-denied environments, running at 55 frames per second on an onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.
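The "direct" part of that pipeline can be summarized (in generic notation, not the paper's exact formulation) as estimating the relative pose by minimizing a photometric residual:

$$
\hat{T}_{k,k-1} = \arg\min_{T}\sum_{i}\big\| I_k\big(\pi\big(T\,\pi^{-1}(\mathbf{u}_i, d_i)\big)\big) - I_{k-1}(\mathbf{u}_i) \big\|^{2},
$$

where $\mathbf{u}_i$ are pixels with known depth $d_i$ in the previous frame, $\pi$ is the camera projection, and $T$ is the relative pose; no descriptors are matched, which is why the front end stays fast.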
What is SVO? SVO uses a semi-direct paradigm to estimate the 6-DOF motion of a camera system from both pixel intensities (direct) and features (without the necessity for time-consuming feature extraction and matching procedures), while achieving better accuracy by directly using the pixel intensities.
What does SVO Pro include? SVO Pro offers the following functionalities:
Robotics and Perception Group, University of Zurich http://rpg.ifi.uzh.ch/
LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it directly operates on image intensities both for tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated in the form of semi-dense depth maps, obtained by filtering over many pixel-wise stereo comparisons. We then build a Sim(3) pose graph of keyframes, which allows building scale-drift-corrected, large-scale maps including loop closures. LSD-SLAM runs in real time on a CPU, and even on a modern smartphone.
We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at high frame rate on standard CPUs. In contrast to sparse interest-point based methods, our approach aligns images directly based on the photoconsistency of all high-contrast pixels, including corners, edges and high-texture areas. It concurrently estimates the depth at these pixels from two types of stereo cues: static stereo through the fixed-baseline stereo camera setup as well as temporal multi-view stereo exploiting the camera motion. By incorporating both disparity sources, our algorithm can even estimate depth of pixels that are under-constrained when only using fixed-baseline stereo. Using a fixed baseline, on the other hand, avoids the scale drift that typically occurs in pure monocular SLAM. We furthermore propose a robust approach to enforce illumination invariance, capable of handling aggressive brightness changes between frames, greatly improving the performance in realistic settings. In experiments, we demonstrate state-of-the-art results on stereo SLAM benchmarks such as KITTI or challenging datasets from the EuRoC Challenge 3 for micro aerial vehicles.
ORB-SLAM: a Versatile and Accurate Monocular SLAM System
ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras
ORB-SLAM is a versatile and accurate Monocular SLAM solution able to compute in real-time the camera trajectory and a sparse 3D reconstruction of the scene in a wide variety of environments, ranging from small hand-held sequences to a car driven around several city blocks. It is able to close large loops and perform global relocalisation in real-time and from wide baselines.
See our project webpage: http://webdiis.unizar.es/~raulmur/orbslam/
ORB-SLAM2 is a real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time. We provide examples to run the SLAM system in the KITTI dataset as stereo or monocular, in the TUM dataset as RGB-D or monocular, and in the EuRoC dataset as stereo or monocular. We also provide a ROS node to process live monocular, stereo or RGB-D streams. The library can be compiled without ROS. ORB-SLAM2 provides a GUI to change between a SLAM Mode and Localization Mode, see section 9 of this document.
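The bundle adjustment at the core of the ORB-SLAM family jointly refines keyframe poses and map points by minimizing a robustified reprojection error, sketched here in generic notation:

$$
\{\hat{T}_i,\hat{\mathbf{X}}_j\} = \arg\min_{T_i,\mathbf{X}_j}\sum_{(i,j)}\rho\Big(\big\|\mathbf{u}_{ij}-\pi(T_i\mathbf{X}_j)\big\|^{2}_{\Sigma_{ij}^{-1}}\Big),
$$

where $\mathbf{u}_{ij}$ is the observation of map point $\mathbf{X}_j$ in keyframe $i$, $T_i$ the keyframe pose, $\pi$ the camera projection, and $\rho$ a robust kernel such as Huber.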
[Stereo and RGB-D] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017. PDF.
Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM
ORB-SLAM3 is the first real-time SLAM library able to perform Visual, Visual-Inertial and Multi-Map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. In all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate.
This software is based on ORB-SLAM2
[ORB-SLAM3] Carlos Campos, Richard Elvira, Juan J. Gómez Rodríguez, José M. M. Montiel and Juan D. Tardós, ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM, IEEE Transactions on Robotics 37(6):1874-1890, Dec. 2021. PDF.
[IMU-Initialization] Carlos Campos, J. M. M. Montiel and Juan D. Tardós, Inertial-Only Optimization for Visual-Inertial Initialization, ICRA 2020. PDF
OpenVINS: A Research Platform for Visual-Inertial Estimation
Welcome to the OpenVINS project! The OpenVINS project houses some core computer vision code along with a state-of-the art filter-based visual-inertial estimator. The core filter is an Extended Kalman filter which fuses inertial information with sparse visual feature tracks. These visual feature tracks are fused leveraging the Multi-State Constraint Kalman Filter (MSCKF) sliding window formulation which allows for 3D features to update the state estimate without directly estimating the feature states in the filter. Inspired by graph-based optimization systems, the included filter has modularity allowing for convenient covariance management with a proper type-based state system. Please take a look at the feature list below for full details on what the system supports.
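Concretely, the MSCKF update mentioned here works roughly as follows (generic notation): the stacked residual of a triangulated feature is linearized as

$$
\mathbf{r} \approx H_x\,\delta\mathbf{x} + H_f\,\delta\mathbf{f} + \mathbf{n},
$$

and is then projected onto the left null space of $H_f$, i.e. multiplied by $N^{\top}$ with $N^{\top}H_f = 0$, giving $\mathbf{r}' = N^{\top}H_x\,\delta\mathbf{x} + N^{\top}\mathbf{n}$. The projected residual constrains only the sliding-window clone states $\delta\mathbf{x}$, so the feature itself never has to be added to the filter state.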
(Code structure diagram)
A Robust and Versatile Monocular Visual-Inertial State Estimator
VINS-Mono is a real-time SLAM framework for monocular visual-inertial systems. It uses an optimization-based sliding-window formulation to provide high-accuracy visual-inertial odometry. It features efficient IMU preintegration with bias correction, automatic estimator initialization, online extrinsic calibration, failure detection and recovery, loop detection, global pose-graph optimization, map merging, pose-graph reuse, online temporal calibration, and rolling-shutter support. VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it is also capable of providing accurate localization for AR applications. The code runs on Linux and is fully integrated with ROS. For an iOS mobile implementation, see VINS-Mobile.
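A rough sketch of the sliding-window objective such a system minimizes (generic notation, not copied from the VINS-Mono paper) is

$$
\min_{\mathcal{X}}\;\big\|\mathbf{r}_{p}-H_{p}\mathcal{X}\big\|^{2}
+\sum_{k}\big\|\mathbf{r}_{\mathcal{B}}(\hat{\mathbf{z}}_{k,k+1},\mathcal{X})\big\|^{2}_{\Sigma_{k}}
+\sum_{(l,j)}\rho\Big(\big\|\mathbf{r}_{\mathcal{C}}(\hat{\mathbf{z}}_{l,j},\mathcal{X})\big\|^{2}_{\Sigma_{l,j}}\Big),
$$

i.e., a marginalization prior, IMU preintegration residuals between consecutive frames in the window, and robustified visual reprojection residuals, all over the window states $\mathcal{X}$ (poses, velocities, IMU biases, feature inverse depths, and extrinsics).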
VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator, Tong Qin, Peiliang Li, Zhenfei Yang, Shaojie Shen, IEEE Transactions on Robotics.
An optimization-based multi-sensor state estimator
VINS-Fusion is an optimization-based multi-sensor state estimator that provides accurate self-localization for autonomous applications (drones, cars, and AR/VR). VINS-Fusion is an extension of VINS-Mono that supports multiple visual-inertial sensor types (monocular camera + IMU, stereo camera + IMU, even stereo cameras only). We also show a toy example of fusing VINS with GPS. Features:
Qin, T., Cao, S., Pan, J., & Shen, S. (2019, January 11). A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors. arXiv. https://doi.org/10.48550/arXiv.1901.03642
Qin, T., Pan, J., Cao, S., & Shen, S. (2019, January 11). A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors. arXiv. https://doi.org/10.48550/arXiv.1901.03638
GVINS is the latest multi-sensor (GNSS/INS/camera) fusion open-source project from the HKUST Aerial Robotics Group that uses raw GNSS observations. It uses pseudorange and Doppler measurements in a joint graph optimization together with the INS and vision, and can provide continuous, smooth 6-DoF poses in all scenarios.
GVINS is a non-linear optimization based system that tightly fuses GNSS raw measurements with visual and inertial information for real-time and drift-free state estimation. By incorporating GNSS pseudorange and Doppler shift measurements, GVINS is capable of providing smooth and consistent 6-DoF global localization in complex environments. The system framework and VIO part are adapted from VINS-Mono.
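For orientation, the raw GNSS measurements referred to here follow the standard models (generic notation; inter-system and hardware biases are omitted):

$$
\rho_r^{s}=\big\|\mathbf{p}_r-\mathbf{p}^{s}\big\|+c\,(\delta t_r-\delta t^{s})+T_r^{s}+I_r^{s}+\varepsilon_{\rho},\qquad
\dot{\rho}_r^{s}=(\mathbf{v}_r-\mathbf{v}^{s})^{\top}\mathbf{e}_r^{s}+c\,(\delta\dot{t}_r-\delta\dot{t}^{s})+\varepsilon_{\dot{\rho}},
$$

where $\mathbf{p}_r,\mathbf{v}_r$ are the receiver position and velocity, $\mathbf{p}^{s},\mathbf{v}^{s}$ those of the satellite, $\delta t$ the clock errors, $T$ and $I$ the tropospheric and ionospheric delays, and $\mathbf{e}_r^{s}$ the unit line-of-sight vector; these residuals enter the factor graph alongside the visual and IMU factors.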
Our system contains the following features:
A Robust, Real-time, INS-Centric GNSS-Visual-Inertial Navigation System
Visual navigation systems are susceptible to complex environments, while inertial navigation systems (INS) are not affected by external factors. Hence, we present IC-GVINS, a robust, real-time, INS-centric global navigation satellite system (GNSS)-visual-inertial navigation system to fully utilize the INS advantages. The Earth rotation has been compensated in the INS to improve the accuracy of high-grade inertial measurement units (IMUs). To promote the system robustness in high-dynamic conditions, the precise INS information is employed to assist the feature tracking and landmark triangulation. With a GNSS-aided initialization, the IMU, visual, and GNSS measurements are tightly fused in a unified world frame within the factor graph optimization framework.
Authors: Hailiang Tang, Xiaoji Niu, and Tisheng Zhang from the Integrated and Intelligent Navigation (i2Nav) Group, Wuhan University.
Related Paper:
Xiaoji Niu, Hailiang Tang, Tisheng Zhang, Jing Fan, and Jingnan Liu, “IC-GVINS: A Robust, Real-time, INS-Centric GNSS-Visual-Inertial Navigation System,” IEEE Robotics and Automation Letters, 2022.
InGVIO is a multi-sensor fusion project recently open-sourced by Tsinghua University. It is based on an invariant EKF and tightly fuses GNSS pseudorange and Doppler measurements with inertial data and monocular/stereo vision. According to the paper, the invariant EKF delivers very competitive results in both accuracy and computational load compared with current graph-optimization-based and EKF-based algorithms. A fixed-wing airborne dataset is released together with the code; see the project's GitHub page for the download link. Since vision is involved, ROS is required. The good news is that InGVIO uses the same GNSS data structures (gnss_comm) as GVINS, so they only need to be learned once.
InGVIO is an invariant filter approach for fusing monocular/stereo camera, IMU, and raw GNSS measurements, including pseudoranges and Doppler shifts. InGVIO is intrinsically consistent under the conditional infinitesimal invariance of the GNSS-visual-inertial system. InGVIO has the following key features: it is fast, thanks to decoupled IMU propagation, a key-frame marginalization strategy, and the absence of SLAM features; accurate, thanks to intrinsic consistency maintenance; and it has better convergence properties than "naive" EKF-based filters.
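As a brief note on what "invariant" refers to here (generic notation, not InGVIO's exact state definition): for a state $X$ living on a matrix Lie group, an invariant EKF tracks a group error such as the right-invariant error

$$
\eta = \hat{X}\,X^{-1},
$$

whose linearized propagation is, for group-affine dynamics, independent of the state estimate itself; this property is behind the consistency and convergence advantages claimed above.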
An invariant filter for visual-inertial-raw GNSS fusion.
Paper: InGVIO: A Consistent Invariant Filter For Fast and High-Accuracy GNSS-Visual-Inertial Odometry. Authors: Changwu Liu, Chen Jiang and Haowen Wang. https://arxiv.org/abs/2210.15145
Source: "GNSS算法相关开源代码(含多传感器融合相关项目)" (Open-source code for GNSS algorithms, including multi-sensor fusion projects), WeChat account "GNSS和自动驾驶", published 2022-12-11 in Shanghai.