97 Open-Source Visual SLAM Solutions (Part 2)

Original article: https://zhuanlan.zhihu.com/p/115599978. Reposting without the author's permission is prohibited.


This series loosely groups the various solutions into the following 7 categories (admittedly, quite a few works resist a single label, e.g. a dynamic, semantic, dense-mapping VISLAM +_+):

  • Geometric SLAM

  • Semantic / Deep SLAM

  • Multi-Landmarks / Object SLAM

  • Sensor Fusion

  • Dynamic SLAM

  • Mapping

  • Optimization

This installment lists 15 open-source Semantic / Deep SLAM solutions and 13 open-source Multi-Landmarks / Object SLAM solutions.

Semantic / Deep SLAM


1. MaskFusion

· Paper: Runz M, Buffier M, Agapito L. MaskFusion: Real-time recognition, tracking and reconstruction of multiple moving objects[C]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2018: 10-20.

· Code: https://github.com/martinruenz/maskfusion

2. SemanticFusion

· Paper: McCormac J, Handa A, Davison A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4628-4635.

· Code: https://github.com/seaun163/semanticfusion

3. semantic_3d_mapping

· Paper: Yang S, Huang Y, Scherer S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 590-597.

· Code: https://github.com/shichaoy/semantic_3d_mapping

4. Kimera (an open-source library for real-time metric-semantic localization and mapping)

· Paper: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.

· Code: https://github.com/MIT-SPARK/Kimera; demo video

5. NeuroSLAM (brain-inspired SLAM)

· Paper: Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019: 1-31.

· Code: https://github.com/cognav/NeuroSLAM

· The fourth author is the author of RatSLAM, and the paper also compares more than ten brain-inspired SLAM approaches.

6. gradSLAM (dense SLAM with automatic differentiation)

· Paper: Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019.

· Code (expected to be released in April 2020): https://github.com/montrealrobotics/gradSLAM; project page, demo video

7. Semantic mapping with ORB-SLAM2 plus object detection / segmentation (a minimal sketch of the shared idea follows the links below)

· https://github.com/floatlazer/semantic_slam

· https://github.com/qixuxiang/orb-slam2_with_semantic_labelling

· https://github.com/Ewenwan/ORB_SLAM2_SSD_Semantic
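The common idea behind these three repositories is to attach per-pixel semantic labels from a detector or segmentation network to the 3D geometry that ORB-SLAM2 reconstructs. Below is a minimal sketch of just that back-projection-and-labelling step, assuming numpy, a pinhole intrinsic matrix and a depth image aligned with a per-pixel label map; it is an illustration of the idea, not code from any of the repositories above.

```python
# Minimal sketch: attach per-pixel semantic labels to back-projected 3D points.
# Assumptions (not from the repositories above): `depth` is a metric depth image (H, W),
# `labels` is an (H, W) integer class map aligned with it, `K` is the 3x3 pinhole
# intrinsic matrix, and `T_wc` is the 4x4 camera-to-world pose of the keyframe.
import numpy as np

def labeled_point_cloud(depth, labels, K, T_wc):
    """Back-project valid depth pixels and return (N, 3) world points with class labels."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))           # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous camera coords
    pts_world = (T_wc @ pts_cam.T).T[:, :3]                  # transform into the world frame
    return pts_world, labels[valid]

if __name__ == "__main__":
    # Tiny synthetic example: a 4x4 frame, identity pose, everything labelled class 1
    K = np.array([[100.0, 0, 2.0], [0, 100.0, 2.0], [0, 0, 1.0]])
    depth = np.full((4, 4), 2.0)
    labels = np.ones((4, 4), dtype=int)
    pts, cls = labeled_point_cloud(depth, labels, K, np.eye(4))
    print(pts.shape, cls.shape)   # (16, 3) (16,)
```

In the actual systems the labels are usually fused over multiple views (e.g. by per-class probability accumulation) rather than taken from a single frame.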

8. SIVO (semantics-assisted feature selection)

· Paper: Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019: 121-128.

· Code: https://github.com/navganti/SIVO

9. FILD (incremental loop closure detection with proximity graphs)

· Paper: An S, Che G, Zhou F, et al. Fast and Incremental Loop Closure Detection Using Proximity Graphs[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 378-385.

· Code: https://github.com/AnshanTJU/FILD

10. object-detection-sptam (object detection with stereo SLAM)

· Paper: Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.

· Code: https://github.com/CIFASIS/object-detection-sptam

11. Map Slammer (monocular depth estimation + SLAM)

· Paper: Torres-Camara J M, Escalona F, Gomez-Donoso F, et al. Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth[C]//Iberian Robotics Conference. Springer, Cham, 2019: 563-574.

· Code: https://github.com/jmtc7/mapSlammer

12. NOLBO (probabilistic SLAM with a variational observation model)

· Paper: Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760, 2019.

· Code: https://github.com/bogus2000/NOLBO

13. GCNv2_SLAM (SLAM with learned GCNv2 keypoint features)

· Paper: Tang J, Ericson L, Folkesson J, et al. GCNv2: Efficient correspondence prediction for real-time SLAM[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3505-3512.

· Code: https://github.com/jiexiong2016/GCNv2_SLAM; Video

14. semantic_suma (LiDAR-based semantic mapping)

· Paper: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.

· Code: https://github.com/PRBonn/semantic_suma/; Video

15. Neural-SLAM (active neural SLAM)

· Paper: Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural SLAM[C]. ICLR 2020.

· Code: https://github.com/devendrachaplot/Neural-SLAM

 

Multi-Landmarks / Object SLAM


1. PL-SVO (point-line SVO)

· Paper: Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct monocular visual odometry by combining points and line segments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4211-4216.

· Code: https://github.com/rubengooj/pl-svo

2. stvo-pl (stereo point-line VO)

· Paper: Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526.

· Code: https://github.com/rubengooj/stvo-pl

3. PL-SLAM (point-line SLAM)

· Paper: Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017.

· Code: https://github.com/rubengooj/pl-slam

· Journal version: Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.

4. PL-VIO

· Paper: He Y, Zhao J, Guo Y, et al. PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.

· Code: https://github.com/HeYijia/PL-VIO

· VINS + line segments: https://github.com/Jichao-Peng/VINS-Mono-Optimization

5. lld-slam (learnable line segment descriptor for SLAM)

· Paper: Vakhitov A, Lempitsky V. Learnable line segment descriptor for visual SLAM[J]. IEEE Access, 2019, 7: 39923-39934.

· Code: https://github.com/alexandervakhitov/lld-slam; Video

There are many more works combining point and line features; examples from Chinese groups include the following (a minimal point-plus-line extraction sketch is given after this list):

· StructVIO from Danping Zou's group at Shanghai Jiao Tong University: Zou D, Wu Y, Pei L, et al. StructVIO: Visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.

· From Zhejiang University: Zuo X, Xie X, Liu Y, et al. Robust visual SLAM with point and line features[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 1775-1782.
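What the point-line systems above share is a front end that extracts both point features and line segments from every frame. The sketch below shows only that extraction step, using OpenCV's ORB detector plus the FastLineDetector from opencv-contrib as a stand-in for the LSD/LBD pipeline the original systems use; it is an illustrative approximation under those assumptions, not code from any of the repositories above.

```python
# Minimal sketch of a point-plus-line front end in the spirit of PL-SVO / PL-SLAM.
# Assumptions (not from the repositories above): opencv-contrib-python is installed;
# ORB + FastLineDetector approximate the ORB/LSD+LBD features of the original systems,
# and "frame.png" is a hypothetical input image.
import cv2

def extract_point_line_features(gray):
    """Detect ORB keypoints/descriptors and line segments in a grayscale image."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    fld = cv2.ximgproc.createFastLineDetector()   # line segment detector (contrib module)
    lines = fld.detect(gray)                      # N x 1 x 4 array of (x1, y1, x2, y2), or None
    lines = [] if lines is None else lines.reshape(-1, 4)
    return keypoints, descriptors, lines

if __name__ == "__main__":
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError("frame.png not found")
    kps, desc, segs = extract_point_line_features(img)
    print(f"{len(kps)} point features, {len(segs)} line segments")
```

The systems above differ mainly in the back end, where point reprojection errors are typically combined with endpoint-to-line distances for the line features.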

6. PlaneSLAM

· Paper: Wietrzykowski J. On the representation of planes for efficient graph-based SLAM with high-level features[J]. Journal of Automation, Mobile Robotics and Intelligent Systems, 2016, 10.

· Code: https://github.com/LRMPUT/PlaneSLAM

· Another open-source project by the same author, for which I could not find a corresponding paper: https://github.com/LRMPUT/PUTSLAM

7. Eigen-Factors (plane estimation for point cloud alignment; a generic plane-fit sketch is given after entry 9)

· Paper: Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 1278-1284.

· Code: https://gitlab.com/gferrer/eigen-factors-iros2019; demo video

8. PlaneLoc

· Paper: Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019, 113: 160-173.

· Code: https://github.com/LRMPUT/PlaneLoc

9. Pop-up SLAM

· Paper: Yang S, Song Y, Kaess M, et al. Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1222-1229.

· Code: https://github.com/shichaoy/pop_up_slam
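Entries 6-9 all use planar landmarks. A building block common to this family is fitting a plane to a set of roughly coplanar 3D points by least squares, taking the direction of least variance of the centered points as the normal. The sketch below (numpy only) shows that textbook fit; it is illustrative and not the specific formulation of any paper above.

```python
# Least-squares plane fit: the basic geometric operation behind plane-landmark SLAM.
# Assumption (illustrative, not any paper's formulation): `points` is an (N, 3) numpy
# array of roughly coplanar 3D points; the plane normal is the right-singular vector
# of the centered points associated with the smallest singular value.
import numpy as np

def fit_plane(points):
    """Return (normal, d) with ||normal|| = 1 such that normal . x + d = 0 best fits the points."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                       # direction of least variance = plane normal
    d = -normal.dot(centroid)
    return normal, d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    # synthetic points near the plane z = 0.5x + 0.2y + 1
    z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + 0.01 * rng.standard_normal(200)
    pts = np.column_stack([xy, z])
    n, d = fit_plane(pts)
    print("normal:", n, "offset:", d)
```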

10. Object SLAM

· Paper: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.

· Code: https://github.com/BeipengMu/objectSLAM; Video

11. voxblox-plusplus (object-level volumetric mapping)

· Paper: Grinvald M, Furrer F, Novkovic T, et al. Volumetric instance-aware semantic mapping and 3D object discovery[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 3037-3044.

· Code: https://github.com/ethz-asl/voxblox-plusplus

12. Cube SLAM

· Paper: Yang S, Scherer S. CubeSLAM: Monocular 3-D object SLAM[J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.

· Code: https://github.com/shichaoy/cube_slam

· Yes, this is the work that drew me into the field: after reading the paper (then a preprint) in November 2018 I started studying object-level SLAM. My personal notes and summary on Cube SLAM: link. A toy sketch of the cuboid-landmark projection idea is given after this entry.

· There are also many interesting object-level SLAM works that have not been open-sourced:

o Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.

o Li J, Meger D, Dudek G. Semantic Mapping for View-Invariant Relocalization[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 7108-7115.

o Nicholson L, Milford M, Sünderhauf N. Quadricslam: Dual quadrics from object detections as landmarks in object-oriented slam[J]. IEEE Robotics and Automation Letters, 2018, 4(1): 1-8.
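For a feel of what a cuboid landmark is, the toy sketch below projects the eight corners of a parameterised cuboid into an image and takes their 2D bounding box, which is the kind of predicted measurement that cuboid-based object SLAM can compare against a detector's bounding box. The parameterisation (center, dimensions, yaw) and the code are illustrative assumptions, not CubeSLAM's actual error terms.

```python
# Toy sketch: project the 8 corners of a cuboid landmark and take their 2D bounding box.
# Assumptions (illustrative only): K is the 3x3 intrinsic matrix, T_cw the 4x4
# world-to-camera transform, and the cuboid is parameterised by center (3,),
# dimensions (3,) and yaw about the z axis; all corners lie in front of the camera.
import numpy as np

def cuboid_corners(center, dims, yaw):
    """World-frame coordinates of the 8 corners of the cuboid."""
    dx, dy, dz = dims / 2.0
    corners = np.array([[sx * dx, sy * dy, sz * dz]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about z
    return corners @ R.T + center

def project_bbox(corners_w, K, T_cw):
    """Project world-frame corners and return their 2D box (umin, vmin, umax, vmax)."""
    homo = np.hstack([corners_w, np.ones((len(corners_w), 1))])
    cam = (T_cw @ homo.T).T[:, :3]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
    corners = cuboid_corners(np.array([0.0, 0.0, 5.0]), np.array([1.0, 1.0, 1.0]), 0.3)
    print(project_bbox(corners, K, np.eye(4)))   # (umin, vmin, umax, vmax)
```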

13. VPS-SLAM (visual planar semantic SLAM)

· Paper: Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.

· Code: https://bitbucket.org/hridaybavle/semantic_slam/src/master/

