Computer Vision Group, Technical University of Munich (TUM)

http://vision.in.tum.de/research/vslam/lsdslam

LSD-SLAM: Large-Scale Direct Monocular SLAM

Contact: Jakob Engel, Dr. Jörg Stückler, Prof. Dr. Daniel Cremers

Check out DSO, our new direct & sparse visual odometry method published in July 2016: DSO: Direct Sparse Odometry

LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it operates directly on image intensities for both tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated as semi-dense depth maps, obtained by filtering over many pixelwise stereo comparisons. We then build a Sim(3) pose-graph of keyframes, which allows building scale-drift-corrected, large-scale maps including loop-closures. LSD-SLAM runs in real-time on a CPU, and even on a modern smartphone.
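For intuition, the tracking step can be written as the minimization of a photometric error over the camera pose. The formula below is a simplified sketch (plain Huber norm, no variance normalization), not the exact objective from the paper:

```latex
% Simplified sketch of direct image alignment: find the pose \xi that best
% explains the new image I, given the reference keyframe I_{ref} and its
% semi-dense inverse depth map D_{ref}. \omega warps a pixel p from the
% reference frame into the new frame; \|\cdot\|_\delta is the Huber norm.
E(\xi) = \sum_{p \in \Omega_{D_{ref}}}
    \left\| I_{ref}(p) \;-\; I\!\big(\omega(p,\, D_{ref}(p),\, \xi)\big) \right\|_\delta
```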

Code Available (see below)!

width="640" height="360" src="http://www.youtube.com/embed/GnuQzP3gty4" frameborder="0" allowfullscreen="" style="overflow: visible; line-height: 1.4em;">



Difference to keypoint-based methods


As a direct method, LSD-SLAM uses all information in the image, including e.g. edges, while keypoint-based approaches can only use small patches around corners. This leads to higher accuracy and more robustness in sparsely textured environments (e.g. indoors), and to a much denser 3D reconstruction. Further, since the proposed pixelwise depth filters incorporate many small-baseline stereo comparisons instead of only a few large-baseline frames, there are far fewer outliers.
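To make the filtering idea concrete, here is a minimal, self-contained sketch (illustrative names, not actual lsd_slam code) of how one new small-baseline stereo observation updates a per-pixel Gaussian inverse-depth hypothesis:

```cpp
// Minimal sketch of per-pixel inverse-depth filtering, assuming each pixel's
// inverse depth is modeled as a Gaussian N(mean, variance). Names are
// illustrative; the actual lsd_slam code differs.
struct InverseDepthHypothesis {
    float mean;      // current inverse-depth estimate (1/m)
    float variance;  // uncertainty of the estimate
};

// Fuse a new stereo-triangulated observation `obs` (with uncertainty
// `obsVariance`) into the hypothesis: the standard product of two Gaussians.
// Many such small-baseline updates shrink the variance and reject outliers
// far more reliably than a single large-baseline triangulation.
void fuseObservation(InverseDepthHypothesis& h, float obs, float obsVariance) {
    float w = obsVariance / (h.variance + obsVariance);
    h.mean = w * h.mean + (1.0f - w) * obs;
    h.variance = (h.variance * obsVariance) / (h.variance + obsVariance);
}
```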



Building a global map


LSD-SLAM builds a pose-graph of keyframes, each containing an estimated semi-dense depth map. Using a novel direct image alignment formulation, we directly track Sim(3) constraints between keyframes (i.e., rigid-body motion plus scale), and then optimize the resulting pose-graph. This formulation makes it possible to detect and correct substantial scale-drift after large loop-closures, and to deal with large scale-variation within the same map.
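The released code represents these constraints with the Sophus library's Sim3 type; purely for illustration, a Sim(3) element (rotation R, translation t, scale s) and its action on a point look like this:

```cpp
#include <array>

// Illustrative sketch of a Sim(3) element: a rigid-body motion plus a scale
// factor. Edges of the keyframe pose-graph carry such 7-DoF constraints,
// which is what lets the optimization absorb scale-drift at loop-closures.
struct Sim3 {
    std::array<std::array<double, 3>, 3> R{}; // rotation matrix
    std::array<double, 3> t{};                // translation
    double s = 1.0;                           // scale

    // Transform a 3D point: x' = s * R * x + t
    std::array<double, 3> apply(const std::array<double, 3>& x) const {
        std::array<double, 3> y{};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) y[i] += R[i][j] * x[j];
            y[i] = s * y[i] + t[i];
        }
        return y;
    }
};
```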



Mobile Implementation

The approach even runs on a smartphone, where it can be used for AR. The estimated semi-dense depth maps are in-painted and completed with an estimated ground plane, which then allows basic physical interaction with the environment.

width="640" height="360" src="http://www.youtube.com/embed/X0hx2vxxTMg" frameborder="0" allowfullscreen="" style="overflow: visible; line-height: 1.4em;">



Stereo LSD-SLAM

We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at a high frame rate on standard CPUs. See below for the full publication.

width="640" height="360" src="http://www.youtube.com/embed/oJt3Ln8H03s" frameborder="0" allowfullscreen="" style="overflow: visible; line-height: 1.4em;">



Omnidirectional LSD-SLAM

We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixelwise distance filtering) are formulated directly for the unified omnidirectional model, which can model central imaging devices with a field of view well above 150°. The dataset used for the evaluation can be found here. See below for the full publication.
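For reference, projection under the unified omnidirectional model first maps a 3D point onto the unit sphere and then through a pinhole displaced along the optical axis by a parameter ξ. The sketch below uses the common (fx, fy, cx, cy, ξ) parameterization as an assumption, not lsd_slam's actual identifiers:

```cpp
#include <cmath>

// Sketch of the unified omnidirectional camera model: project onto the unit
// sphere, then through a pinhole shifted by xi along the optical axis. With
// suitable xi this models central fisheye optics with a field of view well
// above 150 degrees. Parameter names are illustrative.
struct UnifiedModel {
    double fx, fy, cx, cy; // pinhole intrinsics
    double xi;             // sphere-to-pinhole displacement

    // Project a 3D point (X, Y, Z) in camera coordinates to pixel (u, v).
    // Valid while the denominator stays positive.
    void project(double X, double Y, double Z, double& u, double& v) const {
        double norm = std::sqrt(X * X + Y * Y + Z * Z);
        double denom = Z + xi * norm;
        u = fx * (X / denom) + cx;
        v = fy * (Y / denom) + cy;
    }
};
```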

width="640" height="360" src="http://www.youtube.com/embed/v0NqMm7Q6S8" frameborder="0" allowfullscreen="" style="overflow: visible; line-height: 1.4em;">



Software

LSD-SLAM is on GitHub: http://github.com/tum-vision/lsd_slam

We support only a ROS-based build system, tested on Ubuntu 12.04 and 14.04 with ROS Fuerte or Indigo. However, ROS is only used for input (video), output (pointcloud & poses), and parameter handling; the ROS-dependent code is tightly wrapped and can easily be replaced. To avoid the overhead of maintaining different build systems, however, we do not offer an out-of-the-box ROS-free version. Android-specific optimizations and AR integration are not part of the open-source release.
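To illustrate what "tightly wrapped" means in practice: replacing the ROS layer essentially amounts to implementing a small image-feed interface. The sketch below is purely hypothetical and does not match the actual lsd_slam class names:

```cpp
#include <cstdint>

// Hypothetical sketch (not lsd_slam's real API) of the kind of thin interface
// a non-ROS port would implement to supply input images to the SLAM core.
struct Frame {
    const uint8_t* intensity; // 8-bit grayscale image, row-major
    int width, height;
    double timestamp;         // seconds
};

class ImageSource {
public:
    virtual ~ImageSource() = default;
    // Fills `out` with the next frame; returns false at end of stream.
    virtual bool nextFrame(Frame& out) = 0;
};
```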

Detailed installation and usage instructions can be found in the README.md, including descriptions of the most important parameters. For best results, we recommend using a monochrome global-shutter camera with a fisheye lens.

If you use our code, please cite our respective publications (see below). We are excited to see what you do with LSD-SLAM; please drop us a quick hint if you have nice videos / pictures / models / applications.



Datasets

To get you started, we provide some example sequences, including the input video and camera calibration, the complete generated pointcloud to be displayed with the lsd_slam_viewer, as well as a (sparsified) pointcloud as .ply, which can be displayed e.g. using MeshLab.

Hint: Run rosbag play -r 25 X_pc.bag while the lsd_slam_viewer is running to replay the result of real-time SLAM at 25x speed, building up the full reconstruction within seconds.

  • Desk Sequence (0:55 min, 640×480 @ 50 fps)
    • Preview: http://www.youtube.com/embed/UacKN2WDLCg
    • Video:  [.bag]  [.png]
    • Pointcloud:  [.bag]  [.ply]
  • Machine Sequence (2:20 min, 640×480 @ 50 fps)
    • Preview: http://www.youtube.com/embed/6KRlwqubLIU
    • Video:  [.bag]  [.png]
    • Pointcloud:  [.bag]  [.ply]
  • Foodcourt Sequence (12 min, 640×480 @ 50 fps)
    • Preview: http://www.youtube.com/embed/aBVXfqumTXc
    • Video:  [.bag]  [.png]
    • Pointcloud:  [.bag]  [.ply]
  • ECCV Sequence (7:00 min, 640×480 @ 50 fps)
    • Preview: http://www.youtube.com/embed/isHXcv_AeFg
    • Enable FabMap for large loop-closures for this sequence!
    • Video:  [.bag]  [.png]
    • Pointcloud:  [.bag]  [.ply]



License

LSD-SLAM is released under the GPLv3 license. A professional version under a different licensing agreement, intended for commercial use, is available here. Please contact us if you are interested.

Related publications

Conference and Workshop Papers
2015
Reconstructing Street-Scenes in Real-Time From a Driving Car (V. Usenko, J. Engel, J. Stueckler, D. Cremers). In Proc. of the International Conference on 3D Vision (3DV), 2015. [bib] [pdf]
Large-Scale Direct SLAM for Omnidirectional Cameras (D. Caruso, J. Engel, D. Cremers). In International Conference on Intelligent Robots and Systems (IROS), 2015. [bib] [pdf] [video]
Large-Scale Direct SLAM with Stereo Cameras (J. Engel, J. Stueckler, D. Cremers). In International Conference on Intelligent Robots and Systems (IROS), 2015. [bib] [pdf] [video]
2014
Semi-Dense Visual Odometry for AR on a Smartphone (T. Schöps, J. Engel, D. Cremers). In International Symposium on Mixed and Augmented Reality (ISMAR), 2014. [bib] [pdf] [video] Best Short Paper Award
LSD-SLAM: Large-Scale Direct Monocular SLAM (J. Engel, T. Schöps, D. Cremers). In European Conference on Computer Vision (ECCV), 2014. [bib] [pdf] [video] Oral Presentation
2013
Semi-Dense Visual Odometry for a Monocular Camera (J. Engel, J. Sturm, D. Cremers). In IEEE International Conference on Computer Vision (ICCV), 2013. [bib] [pdf] [video]
