[Point Cloud Data Processing] Study Notes

1. Point Cloud Datasets

Point cloud datasets (KITTI, nuScenes, Lyft, Waymo, PandaSet, etc.) often differ in data format and in how they define the 3D coordinate system, and point cloud perception algorithms likewise come in many varieties (point-based, voxel-based, one-stage/two-stage, etc.).
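
As a concrete illustration of these format and coordinate-frame differences, here is a minimal sketch, assuming the standard KITTI Velodyne layout (a flat float32 file of x/y/z/reflectance records) and a 3x4 Tr_velo_to_cam calibration matrix; the usage values below are placeholders, not real data:

```python
import numpy as np

def load_kitti_bin(path):
    """Read one KITTI Velodyne sweep stored as a flat float32 file of (x, y, z, reflectance)."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def lidar_to_camera(points_xyz, tr_velo_to_cam):
    """Map Nx3 LiDAR points (x forward, y left, z up) into the camera frame
    (x right, y down, z forward) using a 3x4 calibration matrix."""
    homogeneous = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    return homogeneous @ tr_velo_to_cam.T                                     # (N, 3)

# Placeholder usage with synthetic points; a real sweep would come from load_kitti_bin().
points = np.random.rand(1000, 4).astype(np.float32)
tr_velo_to_cam = np.eye(3, 4, dtype=np.float32)  # dummy value; read it from the KITTI calib file in practice
points_cam = lidar_to_camera(points[:, :3], tr_velo_to_cam)
print(points_cam.shape)  # (1000, 3)
```

Other datasets store sweeps differently (e.g. nuScenes uses five values per point, Waymo ships data through its own SDK format), which is exactly the kind of variation a unified codebase has to absorb.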

[KITTI] The KITTI Vision Benchmark Suite. [det.] (large-scale scenes)
[ModelNet] The Princeton ModelNet. [cls.] (object models)
[ShapeNet] A collaborative dataset between researchers at Princeton, Stanford and TTIC. [seg.] (object models)
[PartNet] The PartNet dataset provides fine-grained part annotation of objects in ShapeNetCore. [seg.]
[PartNet] PartNet benchmark from Nanjing University and National University of Defense Technology. [seg.]
[S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [seg.]
[ScanNet] Richly-annotated 3D Reconstructions of Indoor Scenes. [cls. seg.]
[Stanford 3D] The Stanford 3D Scanning Repository. [reg.]
[UWA Dataset]. [cls. seg. reg.]
[Princeton Shape Benchmark] The Princeton Shape Benchmark.
[SYDNEY URBAN OBJECTS DATASET] This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees. [cls. match.]
[ASL Datasets Repository(ETH)] This site is dedicated to providing datasets to the robotics community, with the aim of facilitating result evaluation and comparison. [cls. match. reg. det.]
[Large-Scale Point Cloud Classification Benchmark(ETH)] This benchmark closes the gap and provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total. [cls.]
[Canadian Planetary Emulation Terrain 3D Mapping Dataset] A collection of three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada.
[Radish] The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets.
[IQmulus & TerraMobilita Contest] The database contains 3D MLS data from a dense urban environment in Paris (France), composed of 300 million points. The acquisition was made in January 2013. [cls. seg. det.]
[Oakland 3-D Point Cloud Dataset] This repository contains labeled 3-D point cloud laser data collected from a moving platform in an urban environment.
[Robotic 3D Scan Repository] This repository provides 3D point clouds from robotic experiments, log files of robot runs, and standard 3D data sets for the robotics community.
[Ford Campus Vision and Lidar Data Set] The dataset is collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck.
[The Stanford Track Collection] This dataset contains about 14,000 labeled tracks of objects as observed in natural street scenes by a Velodyne HDL-64E S2 LIDAR.
[PASCAL3D+] Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild. [pos. det.]
[3D MNIST] The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition. [cls.]
[WAD] [ApolloScape] The datasets are provided by Baidu Inc. [tra. seg. det.]
[nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset.
[PreSIL] Depth information, semantic segmentation (images), point-wise segmentation (point clouds), ground point labels (point clouds), and detailed annotations for all vehicles and people. [paper] [det. aut.]
[3D Match] Keypoint Matching Benchmark, Geometric Registration Benchmark, RGB-D Reconstruction Datasets. [reg. rec. oth.]
[BLVD] (a) 3D detection, (b) 4D tracking, (c) 5D interactive event recognition and (d) 5D intention prediction. [ICRA 2019 paper] [det. tra. aut. oth.]
[PedX] 3D Pose Estimation of Pedestrians, more than 5,000 pairs of high-resolution (12MP) stereo images and LiDAR data along with providing 2D and 3D labels of pedestrians. [ICRA 2019 paper] [pos. aut.]
[H3D] Full-surround 3D multi-object detection and tracking dataset. [ICRA 2019 paper] [det. tra. aut.]
[Argoverse BY ARGO AI] Two public datasets (3D Tracking and Motion Forecasting) supported by highly detailed maps to test, experiment, and teach self-driving vehicles how to understand the world around them. [CVPR 2019 paper] [tra. aut.]
[Matterport3D] RGB-D: 10,800 panoramic views from 194,400 RGB-D images. Annotations: surface reconstructions, camera poses, and 2D and 3D semantic segmentations. Keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification. [3DV 2017 paper] [code] [blog]
[SynthCity] SynthCity is a 367.9M point synthetic full colour Mobile Laser Scanning point cloud. Nine categories. [seg. aut.]
[Lyft Level 5] Includes high-quality, human-labelled 3D bounding boxes of traffic agents and an underlying HD spatial semantic map. [det. seg. aut.]
[SemanticKITTI] Sequential Semantic Segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper] [seg. oth. aut.]
[NPM3D] The Paris-Lille-3D dataset was produced by a Mobile Laser System (MLS) in two different cities in France (Paris and Lille). [seg.]
[The Waymo Open Dataset] The Waymo Open Dataset comprises high-resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [det.]
[A3D] An Autonomous Driving Dataset in Challenging Environments. [det.]
[PointDA-10 Dataset] Domain Adaptation for point clouds.
[Oxford Robotcar] The dataset captures many different combinations of weather, traffic and pedestrians. [cls. det. rec.]
[WHU-TLS BENCHMARK] WHU-TLS benchmark dataset. [reg.]
[DALES] DALES: A Large-scale Aerial LiDAR Data Set for Semantic Segmentation. [seg.]
[DynLab Dataset] MultiBodySync: Multi-Body Segmentation and Motion Estimation via 3D Scan Synchronization.
[4DComplete] 4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface. (4D reconstruction)

2. OpenPCDet

OpenPCDet: an open-source library for LiDAR-based 3D object detection.
OpenPCDet is a general, PyTorch-based codebase for 3D object detection from point clouds. It currently supports multiple state-of-the-art 3D detection methods and provides highly refactored code for both one-stage and two-stage 3D detection frameworks.
Source code: https://github.com/open-mmlab/OpenPCDet
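
For reference, below is a rough inference sketch loosely modeled on OpenPCDet's demo script; the config path, checkpoint name, and data directory are placeholders, and the small DemoDataset class is only one assumed way of feeding raw KITTI-style .bin sweeps through DatasetTemplate:

```python
from pathlib import Path

import numpy as np
import torch

from pcdet.config import cfg, cfg_from_yaml_file
from pcdet.datasets import DatasetTemplate
from pcdet.models import build_network, load_data_to_gpu
from pcdet.utils import common_utils


class DemoDataset(DatasetTemplate):
    """Minimal dataset that feeds raw KITTI-style .bin sweeps (x, y, z, intensity)."""

    def __init__(self, dataset_cfg, class_names, root_path, logger=None):
        super().__init__(dataset_cfg=dataset_cfg, class_names=class_names,
                         training=False, root_path=root_path, logger=logger)
        self.sample_files = sorted(root_path.glob('*.bin'))

    def __len__(self):
        return len(self.sample_files)

    def __getitem__(self, index):
        points = np.fromfile(self.sample_files[index], dtype=np.float32).reshape(-1, 4)
        input_dict = {'points': points, 'frame_id': index}
        return self.prepare_data(data_dict=input_dict)


# Placeholder paths: point them at your own config, checkpoint, and data folder.
cfg_from_yaml_file('tools/cfgs/kitti_models/pointpillar.yaml', cfg)
logger = common_utils.create_logger()
dataset = DemoDataset(dataset_cfg=cfg.DATA_CONFIG, class_names=cfg.CLASS_NAMES,
                      root_path=Path('data/kitti/training/velodyne'), logger=logger)

model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=dataset)
model.load_params_from_file(filename='pointpillar.pth', logger=logger, to_cpu=True)
model.cuda()
model.eval()

with torch.no_grad():
    for data_dict in dataset:
        data_dict = dataset.collate_batch([data_dict])  # add the batch dimension
        load_data_to_gpu(data_dict)
        pred_dicts, _ = model.forward(data_dict)
        # Each prediction dict holds predicted boxes, confidence scores, and class labels.
        print(pred_dicts[0]['pred_boxes'].shape,
              pred_dicts[0]['pred_scores'].shape,
              pred_dicts[0]['pred_labels'].shape)
```

In this config-driven setup, switching among the supported one-stage and two-stage detectors is largely a matter of pointing cfg_from_yaml_file at a different model YAML and loading the matching checkpoint.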

References

  1. GitHub: Latest point cloud data analysis
  2. OpenPCDet
