
Point Cloud Completion by Skip-attention Network with Hierarchical Folding

CVPR 2020
This paper performs point cloud completion with a multi-level folding structure and skip-attention.
The folding structure itself already existed; the main contribution is using skip-attention to stack folding blocks into a deeper decoder, which feels inspired by ResNet and DeepGCNs.

Abstract

From the paper:

Point cloud completion aims to infer the complete geometries for missing regions of 3D objects from incomplete ones. Previous methods usually predict the complete point cloud based on the global shape representation extracted from the incomplete input. However, the global representation often suffers from the information loss of structure details on local regions of the incomplete point cloud.

To address this problem, we propose Skip-Attention Network (SA-Net) for 3D point cloud completion. Our main contributions lie in the following two folds. First, we propose a skip-attention mechanism to effectively exploit the local structure details of incomplete point clouds during the inference of missing parts. The skip-attention mechanism selectively conveys geometric information from the local regions of incomplete point clouds for the generation of complete ones at different resolutions, where the skip-attention reveals the completion process in an interpretable way. Second, in order to fully utilize the selected geometric information encoded by the skip-attention mechanism at different resolutions, we propose a novel structure-preserving decoder with hierarchical folding for complete shape generation. The hierarchical folding preserves the structure of the complete point cloud generated in the upper layer by progressively detailing the local regions, using the skip-attentioned geometry at the same resolution.

We conduct comprehensive experiments on the ShapeNet and KITTI datasets, which demonstrate that the proposed SA-Net outperforms the state-of-the-art point cloud completion methods.

Self-attention in the Folding Block

The structure of the Folding Block is shown below.
[Figure: Folding Block architecture]
The lower-left part of the figure is the self-attention module, a common self-attention design for point clouds.
Two MLPs, $h$ and $l$, project the input point features $\mathbf{p}_i$ to a common dimension; the transposed product of the two projections is passed through a softmax to obtain the attention scores/weights:

$$a_{i,j}=\frac{\exp\big(h(\mathbf{p}_i)^{\top}\, l(\mathbf{p}_j)\big)}{\sum_{k}\exp\big(h(\mathbf{p}_i)^{\top}\, l(\mathbf{p}_k)\big)}$$

A third MLP, $g$, produces values that are weighted by these scores, and a residual connection adds the original feature back, giving the self-attended feature:

$$\mathbf{r}_i=\mathbf{p}_i+\sum_{j} a_{i,j}\, g(\mathbf{p}_j)$$
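As a concrete reference, here is a minimal PyTorch sketch of this kind of point-cloud self-attention. The module name and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class PointSelfAttention(nn.Module):
    """Self-attention over per-point features, as described above.
    h/l/g follow the blog's notation; the hidden size is an assumption."""

    def __init__(self, in_dim, hidden_dim=None):
        super().__init__()
        hidden_dim = hidden_dim or max(in_dim // 4, 1)
        # h and l project the input features to a shared lower dimension.
        self.h = nn.Conv1d(in_dim, hidden_dim, 1)
        self.l = nn.Conv1d(in_dim, hidden_dim, 1)
        # g produces the values that get mixed by the attention weights.
        self.g = nn.Conv1d(in_dim, in_dim, 1)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        # x: (B, C, N) per-point features.
        q = self.h(x)                                          # (B, C', N)
        k = self.l(x)                                          # (B, C', N)
        v = self.g(x)                                          # (B, C, N)
        # Transposed product + softmax -> attention weights a_{i,j}.
        attn = self.softmax(torch.bmm(q.transpose(1, 2), k))   # (B, N, N)
        # Weighted sum of values, then a residual connection.
        out = torch.bmm(v, attn.transpose(1, 2))               # (B, C, N)
        return x + out
```

For example, `PointSelfAttention(256)` applied to features of shape `(B, 256, N)` returns features of the same shape.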

Skip-attention

Skip-attention is the key novelty of the paper. It feeds the encoder's features, re-weighted by attention, into the decoder, linking encoder and decoder; by contrast, PointNet++ for semantic segmentation simply skip-concatenates encoder features onto the decoder input.
The authors describe two roles for skip-attention:

  1. When a generated point lies in a region that is present in the incomplete input, skip-attention helps the decoder recover that region's features directly.
  2. When a generated point lies in a missing region, skip-attention searches the incomplete input for similar regions and uses those known points' features to infer the missing geometry.

Illustration:
[Figure: skip-attention attending from generated points to regions of the incomplete input]
The authors give two implementations of skip-attention. The first is the same as the self-attention above, learned with MLPs.
The second uses the cosine similarity between features as the attention score:

$$a_{i,j}=\frac{\mathbf{q}_i^{\top}\,\mathbf{p}_j}{\lVert\mathbf{q}_i\rVert\,\lVert\mathbf{p}_j\rVert}$$

where $\mathbf{q}_i$ is a decoder point feature and $\mathbf{p}_j$ an encoder local-region feature.
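Below is a minimal sketch of the cosine-similarity variant, assuming the decoder holds per-point features and the encoder holds per-region features with the shapes noted in the comments; fusing by element-wise addition is an illustrative choice, not necessarily the paper's exact operation.

```python
import torch
import torch.nn.functional as F


def skip_attention_cosine(dec_feat, enc_feat):
    """Cosine-similarity skip-attention (illustrative sketch).

    dec_feat: (B, C, N) decoder point features (generated points).
    enc_feat: (B, C, M) encoder local-region features (incomplete input).
    Returns decoder features fused with attended encoder geometry, (B, C, N).
    """
    # Cosine similarity between every decoder point and every encoder region.
    q = F.normalize(dec_feat, dim=1)                        # (B, C, N)
    k = F.normalize(enc_feat, dim=1)                        # (B, C, M)
    attn = torch.bmm(q.transpose(1, 2), k)                  # (B, N, M)
    attn = F.softmax(attn, dim=-1)                          # normalize over encoder regions
    # For each decoder point, take a weighted sum of encoder features,
    # i.e. borrow geometry from similar regions of the incomplete input.
    attended = torch.bmm(enc_feat, attn.transpose(1, 2))    # (B, C, N)
    # Fuse with the decoder features (element-wise addition, one simple choice).
    return dec_feat + attended
```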

Experiments

Effect of attention

To verify the effect of the attention modules, the authors compare three ablated variants:

  1. No-skip (no skip-attention between encoder and decoder)
  2. Skip-L (learned/MLP skip-attention instead of cosine)
  3. Fold-C (cosine self-attention in the folding blocks instead of learned)

The results show that the best configuration uses cosine similarity for the skip-attention and the learned (MLP) variant for the self-attention inside the folding blocks.

Visualization of skip-attention

[Figure: visualization of skip-attention weights]

Extending skip-attention

Finally, the authors also extend skip-attention to semantic segmentation and unsupervised shape classification, and observe improvements on both tasks.
