论文阅读 [TPAMI-2022] Fine-Grained Human-Centric Tracklet Segmentation with Single Frame Supervision


Paper Search (studyai.com)

Search: Fine-Grained Human-Centric Tracklet Segmentation with Single Frame Supervision

Search URL: http://www.studyai.com/search/whole-site/?q=Fine-Grained+Human-Centric+Tracklet+Segmentation+with+Single+Frame+Supervision

Keywords

Labeling; Object segmentation; Image segmentation; Task analysis; Semantics; Training; Face; Video object segmentation; human-centric; fine-grained; optical flow estimation

Computer Vision

Detection & Segmentation; Fine-Grained Vision; Optical Flow

Abstract

In this paper, we target the Fine-grAined human-Centric Tracklet Segmentation (FACTS) problem, in which 12 human parts (e.g., face, pants, left leg) are segmented.


To reduce the heavy and tedious labeling effort, FACTS requires only one labeled frame per video during training.


The small size of human parts and the scarcity of labels make FACTS very challenging.


Considering that adjacent video frames are continuous and that people usually do not change clothes within a short time, the proposed Temporal Context segmentation Network (TCNet) explicitly models pixel-level and frame-level context.


On the one hand, optical flow is calculated online to propagate pixel-level segmentation results to neighboring frames.

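The pixel-level propagation idea can be sketched as warping one frame's segmentation map into a neighboring frame along the optical flow. This is a minimal numpy sketch, not the paper's implementation: the flow convention (backward flow in `(dy, dx)` order) and the nearest-neighbor sampling are my assumptions, and TCNet computes the flow online with a learned subnetwork rather than receiving it as input.

```python
import numpy as np

def propagate_seg(seg_probs, flow):
    """Warp per-pixel class probabilities from frame t to frame t+1.

    seg_probs: (H, W, C) class probabilities of frame t.
    flow:      (H, W, 2) assumed backward flow: for each pixel in
               frame t+1, the (dy, dx) offset of its source in frame t.
    Returns a (H, W, C) propagated segmentation for frame t+1.
    """
    H, W, _ = seg_probs.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Nearest-neighbor sampling, clipped at the image border.
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return seg_probs[src_y, src_x]
```

With zero flow the segmentation is copied unchanged; a constant flow shifts it rigidly, which is how a propagated pseudo-label for an unlabeled neighboring frame would be formed.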

On the other hand, frame-level classification likelihood vectors are also propagated to nearby frames.

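The frame-level propagation can likewise be sketched as blending each frame's class-likelihood vector with those of its temporal neighbors. The fixed window and decaying weights below are my assumptions for illustration; the paper propagates these vectors within a learned network rather than with a hand-set kernel.

```python
import numpy as np

def smooth_likelihoods(frame_vecs, window=2, decay=0.5):
    """Blend each frame's class-likelihood vector with its neighbors,
    weighting nearer frames more heavily (hypothetical weighting).

    frame_vecs: (T, C) per-frame classification likelihood vectors.
    Returns a (T, C) array of temporally smoothed vectors.
    """
    T, _ = frame_vecs.shape
    out = np.zeros_like(frame_vecs, dtype=float)
    for t in range(T):
        total = 0.0
        for d in range(-window, window + 1):
            s = t + d
            if 0 <= s < T:
                w = decay ** abs(d)  # nearer frames get larger weight
                out[t] += w * frame_vecs[s]
                total += w
        out[t] /= total  # normalize so weights sum to one per frame
    return out
```

Since a person's visible parts rarely change between adjacent frames, such smoothing suppresses per-frame classification flicker, e.g., a part class that spuriously appears in a single frame.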

By fully exploiting the pixel-level and frame-level context, TCNet indirectly uses the large number of unlabeled frames during training and produces smooth segmentation results during inference.


Experimental results on four video datasets show the superiority of TCNet over the state-of-the-art methods.


The newly annotated datasets can be downloaded via http://liusi-group.com/projects/FACTS for further study.


Authors

Si Liu, Guanghui Ren, Yao Sun, Jinqiao Wang, Changhu Wang, Bo Li, Shuicheng Yan
