A List of Papers on Weakly Supervised Learning

https://zhuanlan.zhihu.com/p/23811946

1. Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, and Jian Sun. "ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

2. Pathak, Deepak, Philipp Krahenbuhl, and Trevor Darrell. "Constrained Convolutional Neural Networks for Weakly Supervised Segmentation." Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.

3. Papandreou, George, et al. "Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation." arXiv preprint arXiv:1502.02734, 2015.

4. Xu, Jia, Alexander G. Schwing, and Raquel Urtasun. "Learning to Segment under Various Forms of Weak Supervision." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.




Reposted from Zhihu. Author: travelsea

I. Learning from bounding boxes

1. Dai, Jifeng, Kaiming He, and Jian Sun. "BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation." ICCV 2015.

Abstract: Proposes a method that uses only bounding-box annotations. The basic idea is to iterate between automatically generating region proposals and training the CNN, so that the candidate masks and the network improve each other. Good results are reported on PASCAL VOC 2012 and PASCAL-CONTEXT. A rough sketch of this alternation is given below.
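The Python sketch below only illustrates the overall alternation described in the abstract; it is not the authors' code. The proposal generator, the mask-selection rule, and the network-update routine are placeholders supplied by the caller.

```python
def boxsup_alternation(images, box_annotations, generate_proposals,
                       select_masks, train_step, predict=None, rounds=5):
    """Illustrative alternation between pseudo-mask selection and CNN training.

    images             -- list of training images
    box_annotations    -- per-image list of (class_id, bounding_box) pairs
    generate_proposals -- callable: image -> candidate segment masks (e.g. from MCG)
    select_masks       -- callable: (proposals, boxes, current_prediction) -> pseudo mask
    train_step         -- callable: (images, pseudo_masks) -> updated predict callable
    predict            -- callable: image -> per-pixel class scores (None in round 0)
    """
    pseudo_masks = [None] * len(images)
    for _ in range(rounds):
        # Step 1: for each annotated box, pick the candidate segment that best
        # agrees with the box (and with the current network, once one exists).
        for i, (image, boxes) in enumerate(zip(images, box_annotations)):
            proposals = generate_proposals(image)
            current = predict(image) if predict is not None else None
            pseudo_masks[i] = select_masks(proposals, boxes, current)
        # Step 2: treat the selected masks as ground truth and retrain the network.
        predict = train_step(images, pseudo_masks)
    return predict, pseudo_masks
```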

2. Rajchl, Martin, et al. "DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks." arXiv preprint arXiv:1605.07866, 2016.

Abstract: An extension of the GrabCut method. Segmentation is formulated as an energy minimization problem over a densely connected CRF, and the CNN's training targets are updated iteratively. The method is applied to brain and lung segmentation on fetal MRI and obtains encouraging results. A sketch of a dense-CRF refinement step is given below.
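As one illustration of the kind of dense-CRF refinement step such methods rely on, here is a 2D sketch using the pydensecrf package. The kernel parameters are arbitrary illustrative values, and the surrounding loop that retrains the CNN on the refined labels is omitted; this is not the DeepCut implementation.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dense_crf(image_rgb, softmax_probs, n_iters=5):
    """Refine per-pixel class probabilities with a densely connected CRF.

    image_rgb     -- HxWx3 uint8 image
    softmax_probs -- CxHxW float array of class probabilities from the CNN
    Returns the refined hard labeling (HxW array of class indices).
    """
    n_classes, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)

    # Unary potentials come from the current network prediction.
    unary = unary_from_softmax(softmax_probs)
    d.setUnaryEnergy(unary)

    # Pairwise terms: a smoothness kernel plus an appearance (color-dependent) kernel.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=13,
                           rgbim=np.ascontiguousarray(image_rgb), compat=10)

    q = d.inference(n_iters)
    return np.argmax(np.array(q).reshape(n_classes, h, w), axis=0)
```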

II. Learning from scribbles

1. Çiçek, Özgün, et al. "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation." MICCAI 2016.

Abstract: Introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. It extends U-Net to 3D and performs on-the-fly elastic deformation for efficient data augmentation during training. A sketch of such an elastic deformation is shown below.
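A minimal sketch of the kind of on-the-fly elastic deformation used for volumetric data augmentation, written with NumPy/SciPy; the smoothing scale and displacement magnitude are illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_3d(volume, sigma=4.0, magnitude=8.0, rng=None):
    """Apply a random smooth (elastic) deformation to a 3D volume.

    volume    -- 3D array of shape (D, H, W)
    sigma     -- smoothing scale of the random displacement field (voxels)
    magnitude -- maximum displacement amplitude (voxels)
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = volume.shape

    # One smooth random displacement field per axis.
    displacements = [
        gaussian_filter(rng.uniform(-1, 1, size=shape), sigma) * magnitude
        for _ in range(3)
    ]

    # Sample the volume at the displaced coordinates (linear interpolation).
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, displacements)]
    return map_coordinates(volume, coords, order=1, mode="nearest")
```

The same displacement field would also be applied to the label volume (with order=0, i.e. nearest-neighbor interpolation) so that image and annotation stay aligned.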

2. Lin, Di, et al. "ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation." CVPR 2016 (arXiv:1604.05144).

Abstract: The algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns the network parameters. Excellent results are shown on the PASCAL VOC and PASCAL-CONTEXT datasets.

III. Learning from image-level tags

1. Pathak, Deepak, Philipp Krahenbuhl, and Trevor Darrell. "Constrained Convolutional Neural Networks for Weakly Supervised Segmentation." ICCV 2015.

Abstract: Presents an approach for learning dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of the CNN classifier. Extensive experiments demonstrate the generality of this new learning framework. A simplified illustration of such tag constraints is sketched below.
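The sketch below is not the paper's constrained-optimization procedure; it is only a simplified penalty, written with NumPy, that expresses the same kind of tag-derived constraints (present classes must occupy some area, absent classes should receive no probability mass). The minimum-area threshold is a hypothetical parameter.

```python
import numpy as np

def tag_constraint_penalty(probs, present_tags, min_area=0.05):
    """Simplified penalty expressing image-level tag constraints.

    probs        -- CxHxW array of per-pixel class probabilities (class 0 = background)
    present_tags -- set of class indices that the image-level tags mark as present
    min_area     -- present classes should cover at least this fraction of pixels
    """
    n_classes = probs.shape[0]
    area = probs.reshape(n_classes, -1).mean(axis=1)  # expected area fraction per class

    penalty = 0.0
    for c in range(1, n_classes):                     # skip background
        if c in present_tags:
            penalty += max(0.0, min_area - area[c])   # present class covers too little
        else:
            penalty += area[c]                        # absent class should vanish
    return penalty
```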

2. Vezhnevets, Alexander, and Joachim M. Buhmann. "Towards Weakly Supervised Semantic Segmentation by Means of Multiple Instance and Multitask Learning." CVPR 2010.

Abstract: Semantic Texton Forests (STF) are used as the basic framework and extended to the multiple instance learning (MIL) setting. Multitask learning (MTL) is used to regularize the solution; here, an external task of geometric context estimation is used to improve the semantic segmentation task. Experimental results are reported on the MSRC-21 and VOC 2007 datasets.

IV. Mixing multiple forms of annotation: image tags, bounding boxes, and scribbles

1. Papandreou, George, et al. "Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation." arXiv preprint arXiv:1502.02734, 2015.

Abstract: Studies two settings: (1) weakly annotated training data, such as bounding boxes or image-level labels, and (2) a combination of a few strongly labeled and many weakly labeled images. EM methods are combined with the previously proposed DeepLab segmentation framework. Competitive results are shown on PASCAL VOC 2012. A sketch of the EM-style alternation is given below.
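A minimal sketch of an EM-style alternation for the image-tag case, assuming hypothetical predict/train_step callables. The E-step here simply restricts the per-pixel argmax to the classes the tags allow, which is a crude stand-in for the EM variants described in the paper.

```python
import numpy as np

def em_weak_training(images, tag_sets, predict, train_step, rounds=5):
    """EM-style alternation for image-level tags (illustrative only).

    images     -- list of training images
    tag_sets   -- per-image set of present class indices (0 = background, always allowed)
    predict    -- callable: image -> CxHxW class score map from the current network
    train_step -- callable: (images, label_maps) -> updated predict callable
    """
    for _ in range(rounds):
        latent_labels = []
        for image, tags in zip(images, tag_sets):
            scores = predict(image)                 # CxHxW
            allowed = sorted({0} | set(tags))       # background plus tagged classes only
            # E-step: estimate latent pixel labels, restricted to allowed classes.
            restricted = scores[allowed]            # len(allowed) x H x W
            labels = np.asarray(allowed)[np.argmax(restricted, axis=0)]
            latent_labels.append(labels)
        # M-step: update the network as if the estimated labels were ground truth.
        predict = train_step(images, latent_labels)
    return predict
```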

2. Xu, Jia, Alexander G. Schwing, and Raquel Urtasun. "Learning to Segment under Various Forms of Weak Supervision." CVPR 2015.

Abstract: Proposes a unified approach that incorporates various forms of weak supervision (image-level tags, bounding boxes, and partial labels) to produce a pixel-wise labeling. The task is formulated as a max-margin clustering problem, where knowledge from the supervision is incorporated via constraints that restrict the assignment of pixels to class labels. Experiments show that this method outperforms the state of the art by 12% in per-class accuracy.
