Giovanni's CVPR 2017 Trip

Author: Zongwei Zhou | 周纵苇
Weibo: @MrGiovanni
Email: [email protected]


CVPR official website information:

CVPR accepted papers

CVPR 2017 open access

CVPR program

  • PDF: (link)
  • Word: (link)
  • At-a-Glance Summary: (link)

CVPR Workshop program

  • PDF: (link).
  • Word: (link).
  • At-a-Glance Summary: (link).

People I'd like a photo with...

  • Fei-Fei Li
Honestly, it doesn't really matter whether I get photos with the rest~~
  • Jia Li
  • Kai-ming He
  • Xiu-Shen Wei
  • Hu-chuan Lu
  • Pei-hua Li
  • Yi Sun
  • Hao Su
  • Pheng-Ann Heng
  • Lu Le

Useful online resources

[1] CVPR-2017-Abstracts-Collection
[2] CVPR 2017 paper interpretation collection


My publications

Paper: Fine-tuning Convolutional Neural Networks for Biomedical Image Analysis: Actively and Incrementally
Blog: Active Learning: A Solution to Reduce the Time, Space, and Monetary Costs of Deep Learning
Poster: Fine-tuning Convolutional Neural Networks for Biomedical Image Analysis: Actively and Incrementally


How big a deal CVPR 2017 is

  • 2620 valid submissions
  • 783 papers
  • 215 long and short orals
  • 3 parallel tracks
  • 127 sponsors
  • 859k in sponsorship funding
  • 4950 registrations

From this pile of top papers, I picked out, in order of presentation date or by topic, some that I want to discuss in depth with the authors. The strategy is not to try to cover everything, but to truly understand a few papers related to my own interests.


Saturday, July 22

1- Deep Joint Rain Detection and Removal From a Single Image
Related: deep de-raining, i.e. Deep Joint Rain Detection and Removal from a Single Image
In addition, Professor Jiaying Liu also introduced her de-raining work (Deep Joint Rain Detection and Removal from a Single Image), a multi-task learning approach that detects and removes both "rain streaks" and "rain mist" in an image, so that the main content of the image appears more clearly. This research has important practical value and can be applied to road surveillance in severe weather, autonomous driving, and similar areas. [Academic feast: a full recap of the Microsoft Research Asia CVPR 2017 paper-sharing session]
Sat, July 22, Afternoon, 1500–1700, Kamehameha I
Note: I personally think this is quite interesting work; it could be applied to the artifact noise problem in ultrasound images!

According to the authors, the ground truth is essentially simulated, and then testing is done on real rainy photos; as for how the results are actually evaluated, it turns out to be by eye... uh, how does that even work? This is not aimed at this particular paper but at de-raining research in general: I feel many issues remain unsolved, and it is not so much the algorithms as the definition of the problem itself. How can it be set up this way?

Thought: I have seen a lot of cross-domain problems here; what I want to try is Quality Assessment in this setting. The term Domain Adaptation seems to show up alongside these a lot. I have never worked on it before, but it feels related to Transfer Learning, and I am very interested in Transfer Learning.

Correlational Gaussian Processes for Cross-Domain Visual Recognition
Chengjiang Long, Gang Hua
[pdf] [bibtex]

Joint Geometrical and Statistical Alignment for Visual Domain Adaptation
Jing Zhang, Wanqing Li, Philip Ogunbona
[pdf] [slides] [bibtex]

Deep Transfer Network: Unsupervised Domain Adaptation
Xu Zhang, Felix Xinnan Yu, Shih-Fu Chang, Shengjin Wang
Notes: Deep transfer network: unsupervised domain adaptation

Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation
Hongliang Yan, Yukang Ding, Peihua Li, Qilong Wang, Yong Xu, Wangmeng Zuo
[pdf] [slides] [bibtex]

Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, Dilip Krishnan
[pdf] [Supp] [slides] [bibtex]

Learning an Invariant Hilbert Space for Domain Adaptation
Samitha Herath, Mehrtash Harandi, Fatih Porikli
[pdf] [Supp] [slides] [bibtex]

Domain Adaptation by Mixture of Alignments of Second- or Higher-Order Scatter Tensors
Piotr Koniusz, Yusuf Tas, Fatih Porikli
[pdf] [bibtex]

Deep Hashing Network for Unsupervised Domain Adaptation
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, Sethuraman Panchanathan
[pdf] [Supp] [slides] [bibtex]

A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation
Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai (Helen) Li
[pdf] [slides] [bibtex]

Adversarial Discriminative Domain Adaptation
Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell
[pdf] [slides] [bibtex]

[Deep Learning] Paper guide: unsupervised domain adaptation (Deep Transfer Network: Unsupervised Domain Adaptation)

Understanding the Deep Adaptation Network (DAN) in one article

Transfer learning and domain adaptation

Lower layers: more general features; they transfer very well to other tasks.
Higher layers: more task-specific.

Y Ganin and V Lempitsky, Unsupervised Domain Adaptation by Backpropagation, ICML 2015
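
To make the "lower layers transfer, higher layers are task-specific" point concrete, here is a minimal fine-tuning sketch, assuming a tf.keras setup with an ImageNet-pretrained VGG16 backbone; the choice of which block to unfreeze and the 10-class head are illustrative assumptions, not details from the slide.

```python
# Minimal fine-tuning sketch (assumptions: tf.keras, VGG16 backbone, 10 target classes).
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# ImageNet-pretrained backbone without its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the lower (more general) blocks; only the last block stays trainable.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# New task-specific head on top of the shared features.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Unfreezing progressively more blocks is the usual knob when the target data differs more from ImageNet.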

Thought: multi-task learning shares one trunk and branches out many task-specific tails, so there is no need to train multiple networks for the same dataset (see the sketch after the paper list below).

Weakly Supervised Actor-Action Segmentation via Robust Multi-Task Ranking
Yan Yan, Chenliang Xu, Dawen Cai, Jason J. Corso
[pdf] [bibtex]

Fully-Adaptive Feature Sharing in Multi-Task Networks With Applications in Person Attribute Classification
Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, Rogerio Feris
[pdf] [slides] [bibtex]

Deep Multitask Architecture for Integrated 2D and 3D Human Sensing
Alin-Ionut Popa, Mihai Zanfir, Cristian Sminchisescu
[pdf] [slides] [bibtex]
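
As mentioned above, here is a minimal sketch of the shared-trunk, multiple-heads idea, assuming a tf.keras functional model; the two heads (a classification output and a regression output) and all layer sizes are made-up placeholders.

```python
# Multi-task sketch: one shared trunk, several task-specific heads (all sizes are placeholders).
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(224, 224, 3))

# Shared trunk: features reused by every task.
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Task-specific "tails".
cls_out = layers.Dense(5, activation="softmax", name="classification")(x)
reg_out = layers.Dense(1, name="regression")(x)

model = models.Model(inputs, [cls_out, reg_out])
model.compile(
    optimizer="adam",
    loss={"classification": "categorical_crossentropy", "regression": "mse"},
    loss_weights={"classification": 1.0, "regression": 0.5},
)
```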

Thought: using heatmaps to assist ROI localization.
Many researchers have mentioned this to me: train a network with classification ground truth only, then analyze the heatmaps of the later layers to assist segmentation or detection. Judging from their visualizations, it really does work, and I suspect the theory behind it follows the same reasoning as multi-task learning.
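
A rough sketch of how such heatmaps can be computed, close in spirit to Class Activation Mapping; it assumes a network whose last convolutional layer is followed by global average pooling and a single dense classifier, which is my own assumption, not a detail from these papers.

```python
# CAM-style heatmap sketch (assumes: last conv layer -> GlobalAveragePooling -> Dense classifier).
import numpy as np
from tensorflow.keras import models

def class_activation_map(model, image, conv_layer_name, class_idx):
    """image: array of shape (1, H, W, 3); returns an (h, w) heatmap in [0, 1]."""
    conv_layer = model.get_layer(conv_layer_name)
    # Auxiliary model that exposes the last conv feature maps.
    cam_model = models.Model(model.input, conv_layer.output)
    conv_maps = cam_model.predict(image)[0]                          # (h, w, C)
    # Dense weights connecting the pooled features to the chosen class.
    class_weights = model.layers[-1].get_weights()[0][:, class_idx]  # (C,)
    heatmap = np.tensordot(conv_maps, class_weights, axes=([2], [0]))
    heatmap = np.maximum(heatmap, 0)                                 # keep positive evidence only
    return heatmap / (heatmap.max() + 1e-8)
```

The resulting low-resolution map is typically upsampled to the input size and thresholded to get a coarse segmentation or detection proposal.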

Thought: on the labeling question (tumor vs. non-tumor, dog vs. non-dog, benign, malignant, other), the experimental design is actually quite simple: a two-class classifier (cat vs. dog) and a three-class classifier (cat, dog, other), then analyze how the two classifiers perform on the cat/dog classes. But I would prefer to explain this problem theoretically; experiments alone may not be convincing enough.
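
A toy sketch of that comparison, with made-up features and a simple linear classifier just to show the setup: train a two-class and a three-class model, then score both on the shared classes only.

```python
# Toy comparison: 2-class vs. 3-class classifier evaluated on the shared classes (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 64))                  # placeholder features
y = rng.integers(0, 3, size=3000)                # 0 = cat, 1 = dog, 2 = other

shared = y < 2                                   # cat/dog subset
clf2 = LogisticRegression(max_iter=1000).fit(X[shared], y[shared])   # binary classifier
clf3 = LogisticRegression(max_iter=1000).fit(X, y)                   # three-way classifier

# Compare both classifiers on the same cat/dog samples (a held-out split would be used in practice).
print("2-class accuracy:", accuracy_score(y[shared], clf2.predict(X[shared])))
print("3-class accuracy:", accuracy_score(y[shared], clf3.predict(X[shared])))
```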

2- Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning
Sat, July 22, Morning, 0904, Kamehameha III
Thought: I like this paper because I have recently had some doubts about fine-tuning and hope the authors' work can answer them. How much does fine-tuning actually help on a dataset that is very different from ImageNet, and how should fine-tuning be done to make better use of transfer learning?

3- On Compressing Deep Models by Low Rank and Sparse Decomposition
Sat, July 22, Morning, 0928, Kamehameha III
Note: model compression and storage has always been a difficult topic for me, and this technique could play an important role for 3D CNNs. It probably demands a lot of theory, as well as a lot of coding.

4- Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks.
Sat, July 22, Morning, 1001, Kamehameha III
Thought: use a GAN to do something like transfer learning: find a similar domain, then reuse the feature extractor directly.

5- From Red Wine to Red Tomato: Composition With Context.
Sat, July 22, Afternoon, 1417, Kamehameha III
Note: what a fun title.

6- Fully-Adaptive Feature Sharing in Multi-Task Networks With Applications in Person Attribute Classification.
Sat, July 22, Afternoon, 1354, Kamehameha III
Note: it looks like a progressively growing network architecture (the one Jae mentioned over dinner); the abstract is very well written.

Q: RGB-D image: what's that?
An RGB-D image is simply the combination of an RGB image and its corresponding depth image. A depth image is an image channel in which each pixel encodes the distance between the image plane and the corresponding object in the RGB image.
[What is the difference between depth and RGB-depth images?](https://www.researchgate.net/post/What_is_the_difference_between_depth_and_RGB-depth_images) [accessed Jul 21, 2017]
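
A tiny illustration of that definition, with made-up arrays: an RGB-D image is just the three color channels stacked with a per-pixel depth channel.

```python
# RGB-D = RGB channels + a per-pixel depth channel (arrays below are placeholders).
import numpy as np

rgb = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # color image
depth = np.random.rand(480, 640, 1).astype(np.float32)               # distance per pixel, e.g. meters

rgbd = np.concatenate([rgb.astype(np.float32) / 255.0, depth], axis=-1)
print(rgbd.shape)  # (480, 640, 4): R, G, B, D
```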

7- Diversified Texture Synthesis With Feed-Forward Networks
Sat, July 22, Morning, 0916, Kalākaua Ballroom C

8- Superpixel-Based Tracking-By-Segmentation Using Markov Chains
Sat, July 22, Morning, 1030–1230, Kamehameha I

9- Boundary-Aware Instance Segmentation
Sat, July 22, Morning, 1030–1230, Kamehameha I

10- Model-Based Iterative Restoration for Binary Document Image Compression With Dictionary Learning
Sat, July 22, Morning, 1030–1230, Kamehameha I

11- Learning by Association — A Versatile Semi-Supervised Training Method for Neural Networks
Sat, July 22, Morning, 1030–1230, Kamehameha I

12- Dilated Residual Networks
Sat, July 22, Morning, 1030–1230, Kamehameha I

13- Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction
Sat, July 22, Morning, 1030–1230, Kamehameha I

14- The Incremental Multiresolution Matrix Factorization Algorithm
Sat, July 22, Morning, 1030–1230, Kamehameha I

15- Teaching Compositionality to CNNs
Sat, July 22, Morning, 1030–1230, Kamehameha I

16- Using Ranking-CNN for Age Estimation
Sat, July 22, Morning, 1030–1230, Kamehameha I

17- Accurate Single Stage Detector Using Recurrent Rolling Convolution
Sat, July 22, Morning, 1030–1230, Kamehameha I

18- A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation
Sat, July 22, Morning, 1030–1230, Kamehameha I

19- The Impact of Typicality for Informative Representative Selection
Sat, July 22, Morning, 1030–1230, Kamehameha I

20- Infinite Variational Autoencoder for Semi-Supervised Learning
Sat, July 22, Morning, 1030–1230, Kamehameha I

21- Variational Bayesian Multiple Instance Learning With Gaussian Processes
Sat, July 22, Morning, 1030–1230, Kamehameha I

22- Non-Uniform Subset Selection for Active Learning in Structured Data
Sat, July 22, Morning, 1030–1230, Kamehameha I

23- Pixelwise Instance Segmentation With a Dynamically Instantiated Network
Sat, July 22, Morning, 1030–1230, Kamehameha I

24- Object Detection in Videos With Tubelet Proposal Networks
Sat, July 22, Morning, 1030–1230, Kamehameha I

25- Feature Pyramid Networks for Object Detection
Sat, July 22, Morning, 1030–1230, Kamehameha I

26- Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation
Sat, July 22, Morning, 1030–1230, Kamehameha I

27- Fine-Grained Recognition of Thousands of Object Categories With Single-Example Training
Sat, July 22, Morning, 1030–1230, Kamehameha I

28- Improving Interpretability of Deep Neural Networks With Semantic Information
Sat, July 22, Morning, 1030–1230, Kamehameha I

29- Fast Boosting Based Detection Using Scale Invariant Multimodal Multiresolution Filtered Features
Sat, July 22, Morning, 1030–1230, Kamehameha I

30- Temporal Convolutional Networks for Action Segmentation and Detection
Sat, July 22, Morning, 1030–1230, Kamehameha I

31- Weakly Supervised Actor-Action Segmentation via Robust Multi-Task Ranking
Sat, July 22, Morning, 1030–1230, Kamehameha I

32- Crossing Nets: Combining GANs and VAEs With a Shared Latent Space for Hand Pose Estimation
Sat, July 22, Afternoon, 1330, Kamehameha III

33- Finding Tiny Faces
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

34- Simple Does It: Weakly Supervised Instance and Semantic Segmentation
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

35- Anti-Glare: Tightly Constrained Optimization for Eyeglass Reflection Removal
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

36- Deep Joint Rain Detection and Removal From a Single Image
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

37- Removing Rain From Single Images via a Deep Detail Network
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

38- Large Kernel Matters — Improve Semantic Segmentation by Global Convolutional Network
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

39- Xception: Deep Learning With Depthwise Separable Convolutions
Sat, July 22, Afternoon, 1500–1700, Kamehameha I

40- Feedback Networks

41- Improving Pairwise Ranking for Multi-Label Image Classification

42- Stacked Generative Adversarial Networks

43- More Is Less: A More Complicated Network With Less Inference Complexity

44- CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning

45- Learning Spatial Regularization With Image-Level Supervisions for Multi-Label Image Classification

46- Predictive-Corrective Networks for Action Detection

47- Unified Embedding and Metric Learning for Zero-Exemplar Event Detection

48- Query-Focused Video Summarization: Dataset

Sunday, July 23

1- Zero-Shot Learning - the Good, the Bad and the Ugly
Sun, July 23, Morning, 1000–1200, Kamehameha I
Q: Zero-Shot: what's that?

2- Densely Connected Convolutional Networks
Note: when comparing with ResNet, did they spend effort fine-tuning ResNet, or just take it as-is as the baseline? Download the code; I will definitely use it in future papers.

3- Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

Thought: the "wet dog" work could be applied to Quality Assessment: train a blur perceptron on ImageNet, then apply it to colonoscopy images.
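
A toy sketch of that idea, under my own assumptions about how it could be set up: synthesize sharp/blurred pairs from any natural-image collection and train a small binary "blurry or not" network, which could later be tried on colonoscopy frames.

```python
# Toy blur-detector sketch: synthetic sharp/blurred pairs + a tiny binary CNN (all data is placeholder).
import numpy as np
import cv2
from tensorflow.keras import layers, models

def make_pair(img):
    """Return (sharp, blurred) versions of a single HxWx3 uint8 image."""
    blurred = cv2.GaussianBlur(img, (15, 15), 5)
    return img, blurred

# Placeholder "dataset": random images standing in for natural-image crops.
imgs = [np.random.randint(0, 256, (64, 64, 3), np.uint8) for _ in range(100)]
X, y = [], []
for img in imgs:
    sharp, blur = make_pair(img)
    X += [sharp, blur]
    y += [0, 1]                      # 0 = sharp, 1 = blurry
X = np.stack(X).astype("float32") / 255.0
y = np.array(y)

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2)
```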

4- Inverse Compositional Spatial Transformer Networks


Monday, July 24

1- Global Optimality in Neural Network Training.

Thought: regarding cross-validation, I want to build a learning framework that removes the need for CV, because doing CV in deep learning is a hassle.
Evaluate the Performance Of Deep Learning Models in Keras
Preventing “Overfitting” of Cross-Validation data

We mostly have large datasets, in which case it is not worth the trouble to do something like k-fold cross-validation; we just use a train/valid/test split. Cross-validation becomes useful when the dataset is tiny (like hundreds of examples), but then you typically can't learn a complex model. [Is cross-validation heavily used in deep learning or is it too expensive to be used?]

AFAIK, in deep learning you would normally tend to avoid cross-validation because of the cost of training K different models. Instead of doing cross-validation, you use a random subset of your training data as a hold-out for validation purposes.
For example, the Keras deep learning library (which runs on top of Theano or TensorFlow) lets you pass one of two parameters to the fit function (which performs training):
  • validation_split: the fraction of your training data to hold out for validation.
  • validation_data: a tuple (X, y) to be used for validation; this parameter overrides validation_split. [Is cross-validation heavily used in deep learning or is it too expensive to be used?]
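
For concreteness, a minimal example of those two Keras options; the tiny model and random data are placeholders.

```python
# Hold-out validation in Keras: validation_split vs. validation_data (toy model and data).
import numpy as np
from tensorflow.keras import layers, models

X_train = np.random.rand(1000, 32)
y_train = np.random.randint(0, 2, size=(1000, 1))

model = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(32,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Option 1: hold out 20% of the training data for validation.
model.fit(X_train, y_train, epochs=5, validation_split=0.2)

# Option 2: pass an explicit validation set; this overrides validation_split.
X_val = np.random.rand(200, 32)
y_val = np.random.randint(0, 2, size=(200, 1))
model.fit(X_train, y_train, epochs=5, validation_data=(X_val, y_val))
```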


Salient object detection topic:

1- Instance-Level Salient Object Segmentation.
2- Deep Level Sets for Salient Object Detection.
3- Deeply Supervised Salient Object Detection With Short Connections.
4- What Is and What Is Not a Salient Object? Learning Salient Object Detector by Ensembling Linear Exemplar Regressors.
5- Learning to Detect Salient Objects With Image-Level Supervision.
6- Non-Local Deep Features for Salient Object Detection.


Ultrasound Image Artifact Issue:

1- Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring.
Note: the problem is that they used supervised learning, with blurred images and their corresponding clear images.
2- A Novel Tensor-Based Video Rain Streaks Removal Approach via Utilizing Discriminatively Intrinsic Priors.
3- Deep Joint Rain Detection and Removal From a Single Image
4- Deep Video Deblurring for Hand-Held Cameras
Sat, July 22, Morning, 0904, Kalākaua Ballroom C


GAN

1- Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks
Sat, July 22, Morning, 1001, Kamehameha III
2- Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Sat, July 22, Morning, 1015, Kamehameha III


Technical modifications

1- FC4: Fully Convolutional Color Constancy With Confidence-Weighted Pooling
Sat, July 22, Morning, 1015, Kalākaua Ballroom C
