Deep Image Harmonization
[Understanding Convolution for Semantic Segmentation](https://arxiv.org/pdf/1702.08502.pdf), segmentation
Unsupervised Image-to-Image Translation Networks, unsupervised image-to-image translation
Learning Chained Deep Features and Classifiers for Cascade in Object Detection, Xiaogang
ViP-CNN: A Visual Phrase Reasoning Convolutional Neural Network for Visual Relationship Detection, Xiaogang
Learning Deep Features via Congenerous Cosine Loss for Person Recognition, Yu Liu, Xiaogang Wang
Transferring Face Verification Nets to Pain and Expression Regression, Alan Yuille
Unsupervised Diverse Colorization via Generative Adversarial Networks, Colorization
Revisiting Deep Image Smoothing and Intrinsic Image Decomposition, Qingnan Fan, Baoquan Chen
Deep Feature Interpolation for Image Content Changes, manipulate
https://arxiv.org/pdf/1612.06890.pdf Justin
https://arxiv.org/pdf/1612.07182.pdf NLP
https://arxiv.org/pdf/1612.06933.pdf unsupervised place discovery for visual place classification
https://arxiv.org/pdf/1612.07086.pdf Recurrent Highway Networks with Language CNN for Image Captioning
https://arxiv.org/pdf/1612.07217.pdf Learning Motion Patterns in Videos
https://arxiv.org/pdf/1612.07310.pdf Beyond Holistic Object Recognition: Enriching Image Understanding with Part States
https://arxiv.org/pdf/1612.06851.pdf beyond skip connections (5)
https://arxiv.org/pdf/1612.06573.pdf detecting unexpected obstacles
https://arxiv.org/pdf/1612.06558.pdf semantic segmentation
https://arxiv.org/pdf/1612.06530.pdf grounded visual questions
https://arxiv.org/pdf/1612.06524.pdf 3d human pose estimation
https://arxiv.org/pdf/1612.06371.pdf action recognition (5)
https://arxiv.org/pdf/1612.06321.pdf image retrieval (5)
https://arxiv.org/pdf/1612.06152.pdf few-shot object recognition (5)
https://arxiv.org/pdf/1612.06053.pdf visual tracking
https://arxiv.org/pdf/1612.05877.pdf action recognition
https://arxiv.org/pdf/1612.05872.pdf 3d shape
https://arxiv.org/pdf/1612.05836.pdf EgoTransfer
https://arxiv.org/pdf/1612.05753.pdf Q-learning
https://arxiv.org/pdf/1612.05363.pdf learning Residual Images for face attribute manipulation (5)
https://arxiv.org/pdf/1612.05322.pdf face detection
https://arxiv.org/pdf/1612.05478.pdf video propagation networks
https://arxiv.org/pdf/1612.05424.pdf unsupervised pixel-level domain adaptation with GAN (5)
https://arxiv.org/pdf/1612.05400.pdf deep residual hashing
https://arxiv.org/pdf/1612.05386.pdf vqa-machine
https://arxiv.org/pdf/1612.05086.pdf Coupling Adaptive Batch Sizes with Learning Rates
https://arxiv.org/pdf/1612.04844.pdf The more you know
https://arxiv.org/pdf/1612.04884.pdf action recognition
https://arxiv.org/pdf/1612.04901.pdf zero-shot learning, CMU
https://arxiv.org/pdf/1612.04904.pdf regressing robust model
https://arxiv.org/pdf/1612.04949.pdf Recurrent Image
https://arxiv.org/pdf/1612.05079.pdf SceneNet
https://arxiv.org/pdf/1612.05234.pdf Visual Compiler
https://arxiv.org/pdf/1612.04357.pdf stacked GAN
https://arxiv.org/pdf/1612.04337.pdf fast style transfer
https://arxiv.org/pdf/1612.04229.pdf recurrent generative model
https://arxiv.org/pdf/1612.03928.pdf more attention
https://arxiv.org/pdf/1611.09969v1.pdf
https://arxiv.org/pdf/1612.00496v1.pdf
https://arxiv.org/pdf/1507.02379.pdf
https://arxiv.org/pdf/1611.08402.pdf
https://arxiv.org/pdf/1611.08303.pdf
https://arxiv.org/pdf/1611.08408.pdf
https://arxiv.org/pdf/1511.07125v1.pdf
https://arxiv.org/pdf/1611.08583.pdf
https://arxiv.org/pdf/1611.08986.pdf
https://arxiv.org/pdf/1611.09078.pdf
https://arxiv.org/pdf/1611.09325.pdf
https://arxiv.org/pdf/1611.09326.pdf
https://arxiv.org/pdf/1611.08588.pdf
https://arxiv.org/pdf/1612.03809.pdf
https://arxiv.org/pdf/1612.03236.pdf
https://arxiv.org/pdf/1612.03242.pdf
https://arxiv.org/pdf/1612.03268.pdf
https://arxiv.org/pdf/1612.03365.pdf
https://arxiv.org/pdf/1612.03550.pdf
https://arxiv.org/pdf/1612.03557.pdf
https://arxiv.org/pdf/1612.03628.pdf
https://arxiv.org/pdf/1612.03630.pdf
https://arxiv.org/pdf/1612.03663.pdf
https://arxiv.org/pdf/1612.03897.pdf
https://arxiv.org/pdf/1612.03144.pdf
https://arxiv.org/pdf/1612.03052.pdf
https://arxiv.org/pdf/1612.03129.pdf
https://arxiv.org/pdf/1612.02372.pdf
https://arxiv.org/pdf/1612.02297.pdf
https://arxiv.org/pdf/1612.02287.pdf
https://arxiv.org/pdf/1612.02177.pdf
https://arxiv.org/pdf/1612.01635.pdf
https://arxiv.org/pdf/1612.01895.pdf
https://arxiv.org/pdf/1612.01958.pdf
https://arxiv.org/pdf/1612.01981.pdf
https://arxiv.org/pdf/1612.01991.pdf
https://arxiv.org/pdf/1612.01887.pdf
https://arxiv.org/pdf/1612.01202.pdf
https://arxiv.org/pdf/1612.01465.pdf
https://arxiv.org/pdf/1612.01380.pdf
https://arxiv.org/pdf/1612.01079.pdf
https://arxiv.org/pdf/1612.01057.pdf
https://arxiv.org/pdf/1612.01051.pdf
https://arxiv.org/pdf/1612.00991.pdf
https://arxiv.org/pdf/1612.00901.pdf
https://arxiv.org/pdf/1612.01230.pdf
https://arxiv.org/pdf/1612.01294.pdf
https://arxiv.org/pdf/1612.01452.pdf
https://arxiv.org/pdf/1612.01479.pdf