Author: chen_h
WeChat ID & QQ: 862251340
WeChat Official Account: coderpai
1.【Code】simGAN_NYU_Hand
Summary:
Another TensorFlow implementation of Learning from Simulated and Unsupervised Images through Adversarial Training.
Thanks to TaeHoon Kim, I was able to run simGAN, which generates a refined synthetic eye dataset. This is just another version of his code that can generate NYU hand datasets.
The structure of the refiner/discriminator networks is changed as described in the Apple paper. The only code added in this version is ./data/hand_data.py. The rest of the code runs in the same way as the original version. To set up the environment (or to run the UnityEyes dataset), please follow the instructions in this link.
Original link: https://github.com/shinseung428/simGAN_NYU_Hand
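For readers unfamiliar with SimGAN, the refiner mentioned above is a fully convolutional ResNet that maps a synthetic image to a refined image of the same size. Below is a minimal sketch of such a refiner using the tf.keras API; the layer sizes, block count, and input shape are illustrative assumptions and may differ from the linked repository, which uses the lower-level TensorFlow API.

```python
# Minimal sketch of a SimGAN-style refiner (illustrative, not the repo's code).
import tensorflow as tf
from tensorflow.keras import layers

def resnet_block(x, filters=64):
    """Two 3x3 convolutions with an identity shortcut."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.ReLU()(layers.Add()([shortcut, x]))

def build_refiner(input_shape=(224, 224, 1), num_blocks=4):
    """Fully convolutional refiner: synthetic image in, refined image out."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    for _ in range(num_blocks):
        x = resnet_block(x, 64)
    outputs = layers.Conv2D(input_shape[-1], 1, padding="same",
                            activation="tanh")(x)
    return tf.keras.Model(inputs, outputs, name="refiner")

refiner = build_refiner()
refiner.summary()
```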
2.【Video】NIPS 2016 Workshop on Adversarial Training
Summary:
The high-quality videos of the NIPS 2016 workshop on adversarial training are now available.
Original link: https://www.youtube.com/playlist?list=PLJscN9YDD1buxCitmej1pjJkR5PMhenTF
3.【Code】High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis
Summary:
This is the code for High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis. Given an image, we use the content and texture networks to jointly infer the missing region. This repository contains the pre-trained model for the content network and the joint optimization code, including the demo to run example images. The code is adapted from the Context Encoders and CNNMRF. Please contact Harry Yang for questions regarding the paper or the code. Note that the code is for research purposes only.
Original link: https://github.com/leehomyc/High-Res-Neural-Inpainting
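As a rough illustration of the joint optimization described above, the sketch below combines a content term (keeping the missing region close to the content network's prediction), a texture term (neural-patch matching on mid-layer CNN features), and a total-variation smoothness term. The weights, the texture_loss_fn callable, and the exact formulation are my own placeholders, not the repository's Torch code.

```python
# Sketch of the joint inpainting objective (illustrative assumptions only).
import tensorflow as tf

def joint_inpainting_loss(x, content_pred, hole_mask, texture_loss_fn,
                          alpha=1e-2, beta=1e-4):
    """x: current estimate of the full image (the variable being optimized),
    content_pred: the content network's prediction for the image,
    hole_mask: 1 inside the missing region, 0 elsewhere,
    texture_loss_fn: hypothetical callable for the neural-patch matching
    term computed on mid-layer CNN features (e.g. VGG relu3_1/relu4_1)."""
    # Content (holistic) term: stay close to the content network's output
    # inside the hole.
    content_term = tf.reduce_mean(hole_mask * tf.square(x - content_pred))
    # Texture term: match neural patches inside the hole to patches outside.
    texture_term = texture_loss_fn(x, hole_mask)
    # Total-variation term for local smoothness.
    tv_term = tf.reduce_mean(tf.image.total_variation(x))
    return content_term + alpha * texture_term + beta * tv_term
```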
4.【Blog】Comprehensive tutorial — deep learning to diagnose skin cancer with the accuracy of a dermatologist
Summary:
Waya.ai recently open-sourced the core components of its skin cancer diagnostic software and made its datasets publicly available. The objective of this effort is to release a free and open-source product in early May that has been validated to diagnose skin cancer with dermatologist-level accuracy or better.
Original link: https://medium.com/waya-ai/ground-up-hands-on-deep-learning-tutorial-diagnosing-skin-cancer-w-dermatologist-level-61a90fe9f269#.wiiz0vqwj
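The tutorial's released code is not reproduced here, but the usual approach to this kind of task is transfer learning with a pretrained CNN; the sketch below shows that pattern in tf.keras. The choice of ResNet50, the input size, and the binary (malignant vs. benign) output head are illustrative assumptions, not Waya.ai's actual architecture.

```python
# Transfer-learning sketch for binary skin-lesion classification
# (illustrative only; not the tutorial's released code).
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained feature extractor first

model = tf.keras.Sequential([
    base,
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # malignant vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```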
5.【Paper & Code】Wide Residual Networks
Summary:
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet.
Original link: https://arxiv.org/pdf/1605.07146.pdf
Code link: https://github.com/szagoruyko/wide-residual-networks
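To make the "decrease depth, increase width" idea concrete, here is a minimal tf.keras sketch of a pre-activation residual block whose channel count is multiplied by a widening factor k, in the spirit of a WRN-16-8. It follows the abstract's description rather than the linked Torch implementation, and the dropout rate and filter counts are illustrative assumptions.

```python
# Sketch of a wide residual block (illustrative, not the authors' Torch code).
import tensorflow as tf
from tensorflow.keras import layers

def wide_basic_block(x, base_filters, k=8, stride=1, dropout=0.3):
    """Pre-activation residual block with width base_filters * k."""
    filters = base_filters * k
    h = layers.BatchNormalization()(x)
    h = layers.ReLU()(h)
    h = layers.Conv2D(filters, 3, strides=stride, padding="same")(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU()(h)
    h = layers.Dropout(dropout)(h)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    # Projection shortcut when the shape changes (stride or channel count).
    if stride != 1 or x.shape[-1] != filters:
        x = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.Add()([x, h])

# Example: the first wide block of a WRN-16-8-style network on CIFAR input.
inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding="same")(inputs)
x = wide_basic_block(x, base_filters=16, k=8)
model = tf.keras.Model(inputs, x)
```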