[1] A Classification-Based Study of Covariate Shift in GAN Distributions
Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry
Massachusetts Institute of Technology
http://proceedings.mlr.press/v80/santurkar18a/santurkar18a.pdf
Highlights of this paper:
The overall procedure is as follows:
A comparison of several methods is shown below:
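The paper's central tool is a classification-based probe of a GAN's output distribution: train a classifier on GAN-generated samples (labeled by their conditioning class) and measure how well it transfers to real held-out data; a large gap versus a classifier trained on real data signals covariate shift and lost diversity. Below is a minimal sketch of such a probe, not the authors' code: the arrays are random placeholders and scikit-learn's LogisticRegression stands in for the CNN classifiers used in the paper.

# Minimal sketch of a classification-based probe of a GAN distribution.
# The data arrays are hypothetical placeholders for image features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

real_train_x, real_train_y = rng.normal(size=(1000, 64)), rng.integers(0, 10, 1000)
gan_train_x,  gan_train_y  = rng.normal(size=(1000, 64)), rng.integers(0, 10, 1000)
real_test_x,  real_test_y  = rng.normal(size=(500, 64)),  rng.integers(0, 10, 500)

clf_real = LogisticRegression(max_iter=1000).fit(real_train_x, real_train_y)
clf_gan  = LogisticRegression(max_iter=1000).fit(gan_train_x,  gan_train_y)

acc_real = clf_real.score(real_test_x, real_test_y)  # baseline: real -> real
acc_gan  = clf_gan.score(real_test_x, real_test_y)   # probe: GAN -> real
print(f"real->real: {acc_real:.3f}  GAN->real: {acc_gan:.3f}  gap: {acc_real - acc_gan:.3f}")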
[2] Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing
Davide Bacciu, Federico Errica, Alessio Micheli
University of Pisa
http://proceedings.mlr.press/v80/bacciu18a/bacciu18a.pdf
An example graph is shown below:
An illustration of CGMM is shown below:
An overview of CGMM training is shown below:
Dataset statistics are as follows:
A comparison of several models is shown below:
Further dataset statistics are shown below:
A comparison of several methods is shown below:
The effect of the number of layers on CGMM is illustrated below:
Code:
https://github.com/diningphil/CGMM
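CGMM stacks unsupervised probabilistic layers that assign each node a discrete latent state; graph-level prediction is then done by feeding the frequencies of those states (a graph "fingerprint") to a standard supervised classifier. The sketch below illustrates only that last step under simplifying assumptions: the state assignments are random placeholders standing in for the output of the CGMM layers, which are implemented in the repository above.

# Minimal sketch: graph-level classification from per-node CGMM states.
# Node states here are random placeholders, not real CGMM inferences.
import numpy as np
from sklearn.svm import SVC

num_states, num_layers = 20, 4
rng = np.random.default_rng(0)

def graph_fingerprint(node_states: np.ndarray) -> np.ndarray:
    """node_states: (num_nodes, num_layers) int array of latent state ids.
    Returns the concatenated, normalized per-layer state histogram."""
    hists = [np.bincount(node_states[:, l], minlength=num_states) / len(node_states)
             for l in range(num_layers)]
    return np.concatenate(hists)

# Hypothetical dataset: 100 graphs with varying node counts and binary labels.
graphs = [rng.integers(0, num_states, size=(rng.integers(10, 40), num_layers))
          for _ in range(100)]
labels = rng.integers(0, 2, size=100)

features = np.stack([graph_fingerprint(g) for g in graphs])
clf = SVC().fit(features[:80], labels[:80])
print("held-out accuracy:", clf.score(features[80:], labels[80:]))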
[3] Deep One-Class Classification
Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Lucas Deecke, Shoaib A. Siddiqui, Alexander Binder, Emmanuel Müller, Marius Kloft
Hasso Plattner Institute, TU Kaiserslautern, TU Berlin, University of Edinburgh, German Research Center for Artificial Intelligence (DFKI GmbH), Singapore University of Technology and Design
http://proceedings.mlr.press/v80/ruff18a/ruff18a.pdf
An example network architecture is shown below:
A comparison of several methods is shown below:
Code:
https://github.com/lukasruff/Deep-SVDD
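Deep SVDD trains a network to map normal data close to a fixed center c in output space, minimizing the mean squared distance to c plus weight decay; the distance to c then serves as the anomaly score at test time. A minimal PyTorch sketch of this objective follows; the encoder, data, and hyperparameters are illustrative placeholders, and the optimizer's weight_decay stands in for the paper's explicit weight regularizer.

# Minimal sketch of the One-Class Deep SVDD objective (not the official code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small encoder without bias terms (bias terms admit a trivial "collapsed"
# solution that maps every input exactly onto the center c).
phi = nn.Sequential(nn.Linear(32, 64, bias=False), nn.ReLU(),
                    nn.Linear(64, 16, bias=False))

x_train = torch.randn(512, 32)          # placeholder "normal" training data

# Fix the hypersphere center c to the mean of the initial network outputs.
with torch.no_grad():
    c = phi(x_train).mean(dim=0)

opt = torch.optim.Adam(phi.parameters(), lr=1e-3, weight_decay=1e-5)
for _ in range(100):
    opt.zero_grad()
    dist = ((phi(x_train) - c) ** 2).sum(dim=1)  # squared distance to center
    loss = dist.mean()                           # One-Class Deep SVDD loss
    loss.backward()
    opt.step()

# Anomaly score of a new point = its distance to the center.
score = ((phi(torch.randn(1, 32)) - c) ** 2).sum(dim=1)
print("anomaly score:", score.item())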
[4] Not All Samples Are Created Equal: Deep Learning with Importance Sampling
Angelos Katharopoulos, François Fleuret
Idiap Research Institute, EPFL
http://proceedings.mlr.press/v80/katharopoulos18a/katharopoulos18a.pdf
The algorithm's pseudocode is given in the paper; a simplified sketch appears after the code link below.
Code:
https://github.com/idiap/importance-sampling
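The method samples training examples non-uniformly, with probability proportional to an upper bound on each sample's gradient norm, and keeps SGD unbiased by reweighting every sampled example with 1/(N * p_i). The sketch below illustrates this reweighted-sampling loop using the per-sample loss as a stand-in importance score; the full method (gradient-norm bound, variance-based switching criterion) is in the repository above.

# Minimal sketch of importance sampling for SGD with bias correction.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss(reduction="none")

x = torch.randn(1024, 20)                  # placeholder dataset
y = (x[:, 0] > 0).long()

for _ in range(50):
    # 1) Score all samples cheaply and turn scores into a sampling distribution.
    with torch.no_grad():
        scores = loss_fn(model(x), y) + 1e-6
        probs = scores / scores.sum()

    # 2) Draw a mini-batch according to the importance distribution.
    idx = torch.multinomial(probs, num_samples=64, replacement=True)

    # 3) Unbiased SGD step: reweight each sample by 1 / (N * p_i).
    weights = 1.0 / (len(x) * probs[idx])
    batch_loss = (weights * loss_fn(model(x[idx]), y[idx])).mean()

    opt.zero_grad()
    batch_loss.backward()
    opt.step()

print("final mean loss:", loss_fn(model(x), y).mean().item())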
[5] Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington
http://proceedings.mlr.press/v80/xiao18a/xiao18a.pdf
An accuracy comparison across different network depths is shown below:
Pseudocode for constructing the 2D orthogonal kernels is given in the paper; a simplified sketch follows.
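The paper's delta-orthogonal initialization places an orthogonal matrix at the center spatial tap of the convolution kernel and zeros everywhere else, so at initialization the convolution acts as an orthogonal map on the channels. A minimal numpy sketch follows, assuming equal input and output channel counts; it is not the authors' implementation.

# Minimal sketch of a delta-orthogonal 2D convolution kernel.
import numpy as np

def delta_orthogonal_kernel(k: int, channels: int, rng=None) -> np.ndarray:
    """Return a (k, k, channels, channels) kernel, k odd, with a Haar-random
    orthogonal matrix at the center spatial position and zeros elsewhere."""
    assert k % 2 == 1, "kernel size must be odd so a center tap exists"
    rng = np.random.default_rng(rng)
    # Haar-random orthogonal matrix via QR of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(channels, channels)))
    q *= np.sign(np.diag(r))            # sign fix so the distribution is Haar
    kernel = np.zeros((k, k, channels, channels))
    kernel[k // 2, k // 2] = q
    return kernel

w = delta_orthogonal_kernel(3, 64, rng=0)
center = w[1, 1]
print(np.allclose(center.T @ center, np.eye(64)))   # True: orthogonal center tap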
[6] A Semantic Loss Function for Deep Learning with Symbolic Knowledge
Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, Guy Van den Broeck
University of California Los Angeles, Peking University
http://proceedings.mlr.press/v80/xu18h/xu18h.pdf
An example network architecture is shown below:
A comparison of several methods is shown below:
Code:
https://github.com/UCLA-StarAI/Semantic-Loss
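The semantic loss ties a network's probabilistic outputs to a logical constraint: it is the negative log-probability that independent Bernoulli outputs p satisfy the constraint. For the exactly-one constraint used in the paper's semi-supervised experiments this is -log(sum_i p_i * prod_{j != i} (1 - p_j)). A minimal PyTorch sketch of that special case follows; the logits are placeholder data, not the authors' code.

# Minimal sketch of the semantic loss for the "exactly-one" constraint.
import torch

def exactly_one_semantic_loss(probs: torch.Tensor) -> torch.Tensor:
    """probs: (batch, n) sigmoid outputs in (0, 1)."""
    eps = 1e-7
    log_not = torch.log(1 - probs + eps)                 # log(1 - p_j)
    log_all_not = log_not.sum(dim=1, keepdim=True)       # sum_j log(1 - p_j)
    # log( p_i * prod_{j != i} (1 - p_j) ) for every i, then log-sum-exp over i.
    per_state = torch.log(probs + eps) + log_all_not - log_not
    return -torch.logsumexp(per_state, dim=1).mean()

logits = torch.randn(8, 10, requires_grad=True)          # placeholder unlabeled batch
loss = exactly_one_semantic_loss(torch.sigmoid(logits))
loss.backward()
print(loss.item())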