I. NeurIPS 2019 Paper Categories - Adversarial Examples
1.1.Adversarial Examples Are Not Bugs, They Are Features
1.2.Metric Learning for Adversarial Robustness
1.3.Adversarial Self-Defense for Cycle-Consistent GANs
1.4.Model Compression with Adversarial Robustness: A Unified Optimization Framework
1.5.A New Defense Against Adversarial Images: Turning a Weakness into a Strength
1.6.Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
1.7.Fooling Neural Network Interpretations via Adversarial Model Manipulation
1.8.Adversarial Training and Robustness for Multiple Perturbations
1.9.Lower Bounds on Adversarial Robustness from Optimal Transport
1.10.Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks
1.11.Certified Adversarial Robustness with Additive Gaussian Noise
1.12.Functional Adversarial Attacks
1.13.Cross-Modal Learning with Adversarial Samples
1.14.Improving Black-box Adversarial Attacks with a Transfer-based Prior
1.15.Unlabeled Data Improves Adversarial Robustness
1.16.Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
1.17.Theoretical evidence for adversarial robustness through randomization
1.18.Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
1.19.Are Labels Required for Improving Adversarial Robustness?
1.20.Provably robust boosted decision stumps and trees against adversarial attacks
1.21.On Robustness to Adversarial Examples and Polynomial Optimization
1.22.Adversarial Robustness through Local Linearization
1.23.Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
1.24.Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness
1.25.On Relating Explanations and Adversarial Examples
II. NeurIPS 2019 Paper Categories - Meta-Learning
2.1.Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
2.2.Meta-Learning with Implicit Gradients
2.3.Learning to Propagate for Graph Meta-Learning
2.4.Efficient Meta Learning via Minibatch Proximal Update
2.5.Self-Supervised Generalisation with Meta Auxiliary Learning
2.6.Meta-Learning Representations for Continual Learning
2.7.Adaptive Gradient-Based Meta-Learning Methods
2.8.SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies
2.9.Reconciling meta-learning and continual learning with online mixtures of tasks
2.10.Guided Meta-Policy Search
2.11.Systematic generalization through meta sequence-to-sequence learning
2.12.Meta Learning with Relational Information for Short Sequences
2.13.Unsupervised Meta Learning for Few-Shot Image Classification
2.14.Unsupervised Curricula for Visual Meta-Reinforcement Learning
2.15.Meta-Inverse Reinforcement Learning with Probabilistic Context Variables
2.16.Neural Relational Inference with Fast Modular Meta-learning
2.17.MetaInit: Initializing learning by learning to initialize
2.18.Online-Within-Online Meta-Learning
2.19.Metalearned Neural Memory
I. NeurIPS 2019 Paper Categories - Adversarial Examples
1.1.Adversarial Examples Are Not Bugs, They Are Features
This paper explores why adversarial examples exist and argues that they arise from the presence of non-robust features: features learned from patterns in the data that are highly predictive, yet brittle and incomprehensible to humans.
LINK
https://arxiv.org/pdf/1905.02175.pdf
ABSTRACT Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
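To make the "non-robust features" claim concrete, the sketch below illustrates the kind of dataset construction the abstract alludes to: each image is nudged toward a randomly chosen target class with a targeted attack and then relabeled with that target, so only the perturbation correlates with the new label. This is a minimal illustration under assumed names (`model`, `loader`, attack hyperparameters), not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of building a "non-robust"
# training set: perturb each image toward a random target class and relabel it.
import torch
import torch.nn.functional as F

def targeted_pgd_l2(model, x, target, eps=0.5, alpha=0.1, steps=20):
    """Move x toward `target` under the model with an L2-bounded perturbation."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)
        grad, = torch.autograd.grad(loss, delta)
        g = grad.view(grad.size(0), -1)
        g = g / (g.norm(dim=1, keepdim=True) + 1e-12)            # per-example normalization
        delta = (delta - alpha * g.view_as(delta)).detach()      # descend: make target likely
        delta = delta.renorm(p=2, dim=0, maxnorm=eps).requires_grad_(True)
    return (x + delta).detach()

def build_nonrobust_dataset(model, loader, num_classes=10):
    xs, ys = [], []
    for x, y in loader:
        t = (y + torch.randint(1, num_classes, y.shape)) % num_classes  # target != y
        xs.append(targeted_pgd_l2(model, x, t))
        ys.append(t)                                             # relabel with the target class
    return torch.cat(xs), torch.cat(ys)
```

A standard classifier trained on such a relabeled set and then generalizing to the clean test set is the kind of evidence the paper uses for the predictiveness of non-robust features.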
1.2.Metric Learning for Adversarial Robustness
This paper takes a metric-learning perspective, regularizing the representation space under attack in order to make the classifier more robust.
LINK
https://arxiv.org/pdf/1909.00900.pdf
ABSTRACT Deep networks are well-known to be fragile to adversarial attacks. Using several standard image datasets and established attack mechanisms, we conduct an empirical analysis of deep representations under attack, and find that the attack causes the internal representation to shift closer to the “false” class. Motivated by this observation, we propose to regularize the representation space under attack with metric learning in order to produce more robust classifiers. By carefully sampling examples for metric learning, our learned representation not only increases robustness, but also can detect previously unseen adversarial samples. Quantitative experiments show improvement of robustness accuracy by up to 4% and detection efficiency by up to 6% according to Area Under Curve (AUC) score over baselines.
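As a rough illustration of "regularizing the representation space under attack with metric learning", the sketch below combines a cross-entropy loss on adversarial examples with a triplet loss whose anchor is the attacked representation. `model.features`, `model.classify`, the `attack` helper, and the positive/negative sampling are assumptions rather than the paper's exact scheme.

```python
# Hedged sketch: cross-entropy on adversarial examples plus a triplet loss on
# representations (anchor = attacked sample, positive = clean same-class
# sample, negative = clean other-class sample). Placeholder names throughout.
import torch
import torch.nn.functional as F

triplet = torch.nn.TripletMarginLoss(margin=1.0)

def metric_adv_loss(model, x, y, x_pos, x_neg, attack, lam=1.0):
    x_adv = attack(model, x, y)                  # adversarial version of x
    f_adv = model.features(x_adv)                # anchor: representation under attack
    f_pos = model.features(x_pos)                # positive: clean sample with label y
    f_neg = model.features(x_neg)                # negative: clean sample, different label
    ce = F.cross_entropy(model.classify(f_adv), y)
    return ce + lam * triplet(f_adv, f_pos, f_neg)
```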
1.3.Adversarial Self-Defense for Cycle-Consistent GANs
This paper investigates how the self-attacking behavior of unsupervised mapping methods affects their performance and provides two defense techniques.
LINK
https://arxiv.gg363.site/pdf/1908.01517.pdf
ABSTRACT The goal of unsupervised image-to-image translation is to map images from one domain to another without the ground truth correspondence between the two domains. State-of-the-art methods learn the correspondence using large numbers of unpaired examples from both domains and are based on generative adversarial networks. In order to preserve the semantics of the input image, the adversarial objective is usually combined with a cycle-consistency loss that penalizes incorrect reconstruction of the input image from the translated one. However, if the target mapping is many-to-one, e.g. aerial photos to maps, such a restriction forces the generator to hide information in low-amplitude structured noise that is undetectable by the human eye or by the discriminator. In this paper, we show how such self-attacking behavior of unsupervised translation methods affects their performance and provide two defense techniques. We perform a quantitative evaluation of the proposed techniques and show that making the translation model more robust to the self-adversarial attack increases its generation quality and reconstruction reliability and makes the model less sensitive to low-amplitude perturbations.
1.4.Model Compression with Adversarial Robustness: A Unified Optimization Framework
Shupeng Gui (University of Rochester) · Haotao N Wang (Texas A&M University) · Haichuan Yang (University of Rochester) · Chen Yu (University of Rochester) · Zhangyang Wang (TAMU) · Ji Liu (University of Rochester, Tencent AI lab)
1.5.A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Shengyuan Hu (Cornell University) · Tao Yu (Cornell University) · Chuan Guo (Cornell University) · Wei-Lun Chao (Cornell University, Ohio State University (OSU)) · Kilian Weinberger (Cornell University)
1.6.Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
This paper introduces a feature scattering-based approach for improving model robustness.
LINK
https://arxiv.gg363.site/pdf/1907.10764.pdf
ABSTRACT We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches leverage a supervised scheme (either targeted or non-targeted) in generating attacks for training, which typically suffer from issues such as label leaking as noted in recent works. Differently, the proposed approach generates adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. More importantly, this new approach generates perturbed images in a collaborative fashion, taking the inter-sample relationships into consideration. We conduct analysis on model robustness and demonstrate the effectiveness of the proposed approach through extensive experiments on different datasets compared with state-of-the-art approaches.
1.7.Fooling Neural Network Interpretations via Adversarial Model Manipulation
This paper proposes two types of fooling and finds that current state-of-the-art interpreters, such as LRP and Grad-CAM, can be easily fooled.
LINK
https://arxiv.gg363.site/pdf/1902.02041.pdf
ABSTRACT We ask whether the neural network interpretation methods can be fooled via adversarial model manipulation, which is defined as a model fine-tuning step that aims to radically alter the explanations without hurting the accuracy of the original models, e.g., VGG19, ResNet50, and DenseNet121. By incorporating the interpretation results directly in the penalty term of the objective function for fine-tuning, we show that the state-of-the-art saliency map based interpreters, e.g., LRP, Grad-CAM, and SimpleGrad, can be easily fooled with our model manipulation. We propose two types of fooling, Passive and Active, and demonstrate such foolings generalize well to the entire validation set as well as transfer to other interpretation methods. Our results are validated by both visually showing the fooled explanations and reporting quantitative metrics that measure the deviations from the original explanations. We claim that the stability of a neural network interpretation method with respect to our adversarial model manipulation is an important criterion to check when developing robust and reliable neural network interpretation methods.
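A heavily simplified sketch of the "incorporate the interpretation in the fine-tuning penalty" idea, using a plain input-gradient saliency (the paper also manipulates LRP and Grad-CAM): keep the classification loss so accuracy is preserved, and add a term that pushes saliency mass out of a chosen region. The `mask` and the weighting are hypothetical.

```python
# Hedged sketch of adversarial model manipulation with an input-gradient
# saliency; requires double backprop (create_graph=True) so the saliency
# penalty is differentiable with respect to the model parameters.
import torch
import torch.nn.functional as F

def manipulation_loss(model, x, y, mask, lam=10.0):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.view(-1, 1)).sum()
    sal, = torch.autograd.grad(score, x, create_graph=True)        # simple-gradient saliency
    fooled = (sal.abs() * mask).sum() / (sal.abs().sum() + 1e-12)   # saliency mass inside mask
    return F.cross_entropy(logits, y) + lam * fooled                # keep accuracy, move saliency
```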
1.8.Adversarial Training and Robustness for Multiple Perturbations
Adversarial defenses are usually effective against only a single type of perturbation. This paper attempts to design a defense that is effective against multiple perturbation types.
LINK
https://arxiv.gg363.site/pdf/1904.13000.pdf
ABSTRACT Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small ℓ∞-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model’s vulnerability. Our aim is to understand the reasons underlying this robustness trade-off, and to train models that are simultaneously robust to multiple perturbation types. We prove that a trade-off in robustness to different types of ℓp-bounded and spatial perturbations must exist in a natural and simple statistical setting. We corroborate our formal analysis by demonstrating similar robustness trade-offs on MNIST and CIFAR10. Building upon new multi-perturbation adversarial training schemes, and a novel efficient attack for finding ℓ1-bounded adversarial examples, we show that no model trained against multiple attacks achieves robustness competitive with that of models trained on each attack individually. In particular, we uncover a pernicious gradient-masking phenomenon on MNIST, which causes adversarial training with first-order ℓ∞, ℓ1 and ℓ2 adversaries to achieve merely 50% accuracy. Our results question the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.
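The "multi-perturbation adversarial training schemes" mentioned above include training on the worst perturbation type per example. A minimal sketch of that "max" strategy follows; the attack helpers (`pgd_linf`, `pgd_l2`, `pgd_l1`) are assumed to exist and the scheme is a simplification of the paper's.

```python
# Minimal sketch of max-over-attack-types adversarial training: craft one
# adversarial example per perturbation type and train on whichever is worst.
import torch
import torch.nn.functional as F

def max_over_attacks_step(model, opt, x, y, attacks):
    losses, candidates = [], []
    for attack in attacks:                       # e.g. [pgd_linf, pgd_l2, pgd_l1]
        x_adv = attack(model, x, y)
        candidates.append(x_adv)
        losses.append(F.cross_entropy(model(x_adv), y, reduction='none'))
    worst = torch.stack(losses).argmax(dim=0)    # per-example worst attack type
    x_train = torch.stack(candidates)[worst, torch.arange(x.size(0))]
    opt.zero_grad()
    F.cross_entropy(model(x_train), y).backward()
    opt.step()
```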
1.9.Lower Bounds on Adversarial Robustness from Optimal Transport
Arjun Nitin Bhagoji (Princeton University) · Daniel Cullina (Princeton University) · Prateek Mittal (Princeton University)
1.10.Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks
Gunjan Verma (ARL) · Ananthram Swami (Army Research Laboratory, Adelphi)
1.11.Certified Adversarial Robustness with Additive Gaussian Noise
LINK
https://arxiv.gg363.site/pdf/1809.03113.pdf
ABSTRACT The existence of adversarial data examples has drawn significant attention in the deep-learning community; such data are seemingly minimally perturbed relative to the original data, but lead to very different outputs from a deep-learning algorithm. Although a significant body of work has been devoted to developing defense models, most such models are heuristic and are often vulnerable to adaptive attacks. Defensive methods that provide theoretical robustness guarantees have been studied intensively, yet most fail to obtain non-trivial robustness when a large-scale model and data are present. To address these limitations, we introduce a framework that is scalable and provides certified bounds on the norm of the input manipulation for constructing adversarial examples. We establish a connection between robustness against adversarial perturbation and additive random noise, and propose a training strategy that can significantly improve the certified bounds. Our evaluation on MNIST, CIFAR-10 and ImageNet suggests that our method is scalable to complicated models and large data sets, while providing competitive robustness to state-of-the-art provable defense methods.
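Two ingredients the abstract refers to, training with additive Gaussian noise and predicting with a noise-smoothed classifier, are sketched below. The certified bound itself (which the paper derives) is omitted, and `sigma`, `n_samples`, and the model interface are placeholders.

```python
# Hedged sketch: Gaussian-noise data augmentation during training, and a
# majority-vote prediction over noisy copies at test time. No certification
# is computed here.
import torch
import torch.nn.functional as F

def noisy_training_loss(model, x, y, sigma=0.25):
    return F.cross_entropy(model(x + sigma * torch.randn_like(x)), y)

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
    counts = torch.zeros(num_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)              # x is a single image [C, H, W]
        counts[model(noisy.unsqueeze(0)).argmax(dim=1).item()] += 1
    return int(counts.argmax())
```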
1.12.Functional Adversarial Attacks
Cassidy Laidlaw (University of Maryland) · Soheil Feizi (University of Maryland, College Park)
1.13.Cross-Modal Learning with Adversarial Samples
Chao Li (Xidian University) · Shangqian Gao (University of Pittsburgh) · Cheng Deng (Xidian University) · De Xie (Xidian University) · Wei Liu (Tencent AI Lab)
1.14.Improving Black-box Adversarial Attacks with a Transfer-based Prior
LINK
https://arxiv.gg363.site/pdf/1906.06919.pdf
ABSTRACT We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients. Previous methods tried to approximate the gradient either by using a transfer gradient of a surrogate white-box model, or based on the query feedback. However, these methods often suffer from low attack success rates or poor query efficiency since it is non-trivial to estimate the gradient in a high-dimensional space with limited information. To address these problems, we propose a prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes advantage of a transfer-based prior and the query information simultaneously. The transfer-based prior given by the gradient of a surrogate model is appropriately integrated into our algorithm by an optimal coefficient derived by a theoretical analysis. Extensive experiments demonstrate that our method requires much fewer queries to attack black-box models with higher success rates compared with the alternative state-of-the-art methods.
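A much-simplified sketch of combining a transfer-based prior with query-based gradient estimation: average random finite-difference probes of the black-box loss and mix the result with the surrogate model's gradient. The real P-RGF derives an optimal mixing coefficient; here `lam`, `query_loss`, and `prior_grad` are placeholders.

```python
# Hedged sketch of a prior-guided, query-based gradient estimate (simplified,
# fixed mixing coefficient rather than the paper's optimal one). x is a single
# input tensor and query_loss(x) returns the black-box loss at x.
import torch

def estimate_gradient(query_loss, x, prior_grad, n_probes=20, delta=1e-3, lam=0.5):
    base = query_loss(x)                          # one query for the reference loss
    est = torch.zeros_like(x)
    for _ in range(n_probes):                     # random finite-difference probes
        u = torch.randn_like(x)
        u = u / (u.norm() + 1e-12)
        est = est + (query_loss(x + delta * u) - base) / delta * u
    est = est / n_probes
    prior = prior_grad / (prior_grad.norm() + 1e-12)
    return lam * prior + (1 - lam) * est / (est.norm() + 1e-12)
```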
1.15.Unlabeled Data Improves Adversarial Robustness
This paper shows that semi-supervised learning can significantly improve adversarial robustness.
LINK
https://arxiv.gg363.site/pdf/1905.13736.pdf
ABSTRACT We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semisupervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al. that shows a sample complexity gap between standard and robust classification. We prove that this gap does not pertain to labels: a simple semisupervised learning procedure (self-training) achieves robust accuracy using the same number of labels required for standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies by over 5 points in (i) ℓ∞ robustness against several strong attacks via adversarial training and (ii) certified ℓ2 and ℓ∞ robustness via randomized smoothing. On SVHN, adding the dataset’s own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels as well.
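Robust self-training, as described in the abstract, can be sketched in two steps: pseudo-label the unlabeled pool with a standard classifier, then run ordinary adversarial training on labeled plus pseudo-labeled data. `standard_model`, `unlabeled_loader`, and `adv_attack` are placeholders, and the paper's loss-weighting details are omitted.

```python
# Minimal sketch of robust self-training: (1) pseudo-label unlabeled data,
# (2) adversarially train on the union of real and pseudo labels.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(standard_model, unlabeled_loader):
    xs, ys = [], []
    for x in unlabeled_loader:
        xs.append(x)
        ys.append(standard_model(x).argmax(dim=1))   # pseudo-labels from the standard model
    return torch.cat(xs), torch.cat(ys)

def adversarial_training_step(robust_model, opt, x, y, adv_attack):
    x_adv = adv_attack(robust_model, x, y)           # e.g. PGD inside an Linf ball
    opt.zero_grad()
    F.cross_entropy(robust_model(x_adv), y).backward()
    opt.step()
```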
1.16.Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Recent work has used randomized smoothing to good effect as a tool for improving model robustness. This paper employs adversarial training to improve the performance of randomized smoothing.
Hadi Salman (Microsoft Research AI) · Jerry Li (Microsoft) · Ilya Razenshteyn (Microsoft Research) · Pengchuan Zhang (Microsoft Research) · Huan Zhang (Microsoft Research AI) · Sebastien Bubeck (Microsoft Research) · Greg Yang (Microsoft Research)
LINK
https://arxiv.gg363.site/pdf/1906.04584.pdf
ABSTRACT Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to ℓ2-norm adversarial perturbations. In this paper, we employ adversarial training to improve the performance of randomized smoothing. We design an adapted attack for smoothed classifiers, and we show how this attack can be used in an adversarial training setting to boost the provable robustness of smoothed classifiers. We demonstrate through extensive experimentation that our method consistently outperforms all existing provably ℓ2-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state-of-the-art for provable ℓ2-defenses. Our code and trained models are available at https://github.com/Hadisalman/smoothing-adversarial.
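The "adapted attack for smoothed classifiers" can be approximated by running PGD against a Monte Carlo estimate of the smoothed classifier's output probabilities, as sketched below; the exact objective and number of noise samples in the paper may differ, and all hyperparameters here are illustrative.

```python
# Hedged sketch of attacking a smoothed classifier: average softmax outputs
# over a few Gaussian draws and take PGD steps on that averaged prediction.
import torch
import torch.nn.functional as F

def pgd_on_smoothed(model, x, y, sigma=0.25, m=4, eps=0.5, alpha=0.125, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        probs = torch.stack([
            F.softmax(model(x + delta + sigma * torch.randn_like(x)), dim=1)
            for _ in range(m)
        ]).mean(dim=0)                                    # Monte Carlo smoothed classifier
        loss = F.nll_loss(torch.log(probs + 1e-12), y)
        grad, = torch.autograd.grad(loss, delta)
        g = grad.view(grad.size(0), -1)
        g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
        delta = (delta + alpha * g.view_as(delta)).detach()          # ascend the loss
        delta = delta.renorm(p=2, dim=0, maxnorm=eps).requires_grad_(True)
    return (x + delta).detach()
```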
1.17.Theoretical evidence for adversarial robustness through randomization
Rafael Pinot (Dauphine University - CEA LIST Institute) · Laurent Meunier (Dauphine University - FAIR Paris) · Alexandre Araujo (Université Paris-Dauphine - Wavestone) · Hisashi Kashima (Kyoto University/RIKEN Center for AIP) · Florian Yger (Université Paris-Dauphine) · Cedric Gouy-Pailler (CEA) · Jamal Atif (Université Paris-Dauphine)
LINK
https://arxiv.gg363.site/pdf/1902.01148.pdf
ABSTRACT This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we make two new contributions. The first one relates the randomization rate to robustness to adversarial attacks. This result applies for the general family of exponential distributions, and thus extends and unifies the previous approaches. The second contribution consists in devising a new upper bound on the adversarial generalization gap of randomized neural networks. We support our theoretical claims with a set of experiments.
1.18.Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng (Sinovation Ventures) · Qi-Zhi Cai (sinovation ventures) · Zhi-Hua Zhou (Nanjing University)
LINK
https://arxiv.gg363.site/pdf/1905.09027.pdf
ABSTRACT In this work, we consider a challenging training-time attack that modifies training data with bounded perturbation, hoping to manipulate the behavior (both targeted and non-targeted) of any corresponding trained classifier at test time when facing clean samples. To achieve this, we propose to use an auto-encoder-like network to generate the perturbation on the training data, paired with one differentiable system acting as the imaginary victim classifier. The perturbation generator learns to update its weights by watching the training procedure of the imaginary classifier in order to produce the most harmful and imperceptible noise, which in turn leads to the lowest generalization power for the victim classifier. This can be formulated as a non-linear equality-constrained optimization problem. Unlike GANs, solving such a problem is computationally challenging, so we propose a simple yet effective procedure to decouple the alternating updates for the two networks for stability. The method proposed in this paper can be easily extended to the label-specific setting, where the attacker can manipulate the predictions of the victim classifiers according to some predefined rules rather than only making wrong predictions. Experiments on various datasets, including CIFAR-10 and a reduced version of ImageNet, confirm the effectiveness of the proposed method, and empirical results show that such bounded perturbations have good transferability regardless of which classifier the victim actually uses on image data.
1.19.Are Labels Required for Improving Adversarial Robustness?
Interestingly, training a robust network tends to require more data than training a standard one. This paper finds that unlabeled data can substitute for labeled data when training adversarially robust models.
Jean-Baptiste Alayrac (Deepmind) · Jonathan Uesato (DeepMind) · Po-Sen Huang (DeepMind) · Alhussein Fawzi (DeepMind) · Robert Stanforth (DeepMind) · Pushmeet Kohli (DeepMind)
LINK
https://arxiv.gg363.site/pdf/1905.13725.pdf
ABSTRACT Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. This result is a key hurdle in the deployment of robust machine learning models in many real world applications where labeled data is expensive. Our main insight is that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models. Theoretically, we show that in a simple statistical setting, the sample complexity for learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors. On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled examples. Finally, we report an improvement of 4% over the previous state-of-the-art on CIFAR-10 against the strongest known attack by using additional unlabeled data from the uncurated 80 Million Tiny Images dataset. This demonstrates that our finding extends as well to the more realistic case where unlabeled data is also uncurated, therefore opening a new avenue for improving adversarial training.
1.20.Provably robust boosted decision stumps and trees against adversarial attacks
Maksym Andriushchenko (University of Tübingen / EPFL) · Matthias Hein (University of Tübingen)
LINK
https://arxiv.gg363.site/pdf/1906.03526.pdf
ABSTRACT The problem of adversarial samples has been studied extensively for neural networks. However, for boosting, in particular boosted decision trees and decision stumps, there are almost no results, even though boosted decision trees, e.g. XGBoost, are quite popular due to their interpretability and good prediction performance. We show in this paper that for boosted decision stumps the exact min-max optimal robust loss and test error for an ℓ∞-attack can be computed in O(nT log T), where T is the number of decision stumps and n the number of data points, as well as an optimal update of the ensemble in O(n²T log T). While not exact, we show how to optimize an upper bound on the robust loss for boosted trees. To the best of our knowledge, these are the first algorithms directly optimizing provable robustness guarantees in the area of boosting. We make the code of all our experiments publicly available at https://github.com/max-andr/provably-robust-boosting.
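To see why exact robustness is tractable for stumps, note that an ℓ∞ adversary perturbs each coordinate independently, so the worst case can be computed feature by feature. The sketch below does this by brute-force scanning of the candidate attacked values for each coordinate; it illustrates the idea only and is not the paper's O(nT log T) algorithm.

```python
# Hedged sketch: exact worst-case ensemble score under an Linf perturbation of
# radius eps for a stump ensemble (binary case, lower score = worse for the
# true class). A stump (f, t, wl, wr) outputs wl if x[f] <= t else wr.
from collections import defaultdict

def worst_case_score(stumps, x, eps):
    by_feature = defaultdict(list)
    for f, t, wl, wr in stumps:
        by_feature[f].append((t, wl, wr))
    total = 0.0
    for f, group in by_feature.items():
        lo, hi = x[f] - eps, x[f] + eps
        # the per-coordinate score is piecewise constant in the attacked value,
        # so it suffices to check the interval ends and both sides of every
        # threshold that falls inside [lo, hi]
        candidates = [lo, hi]
        for t, _, _ in group:
            if lo <= t <= hi:
                candidates += [t, min(t + 1e-9, hi)]
        def coord_score(v, leaves=group):
            return sum(wl if v <= t else wr for t, wl, wr in leaves)
        total += min(coord_score(v) for v in candidates)
    return total
```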
1.21.On Robustness to Adversarial Examples and Polynomial Optimization
Pranjal Awasthi (Rutgers University/Google) · Abhratanu Dutta (Northwestern University) · Aravindan Vijayaraghavan (Northwestern University)
1.22.Adversarial Robustness through Local Linearization
Chongli Qin (DeepMind) · James Martens (DeepMind) · Sven Gowal (DeepMind) · Dilip Krishnan (Google) · Krishnamurthy Dvijotham (DeepMind) · Alhussein Fawzi (DeepMind) · Soham De (DeepMind) · Robert Stanforth (DeepMind) · Pushmeet Kohli (DeepMind)
LINK
https://arxiv.gg363.site/pdf/1907.02610.pdf
ABSTRACT Adversarial training is an effective methodology for training deep neural networks that are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with l-infinity adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.
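The regularizer described above penalizes the gap between the loss at a nearby point and its first-order Taylor approximation. A hedged sketch follows; the paper searches for the worst-case point inside the ball (and adds a second term), whereas this illustration just uses a random point, and `eps`/`lam` are illustrative values.

```python
# Hedged sketch of a local-linearity penalty: |loss(x + delta) - loss(x)
# - delta . grad_x loss(x)| added to the usual training objective.
import torch
import torch.nn.functional as F

def local_linearity_penalty(model, x, y, eps=4/255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)   # keep graph so the penalty trains
    delta = eps * torch.empty_like(x).uniform_(-1, 1)          # random point in the Linf ball
    taylor = loss + (delta * grad).sum()                       # first-order prediction of the loss
    return (F.cross_entropy(model(x + delta), y) - taylor).abs()

def training_loss(model, x, y, lam=4.0):
    return F.cross_entropy(model(x), y) + lam * local_linearity_penalty(model, x, y)
```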
1.23.Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan (UT Austin) · Justin Lewis (University of Texas at Austin) · Alexandros Dimakis (University of Texas, Austin)
LINK
https://arxiv.gg363.site/pdf/1903.08778.pdf
ABSTRACT We propose a novel method for computing exact pointwise robustness of deep neural networks for all convex ℓp norms. Our algorithm, GeoCert, finds the largest ℓp ball centered at an input point x0, within which the output class of a given neural network with ReLU nonlinearities remains unchanged. We relate the problem of computing pointwise robustness of these networks to that of computing the maximum norm ball with a fixed center that can be contained in a non-convex polytope. This is a challenging problem in general; however, we show that there exists an efficient algorithm to compute this for polyhedral complexes. Further, we show that piecewise linear neural networks partition the input space into a polyhedral complex. Our algorithm has the ability to almost immediately output a nontrivial lower bound to the pointwise robustness which is iteratively improved until it ultimately becomes tight. We empirically show that our approach generates distance lower bounds that are tighter compared to prior work, under moderate time constraints.
1.24.Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness
This paper also approaches the problem from the perspective of priors and can be read in contrast with the paper from Jun Zhu's group at Tsinghua.
Andrey Malinin (University of Cambridge) · Mark Gales (University of Cambridge)
LINK
https://arxiv.gg363.site/pdf/1905.13472.pdf
ABSTRACT Ensemble approaches for uncertainty estimation have recently been applied to the tasks of misclassification detection, out-of-distribution input detection and adversarial attack detection. Prior Networks have been proposed as an approach to efficiently emulating an ensemble of models by parameterising a Dirichlet prior distribution over output distributions. These models have been shown to outperform ensemble approaches, such as Monte-Carlo Dropout, on the task of out-of-distribution input detection. However, scaling Prior Networks to complex datasets with many classes is difficult using the training criteria originally proposed. This paper makes two contributions. Firstly, we show that the appropriate training criterion for Prior Networks is the reverse KL-divergence between Dirichlet distributions. Using this loss we successfully train Prior Networks on image classification datasets with up to 200 classes and improve out-of-distribution detection performance. Secondly, taking advantage of the new training criterion, this paper investigates using Prior Networks to detect adversarial attacks. It is shown that the construction of successful adaptive whitebox attacks, which affect the prediction and evade detection, against Prior Networks trained on CIFAR-10 and CIFAR-100 takes a greater amount of computational effort than against standard neural networks, adversarially trained neural networks and dropout-defended networks.
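The training criterion discussed above is a KL divergence between Dirichlet distributions taken in the reverse direction relative to the original Prior Network loss. A minimal sketch using `torch.distributions` is below; how the target concentration parameters are built (sharp for in-distribution data, flat for out-of-distribution data) follows the paper and is left as a placeholder here, and the argument order shown is an assumption about which direction counts as "reverse".

```python
# Hedged sketch: KL divergence between the predicted Dirichlet and a target
# Dirichlet, with the model's distribution as the first argument (assumed to
# be the "reverse" direction relative to the original criterion).
import torch
from torch.distributions import Dirichlet
from torch.distributions.kl import kl_divergence

def reverse_kl_loss(logits, target_alpha):
    pred_alpha = torch.exp(logits) + 1.0        # positive concentration parameters
    return kl_divergence(Dirichlet(pred_alpha), Dirichlet(target_alpha)).mean()
```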
1.25.On Relating Explanations and Adversarial Examples
Alexey Ignatiev (Reason Lab, Faculty of Sciences, University of Lisbon) · Nina Narodytska (VMWare Research) · Joao Marques-Silva (Reason Lab, Faculty of Sciences, University of Lisbon)
II. NeurIPS 2019 Paper Categories - Meta-Learning
2.1.Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
Risto Vuorio (University of Michigan) · Shao-Hua Sun (University of Southern California) · Hexiang Hu (University of Southern California) · Joseph J Lim (University of Southern California)
2.2.Meta-Learning with Implicit Gradients
Aravind Rajeswaran (University of Washington) · Chelsea Finn (Stanford University) · Sham Kakade (University of Washington) · Sergey Levine (UC Berkeley)
2.3.Learning to Propagate for Graph Meta-Learning
Lu Liu (University of Technology Sydney) · Tianyi Zhou (University of Washington, Seattle) · Guodong Long (University of Technology Sydney) · Jing Jiang (University of Technology Sydney) · Chengqi Zhang (University of Technology Sydney)
2.4.Efficient Meta Learning via Minibatch Proximal Update
Pan Zhou (National University of Singapore) · Xiaotong Yuan (Nanjing University of Information Science & Technology) · Huan Xu (Alibaba Group) · Shuicheng Yan (National University of Singapore) · Jiashi Feng (National University of Singapore)
2.5.Self-Supervised Generalisation with Meta Auxiliary Learning
Shikun Liu (Imperial College London) · Andrew Davison (Imperial College London) · Edward Johns (Imperial College London)
LINK
https://arxiv.gg363.site/pdf/1901.08933.pdf
CODE
https://github.com/lorenmt/maxl
ABSTRACT Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method which automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. Source code is available at https://github.com/lorenmt/maxl.
2.6.Meta-Learning Representations for Continual Learning
Khurram Javed (University of Alberta) · Martha White (University of Alberta)
LINK
https://arxiv.gg363.site/pdf/1905.12588.pdf
ABSTRACT A continual learning agent should be able to build on top of existing knowledge to learn on new data quickly while minimizing forgetting. Current intelligent systems based on neural network function approximators arguably do the opposite: they are highly prone to forgetting and rarely trained to facilitate future learning. One reason for this poor behavior is that they learn from a representation that is not explicitly trained for these two goals. In this paper, we propose MRCL, an objective to explicitly learn representations that accelerate future learning and are robust to forgetting under online updates in continual learning. The idea is to optimize the representation such that online updates minimize error on all samples with little forgetting. We show that it is possible to learn representations that are more effective for online updating and that sparsity naturally emerges in these representations. Moreover, our method is complementary to existing continual learning strategies, like MER, which can learn more effectively from representations learned by our objective. Finally, we demonstrate that a basic online updating strategy with our learned representation is competitive with rehearsal based methods for continual learning. We release an implementation of our method at https://github.com/khurramjaved96/mrcl.
2.7.Adaptive Gradient-Based Meta-Learning Methods
Mikhail Khodak (CMU) · Maria-Florina Balcan (Carnegie Mellon University) · Ameet Talwalkar (CMU)
LINK
https://arxiv.gg363.site/pdf/1906.02717.pdf
ABSTRACT We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their training and meta-test-time performance on standard problems in few-shot and federated deep learning.
2.8.SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies
Seyed Kamyar Seyed Ghasemipour (University of Toronto) · Shixiang (Shane) Gu (Google Brain) · Richard Zemel (Vector Institute/University of Toronto)
2.9.Reconciling meta-learning and continual learning with online mixtures of tasks
Ghassen Jerfel (Duke University) · Erin Grant (UC Berkeley) · Thomas Griffiths (Princeton University) · Katherine Heller (Google)
LINK
https://arxiv.gg363.site/pdf/1812.06080.pdf
ABSTRACT Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not advantageous, for instance, when tasks are considerably dissimilar or change over time. We use the connection between gradient-based meta-learning and hierarchical Bayes to propose a Dirichlet process mixture of hierarchical Bayesian models over the parameters of an arbitrary parametric model such as a neural network. In contrast to consolidating inductive biases into a single set of hyperparameters, our approach of task-dependent hyperparameter selection better handles latent distribution shift, as demonstrated on a set of evolving, image-based, few-shot learning benchmarks.
2.10.Guided Meta-Policy Search
Russell Mendonca (UC Berkeley) · Abhishek Gupta (University of California, Berkeley) · Rosen Kralev (UC Berkeley) · Pieter Abbeel (UC Berkeley Covariant) · Sergey Levine (UC Berkeley) · Chelsea Finn (Stanford University)
LINK
https://arxiv.gg363.site/pdf/1904.00956.pdf
ABSTRACT Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples because they learn from scratch. Meta-RL aims to address this challenge by leveraging experience from previous tasks in order to more quickly solve new tasks. However, in practice, these algorithms generally also require large amounts of on-policy experience during the meta-training process, making them impractical for use in many problems. To this end, we propose to learn a reinforcement learning procedure through imitation of expert policies that solve previously-seen tasks. This involves a nested optimization, with RL in the inner loop and supervised imitation learning in the outer loop. Because the outer loop imitation learning can be done with off-policy data, we can achieve significant gains in meta-learning sample efficiency. In this paper, we show how this general idea can be used both for meta-reinforcement learning and for learning fast RL procedures from multi-task demonstration data. The former results in an approach that can leverage policies learned for previous tasks without significant amounts of on-policy data during meta-training, whereas the latter is particularly useful in cases where demonstrations are easy for a person to provide. Across a number of continuous control meta-RL problems, we demonstrate significant improvements in meta-RL sample efficiency in comparison to prior work as well as the ability to scale to domains with visual observations.
2.11.Systematic generalization through meta sequence-to-sequence learning
Brenden Lake (New York University)
LINK
https://arxiv.gg363.site/pdf/1906.05381.pdf
ABSTRACT People can learn a new concept and use it compositionally, understanding how to “blicket twice” after learning how to “blicket.” In contrast, powerful sequence-to-sequence (seq2seq) neural networks fail such tests of compositionality, especially when composing new concepts together with existing concepts. In this paper, I show that neural networks can be trained to generalize compositionally through meta seq2seq learning. In this approach, models train on a series of seq2seq problems to acquire the compositional skills needed to solve new seq2seq problems. Meta seq2seq learning solves several of the SCAN tests for compositional learning and can learn to apply rules to variables.
2.12.Meta Learning with Relational Information for Short Sequences
Yujia Xie (Georgia Institute of Technology) · Haoming Jiang (Georgia Institute of Technology) · Feng Liu (Florida Atlantic University) · Tuo Zhao (Georgia Tech) · Hongyuan Zha (Georgia Tech)
LINK
https://arxiv.gg363.site/pdf/1909.02105.pdf
ABSTRACT This paper proposes a new meta-learning method – named HARMLESS (HAwkes Relational Meta LEarning method for Short Sequences) for learning heterogeneous point process models from short event sequence data along with a relational network. Specifically, we propose a hierarchical Bayesian mixture Hawkes process model, which naturally incorporates the relational information among sequences into point process modeling. Compared with existing methods, our model can capture the underlying mixed-community patterns of the relational network, which simultaneously encourages knowledge sharing among sequences and facilitates adaptive learning for each individual sequence. We further propose an efficient stochastic variational meta expectation maximization algorithm that can scale to large problems. Numerical experiments on both synthetic and real data show that HARMLESS outperforms existing methods in terms of predicting the future events.
2.13.Unsupervised Meta Learning for Few-Shot Image Classification
Siavash Khodadadeh (University of Central Florida) · Ladislau Boloni (University of Central Florida) · Mubarak Shah (University of Central Florida)
LINK
https://arxiv.org/pdf/1811.11819.pdf
ABSTRACT Few-shot or one-shot learning of classifiers for images or videos is an important next frontier in computer vision. The extreme paucity of training data means that the learning must start with a significant inductive bias towards the type of task to be learned. One way to acquire this is by meta-learning on tasks similar to the target task. However, if the meta-learning phase requires labeled data for a large number of tasks closely related to the target task, it not only increases the difficulty and cost, but also conceptually limits the approach to variations of well-understood domains. In this paper, we propose UMTRA, an algorithm that performs meta-learning on an unlabeled dataset in an unsupervised fashion, without putting any constraint on the classifier network architecture. The only requirements towards the dataset are: sufficient size, diversity and number of classes, and relevance of the domain to the one in the target task. Exploiting this information, UMTRA generates synthetic training tasks for the meta-learning phase. We evaluate UMTRA on few-shot and one-shot learning on both image and video domains. To the best of our knowledge, we are the first to evaluate meta-learning approaches on UCF-101. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a vast decrease in the number of labeled data needed. For instance, on the five-way one-shot classification on the Omniglot, we retain 85% of the accuracy of MAML, a recently proposed supervised meta-learning algorithm, while reducing the number of required labels from 24005 to 5.
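UMTRA's task construction is simple enough to sketch directly: sample N unlabeled images, treat each as its own class for a 1-shot support set, and build the query set by augmenting the same images. `augment` and `unlabeled_images` are placeholders, and the meta-learner trained on these tasks (e.g. MAML) is not shown.

```python
# Hedged sketch of UMTRA-style synthetic N-way 1-shot task generation from
# unlabeled images.
import random
import torch

def make_synthetic_task(unlabeled_images, augment, n_way=5):
    idx = random.sample(range(len(unlabeled_images)), n_way)
    support_x = torch.stack([unlabeled_images[i] for i in idx])
    support_y = torch.arange(n_way)                    # each sampled image is its own class
    query_x = torch.stack([augment(unlabeled_images[i]) for i in idx])
    query_y = torch.arange(n_way)                      # augmentation preserves the class
    return (support_x, support_y), (query_x, query_y)
```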
2.14.Unsupervised Curricula for Visual Meta-Reinforcement Learning
[Allan Jabri (UC Berkeley) · Kyle Hsu (University of Toronto) · Ben Eysenbach (Carnegie Mellon University) · Abhishek Gupta (University of California, Berkeley) · Alexei Efros (UC Berkeley) · Sergey Levine (UC Berkeley) · Chelsea Finn (Stanford University)]
2.15.Meta-Inverse Reinforcement Learning with Probabilistic Context Variables
Lantao Yu (Stanford University) · Tianhe Yu (Stanford University) · Chelsea Finn (Stanford University) · Stefano Ermon (Stanford)
2.16.Neural Relational Inference with Fast Modular Meta-learning
Ferran Alet (MIT) · Erica Weng (MIT) · Tomás Lozano-Pérez (MIT) · Leslie Kaelbling (MIT)
2.17.MetaInit: Initializing learning by learning to initialize
Yann Dauphin (Google AI) · Samuel Schoenholz (Google Brain)
2.18.Online-Within-Online Meta-Learning
Giulia Denevi (IIT/UNIGE) · Dimitris Stamos (University College London) · Carlo Ciliberto (Imperial College London) · Massimiliano Pontil (IIT & UCL)
2.19.Metalearned Neural Memory
Tsendsuren Munkhdalai (Microsoft Research) · Alessandro Sordoni (Microsoft Research Montreal) · TONG WANG (Microsoft Research Montreal) · Adam Trischler (Microsoft)
LINK
https://arxiv.gg363.site/pdf/1907.09720.pdf
ABSTRACT We augment recurrent neural networks with an external memory mechanism that builds upon recent progress in metalearning. We conceptualize this memory as a rapidly adaptable function that we parameterize as a deep neural network. Reading from the neural memory function amounts to pushing an input (the key vector) through the function to produce an output (the value vector). Writing to memory means changing the function; specifically, updating the parameters of the neural network to encode desired information. We leverage training and algorithmic techniques from metalearning to update the neural memory function in one shot. The proposed memory-augmented model achieves strong performance on a variety of learning problems, from supervised question answering to reinforcement learning.
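The read/write interface described in the abstract can be illustrated with a toy neural memory: reading is a forward pass through a small network, and writing updates that network's parameters to bind a key to a value. The paper metalearns a one-shot update rule; the sketch below just takes a few plain gradient steps, so it shows the interface rather than the method, and all dimensions are placeholders.

```python
# Toy sketch of a neural memory: read = forward pass, write = parameter update
# that binds key -> value (plain SGD here, not the paper's metalearned update).
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    def __init__(self, key_dim=32, val_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(key_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, val_dim))

    def read(self, key):
        return self.net(key)                     # value retrieved for this key

    def write(self, key, value, steps=5, lr=0.1):
        opt = torch.optim.SGD(self.net.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            ((self.read(key) - value) ** 2).mean().backward()
            opt.step()
```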