Research Notes (11): ICLR 2020 Papers by Topic — Adversarial Examples & Meta-Learning

Table of Contents

  • 1. Adversarial Examples
    • 1.1 Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
    • 1.2 Implicit Bias of Gradient Descent based Adversarial Training on Separable Data
    • 1.3 Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
    • 1.4 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness
    • 1.5 Robust Local Features for Improving the Generalization of Adversarial Training
    • 1.6 Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
    • 1.7 Improving Adversarial Robustness Requires Revisiting Misclassified Examples
    • 1.8 Adversarial Policies: Attacking Deep Reinforcement Learning
    • 1.9 Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
    • 1.10 GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
    • 1.11 Black-Box Adversarial Attack with Transferable Model-based Embedding
    • 1.12 Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
    • 1.13 Adversarially Robust Representations with Smooth Encoders
    • 1.14 Unpaired Point Cloud Completion on Real Scans using Adversarial Training
    • 1.15 Adversarially robust transfer learning
    • 1.16 Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
    • 1.17 Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
    • 1.18 Fast is better than free: Revisiting adversarial training
    • 1.19 Intriguing Properties of Adversarial Training at Scale
    • 1.20 Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
    • 1.21 Jacobian Adversarially Regularized Networks for Robustness
    • 1.22 Certified Defenses for Adversarial Patches
    • 1.23 Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
    • 1.24 Provable robustness against all adversarial lp-perturbations for p ≥ 1
    • 1.25 EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
    • 1.26 MMA Training: Direct Input Space Margin Maximization through Adversarial Training
    • 1.27 BayesOpt Adversarial Attack
    • 1.28 Unrestricted Adversarial Examples via Semantic Manipulation
    • 1.29 BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES
    • 1.30 (Spotlight) Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
    • 1.31 (Spotlight) Enhancing Adversarial Defense by k-Winners-Take-All
    • 1.32 (Spotlight) FreeLB: Enhanced Adversarial Training for Natural Language Understanding
    • 1.33 (Spotlight) ON ROBUSTNESS OF NEURAL ORDINARY DIFFERENTIAL EQUATIONS
    • 1.34 (Oral) Adversarial Training and Provable Defenses: Bridging the Gap
    • 1.35 MACER: ATTACK-FREE AND SCALABLE ROBUST TRAINING VIA MAXIMIZING CERTIFIED RADIUS
    • 1.36 IMPROVED SAMPLE COMPLEXITIES FOR DEEP NETWORKS AND ROBUST CLASSIFICATION VIA AN ALL-LAYER MARGIN
    • 1.37 TOWARDS STABLE AND EFFICIENT TRAINING OF VERIFIABLY ROBUST NEURAL NETWORKS
    • 1.38 TRIPLE WINS: BOOSTING ACCURACY, ROBUSTNESS AND EFFICIENCY TOGETHER BY ENABLING INPUT-ADAPTIVE INFERENCE
    • 1.39 A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES
    • 1.40 ROBUSTNESS VERIFICATION FOR TRANSFORMERS

1. Adversarial Examples

1.1 Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier

PAPER LINK

1.2 Implicit Bias of Gradient Descent based Adversarial Training on Separable Data

PAPER LINK

1.3 Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

PAPER LINK

1.4 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

PAPER LINK

1.5 Robust Local Features for Improving the Generalization of Adversarial Training

PAPER LINK

1.6 Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

PAPER LINK

1.7 Improving Adversarial Robustness Requires Revisiting Misclassified Examples

PAPER LINK

1.8 Adversarial Policies: Attacking Deep Reinforcement Learning

PAPER LINK

1.9 Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions

PAPER LINK

1.10 GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification

PAPER LINK

1.11 Black-Box Adversarial Attack with Transferable Model-based Embedding

PAPER LINK

1.12 Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

PAPER LINK

1.13 Adversarially Robust Representations with Smooth Encoders

PAPER LINK

1.14 Unpaired Point Cloud Completion on Real Scans using Adversarial Training

PAPER LINK

1.15 Adversarially robust transfer learning

PAPER LINK

1.16 Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

PAPER LINK

1.17 Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

PAPER LINK

1.18 Fast is better than free: Revisiting adversarial training

PAPER LINK

1.19 Intriguing Properties of Adversarial Training at Scale

PAPER LINK

1.20 Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks

PAPER LINK

1.21 Jacobian Adversarially Regularized Networks for Robustness

PAPER LINK

1.22 Certified Defenses for Adversarial Patches

PAPER LINK

1.23 Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

PAPER LINK

1.24 Provable robustness against all adversarial lp-perturbations for p ≥ 1

PAPER LINK

1.25 EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks

PAPER LINK

1.26 MMA Training: Direct Input Space Margin Maximization through Adversarial Training

PAPER LINK

1.27 BayesOpt Adversarial Attack

PAPER LINK

1.28 Unrestricted Adversarial Examples via Semantic Manipulation

PAPER LINK

1.29 BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES

PAPER LINK

1.30 (Spotlight) Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

PAPER LINK

1.31 (Spotlight) Enhancing Adversarial Defense by k-Winners-Take-All

PAPER LINK

1.32 (Spotlight) FreeLB: Enhanced Adversarial Training for Natural Language Understanding

PAPER LINK

1.33 (Spotlight) ON ROBUSTNESS OF NEURAL ORDINARY DIFFERENTIAL EQUATIONS

PAPER LINK

1.34 (Oral) Adversarial Training and Provable Defenses: Bridging the Gap

PAPER LINK

1.35 MACER: ATTACK-FREE AND SCALABLE ROBUST TRAINING VIA MAXIMIZING CERTIFIED RADIUS

PAPER LINK

1.36 IMPROVED SAMPLE COMPLEXITIES FOR DEEP NETWORKS AND ROBUST CLASSIFICATION VIA AN ALL-LAYER MARGIN

PAPER LINK

1.37 TOWARDS STABLE AND EFFICIENT TRAINING OF VERIFIABLY ROBUST NEURAL NETWORKS

PAPER LINK

1.38 TRIPLE WINS: BOOSTING ACCURACY, ROBUSTNESS AND EFFICIENCY TOGETHER BY ENABLING INPUT-ADAPTIVE INFERENCE

PAPER LINK

1.39 A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES

PAPER LINK

1.40 ROBUSTNESS VERIFICATION FOR TRANSFORMERS

PAPER LINK
