EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES
ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD
Towards Deep Learning Models Resistant to Adversarial Attacks
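The three papers above share the loss-maximization view of attack: find a perturbation inside an L∞ ball of radius eps that maximizes the classification loss (FGSM takes a single gradient-sign step; PGD iterates the step with projection and a random start). A minimal PyTorch sketch of L∞ PGD, assuming inputs in [0, 1]; `model`, `eps`, `alpha`, and `steps` are illustrative placeholders:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD (Madry et al.): iterated gradient-sign steps, projected to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                       # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project back
    return x_adv.detach()
```

With `steps=1`, `alpha=eps`, and no random start, the same loop reduces to FGSM from the first paper.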
Direction II
Find the adversarial example with the smallest perturbation, under the constraint that the model misclassifies it.
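A standard way to write this direction's objective, consistent with DeepFool and C&W below, for classifier logits f and true label y:

```latex
\min_{\delta}\ \|\delta\|_p
\quad \text{s.t.}\quad
\arg\max_k f_k(x+\delta) \neq y,
\qquad x+\delta \in [0,1]^n
```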
DeepFool: a simple and accurate method to fool deep neural networks
Towards Evaluating the Robustness of Neural Networks
1. Transferability-based Attack
Boosting Adversarial Attacks with Momentum
NESTEROV ACCELERATED GRADIENT AND SCALE INVARIANCE FOR ADVERSARIAL ATTACKS
Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Improving Transferability of Adversarial Examples with Input Diversity
SKIP CONNECTIONS MATTER: ON THE TRANSFERABILITY OF ADVERSARIAL EXAMPLES GENERATED WITH RESNETS
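A recurring idea across these papers is to stabilize the iterative gradient direction so it overfits the surrogate model less. MI-FGSM from "Boosting Adversarial Attacks with Momentum" does this with an L1-normalized gradient momentum; a sketch reusing the imports and projection from the PGD example above (hyperparameters illustrative, inputs assumed NCHW in [0, 1]):

```python
def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """MI-FGSM (Dong et al.): sign steps on an accumulated, L1-normalized gradient."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)  # momentum term
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```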
2. Query-based Adversarial Attack
DECISION-BASED ADVERSARIAL ATTACKS: RELIABLE ATTACKS AGAINST BLACK-BOX MACHINE LEARNING MODELS
RayS: A Ray Searching Method for Hard-label Adversarial Attack
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Square Attack: a query-efficient black-box adversarial attack via random search
Black-box Adversarial Attacks with Limited Queries and Information
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Improving Query Efficiency of Black-box Adversarial Attack
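The score-based attacks above (ZOO, AutoZOOM, NATTACK, the NES-based attack) all substitute a query-only gradient estimate for backpropagation. A sketch of the antithetic Gaussian estimator used in "Black-box Adversarial Attacks with Limited Queries and Information"; `query_loss` stands for an assumed black-box oracle returning the loss for one input:

```python
import torch

def nes_gradient(query_loss, x, n_samples=50, sigma=1e-3):
    """Antithetic Gaussian gradient estimate: needs only loss values, no backprop."""
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        g += (query_loss(x + sigma * u) - query_loss(x - sigma * u)) * u
    return g / (2 * sigma * n_samples)
```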
Adversarial Patch
Robust Physical-World Attacks on Deep Learning Visual Classification
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Towards Deep Learning Models Resistant to Adversarial Attacks
On the Convergence and Robustness of Adversarial Training
Adversarial Weight Perturbation Helps Robust Generalization
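What these defenses build on is the min-max objective of "Towards Deep Learning Models Resistant to Adversarial Attacks": the inner maximization crafts PGD examples, the outer minimization trains on them (AWP additionally perturbs the weights). A minimal sketch of one epoch of PGD adversarial training, reusing `pgd_attack` from above; `model`, `loader`, and `opt` are placeholders:

```python
def adversarial_training_epoch(model, loader, opt, eps=8/255):
    """One epoch of PGD adversarial training: min over weights, max over the eps-ball."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)  # inner maximization (PGD)
        opt.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization on worst-case inputs
        loss.backward()
        opt.step()
```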
Visualizing the Loss Landscape of Neural Nets
Understanding Adversarial Robustness Through Loss Landscape Geometries
INTERPRETING ADVERSARIAL ROBUSTNESS: A VIEW FROM DECISION SURFACE IN INPUT SPACE
Theoretically Principled Trade-off between Robustness and Accuracy
IMPROVING ADVERSARIAL ROBUSTNESS REQUIRES REVISITING MISCLASSIFIED EXAMPLES
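TRADES ("Theoretically Principled Trade-off between Robustness and Accuracy") makes the trade-off explicit: clean cross-entropy plus beta times a KL term that ties the adversarial prediction to the clean one (MART further reweights misclassified examples). A sketch of the TRADES objective, assuming `x_adv` was itself found by maximizing the same KL term; `beta=6.0` is only an illustrative default:

```python
def trades_loss(model, x, x_adv, y, beta=6.0):
    """TRADES objective: CE(f(x), y) + beta * KL(f(x) || f(x_adv))."""
    logits_clean = model(x)
    logits_adv = model(x_adv)
    natural = F.cross_entropy(logits_clean, y)
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),  # KL(clean || adv)
                      F.softmax(logits_clean, dim=1),
                      reduction="batchmean")
    return natural + beta * robust
```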
Adversarial Neuron Pruning Purifies Backdoored Deep Models
Spectral Signatures in Backdoor Attacks
DEEP PARTITION AGGREGATION: PROVABLE DEFENSES AGAINST GENERAL POISONING ATTACKS
Data Poisoning against Differentially-Private Learners: Attacks and Defenses
STRONG DATA AUGMENTATION SANITIZES POISONING AND BACKDOOR ATTACKS WITHOUT AN ACCURACY TRADEOFF
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Feature Denoising for Improving Adversarial Robustness
Adversarial Examples Improve Image Recognition
Improving Adversarial Robustness via Channel-wise Activation Suppressing
Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
Unlearnable Examples: Making Personal Data Unexploitable
Unadversarial Examples: Designing Objects for Robust Vision