Natural Language Processing Attack Papers: A Collection

ACL 2021

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger’s Adversarial Attacks

Rethinking Stealthiness of Backdoor Attack against NLP Models

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

An Empirical Study on Adversarial Attack on NMT: Languages and Positions Matter

Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models

OutFlip: Generating Examples for Unknown Intent Detection with Natural Language Attack

Putting words into the system’s mouth: A targeted attack on neural machine translation using monolingual data poisoning

BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks

Counter-Argument Generation by Attacking Weak Premises

AAAI 2021

Bigram and Unigram Based Text Attack via Adaptive Monotonic Heuristic Search.

A Unified Multi-Scenario Attacking Network for Visual Object Tracking.

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification.

Towards Universal Physical Attacks on Single Object Tracking.

Modeling Deep Learning Based Privacy Attacks on Physical Mail.

Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions.

Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning.

Defending against Contagious Attacks on a Network with Resource Reallocation.

UAG: Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks.

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks.

Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems.

Composite Adversarial Attacks.

Exacerbating Algorithmic Bias through Fairness Attacks.

Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks.

PID-Based Approach to Adversarial Attacks.

Towards Feature Space Adversarial Attack by Style Perturbation.

DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation.

Characterizing the Evasion Attackability of Multi-label Classifiers.

Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks.

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization.

Generating Natural Language Attacks in a Hard Label Black Box Setting.

Adversarial Training with Fast Gradient Projection Method against Synonym Substitution Based Text Attacks.

Enabling Fast and Universal Audio Adversarial Attack Using Generative Model.

EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation.

A Context Aware Approach for Generating Natural Language Attacks.

DeepRobust: a Platform for Adversarial Attacks and Defenses.

SkeletonVis: Interactive Visualization for Understanding Adversarial Attacks on Human Action Recognition Models.

ICLR 2021

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

Improving VAEs' Robustness to Adversarial Attack

Efficient Certified Defenses Against Patch Attacks on Image Classifiers

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

Effective and Efficient Vote Attack on Capsule Networks

Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples

Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

R-GAP: Recursive Gradient Attack on Privacy

ICML 2021

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

Making Paper Reviewing Robust to Bid Manipulation Attacks

Robust Testing and Estimation under Manipulation Attacks

Query Complexity of Adversarial Attacks

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks

Defense against backdoor attacks via robust covariance estimation

Label-Only Membership Inference Attacks

Robust Learning for Data Poisoning Attacks

Mind the Box: l1-APGD for Sparse Adversarial Attacks on Image Classifiers
