Research Notes (12): CVPR 2020 Papers by Topic - Adversarial Examples

Table of Contents

  • 1. Adversarial Examples - With Code
    • 1.1 Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
    • 1.2 One Man's Trash Is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples
    • 1.3 ColorFool: Semantic Adversarial Colorization
    • 1.4 Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
    • 1.5 Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
    • 1.6 Efficient Adversarial Training with Transferable Adversarial Examples
    • 1.7 Modeling Biological Immunity to Adversarial Examples
    • 1.8 Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes
    • 1.9 (Oral) A Self-supervised Approach for Adversarial Robustness
    • 1.10 When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
    • 1.11 Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
    • 1.12 Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
    • 1.13 Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm
    • 1.14 LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks
  • 2. Adversarial Examples - Without Code
    • 2.1 Polishing Decision-Based Adversarial Noise With a Customized Sampling
    • 2.2 Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations
    • 2.3 Single-Step Adversarial Training With Dropout Scheduling
    • 2.4 Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
    • 2.5 Boosting the Transferability of Adversarial Samples via Attention
    • 2.6 Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness
    • 2.7 On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks
    • 2.8 Adversarial Examples Improve Image Recognition
    • 2.9 Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction
    • 2.10 Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles
    • 2.11 Benchmarking Adversarial Robustness on Image Classification
    • 2.12 DaST: Data-Free Substitute Training for Adversarial Attacks
    • 2.13 Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks
    • 2.14 Exploiting Joint Robustness to Adversarial Perturbations
    • 2.15 GeoDA: A Geometric Framework for Black-Box Adversarial Attacks
    • 2.16 What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images
    • 2.17 Physically Realizable Adversarial Examples for LiDAR Object Detection
    • 2.18 One-Shot Adversarial Attacks on Visual Tracking With Dual Attention
    • 2.19 Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack
    • 2.20 Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations
    • 2.21 Robust Superpixel-Guided Attentional Adversarial Attack
    • 2.22 ILFO: Adversarial Attack on Adaptive Neural Networks
    • 2.23 PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving
    • 2.24 Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

1. Adversarial Examples - With Code

1.1 Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

PAPER LINK
CODE

1.2 One Man’s Trash Is Another Man’s Treasure: Resisting Adversarial Examples by Adversarial Examples

PAPER LINK
CODE

1.3 ColorFool: Semantic Adversarial Colorization

PAPER LINK
CODE

1.4 Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

PAPER LINK
CODE

1.5 Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

PAPER LINK
CODE

1.6 Efficient Adversarial Training with Transferable Adversarial Examples

PAPER LINK
CODE

1.7 Modeling Biological Immunity to Adversarial Examples

PAPER LINK
CODE

1.8 Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes

PAPER LINK
CODE

1.9 (Oral) A Self-supervised Approach for Adversarial Robustness

PAPER LINK
CODE

1.10 When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks

PAPER LINK
CODE

1.11 Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

PAPER LINK
CODE

1.12 Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory

PAPER LINK
CODE

1.13 Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm

PAPER LINK
CODE

1.14 LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks

PAPER LINK
CODE

2. Adversarial Examples - Without Code

2.1 Polishing Decision-Based Adversarial Noise With a Customized Sampling

PAPER LINK

2.2 Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations

PAPER LINK

2.3 Single-Step Adversarial Training With Dropout Scheduling

PAPER LINK

2.4 Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

PAPER LINK

2.5 Boosting the Transferability of Adversarial Samples via Attention

PAPER LINK

2.6 Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness

PAPER LINK

2.7 On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks

PAPER LINK

2.8 Adversarial Examples Improve Image Recognition

PAPER LINK

2.9 Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction

PAPER LINK

2.10 Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

PAPER LINK

2.11 Benchmarking Adversarial Robustness on Image Classification

PAPER LINK

2.12 DaST: Data-Free Substitute Training for Adversarial Attacks

PAPER LINK

2.13 Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks

PAPER LINK

2.14 Exploiting Joint Robustness to Adversarial Perturbations

PAPER LINK

2.15 GeoDA: A Geometric Framework for Black-Box Adversarial Attacks

PAPER LINK

2.16 What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images

PAPER LINK

2.17 Physically Realizable Adversarial Examples for LiDAR Object Detection

PAPER LINK

2.18 One-Shot Adversarial Attacks on Visual Tracking With Dual Attention

PAPER LINK

2.19 Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

PAPER LINK

2.20 Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations

PAPER LINK

2.21 Robust Superpixel-Guided Attentional Adversarial Attack

PAPER LINK

2.22 ILFO: Adversarial Attack on Adaptive Neural Networks

PAPER LINK

2.23 PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving

PAPER LINK

2.24 Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

PAPER LINK
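
Background note: most of the attack papers indexed above build on gradient-based perturbation methods. As orientation for readers new to the area, here is a minimal PyTorch sketch of the classic FGSM baseline (Goodfellow et al., ICLR 2015). It is background only and is not the method of any paper listed here; the function name fgsm_attack and the eps = 8/255 budget are illustrative choices, not taken from any of the papers above.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method (illustrative sketch, not from any paper above).
    model: a classifier; x: image batch scaled to [0, 1]; y: true labels."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss the attacker wants to increase
    loss.backward()                       # gradient of the loss w.r.t. the pixels
    # Step each pixel by eps in the sign of its gradient, then keep the
    # result a valid image by clamping back to [0, 1].
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()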
