Paper Reading Notes: Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

Key Points

  1. Universal adversarial perturbations [1]
  2. 3D physical-world adversarial examples

Adversarial Example Generation

  1. Box-constrained L-BFGS
    $\min_\rho \|\rho\|_2 \;\; \text{s.t.} \;\; C(I_c + \rho) = l,\; I_c + \rho \in [0,1]^m$
    Find the perturbation $\rho$ with the smallest $\ell_2$ norm such that the classifier $C$ labels the perturbed image $I_c + \rho$ as the target class $l$.

  2. Fast Gradient Sign Method (FGSM)
    $\rho = \epsilon \, \mathrm{sign}\big(\nabla J(\theta, I_c, l)\big)$
    Because this is a one-shot generation method, the single step must cross the decision boundary, so $\epsilon$ cannot be too small (a minimal sketch follows this list).

  3. Basic Iterative Method (BIM)
    $I_{k+1} = \mathrm{clip}_\epsilon\big(I_k + \alpha \, \mathrm{sign}(\nabla J(\theta, I_k, l))\big)$
    FGSM applied iteratively with step size $\alpha$, clipping each iterate back into the $\epsilon$-neighborhood of the clean image (see the sketch after this list).

  4. Iterative Least-likely Class Method (ILCM)
    Take the class with the lowest predicted probability as the target class and run BIM as a targeted attack, descending the loss toward that target (see `ilcm` in the sketch after this list).

  5. Jacobian-based Saliency Map Attack (JSMA)
  6. One Pixel Attack
  7. Carlini and Wagner Attacks (C&W)
  8. DeepFool
  9. Universal Adversarial Perturbations
    A single input-agnostic perturbation that fools the network on almost all inputs (a simplified sketch follows this list).
  10. UPSET (Universal Perturbations for Steering to Exact Targets)
  11. ANGRI (Antagonistic Network for Generating Rogue Images)
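
Below is a minimal PyTorch sketch of FGSM, assuming a classifier `model` that returns logits, cross-entropy as the loss $J$, and inputs scaled to $[0,1]$; the helper name `fgsm` is mine, not from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps):
    """One-shot FGSM: step eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # J(theta, I_c, l)
    loss.backward()
    rho = eps * x.grad.sign()                 # rho = eps * sign(grad J)
    return (x + rho).clamp(0, 1).detach()     # keep pixels in [0, 1]
```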
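
BIM and its targeted ILCM variant, sketched under the same assumptions (`bim` and `ilcm` are hypothetical names); ILCM differs from untargeted BIM only in descending the loss toward the least-likely class instead of ascending it for the true class:

```python
import torch
import torch.nn.functional as F

def bim(model, x, label, eps, alpha, steps, targeted=False):
    """Iterative FGSM: small signed-gradient steps, each clipped back
    into the eps-ball around the clean image."""
    x_clean = x.clone().detach()
    x_adv = x_clean.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        # ascend the loss (untargeted) or descend it toward the target
        step = -alpha * grad.sign() if targeted else alpha * grad.sign()
        x_adv = x_adv.detach() + step
        x_adv = (x_clean + (x_adv - x_clean).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def ilcm(model, x, eps, alpha, steps):
    """ILCM: target the class the model currently rates least likely."""
    with torch.no_grad():
        target = model(x).argmin(dim=1)
    return bim(model, x, target, eps, alpha, steps, targeted=True)
```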
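
The universal-perturbation algorithm in the paper uses DeepFool to find a minimal per-image update; the sketch below is only a rough batched approximation that substitutes a few signed-gradient steps for that inner solver, and it assumes `loader` yields `(images, labels)` batches:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps, alpha, inner_steps=5, epochs=3):
    """Accumulate one perturbation v shared by all inputs, projecting it
    back onto the l_inf ball of radius eps after every update."""
    v = None
    for _ in range(epochs):
        for x, y in loader:
            if v is None:
                v = torch.zeros_like(x[:1])  # shape (1, C, H, W), broadcasts
            with torch.no_grad():
                fooled = model((x + v).clamp(0, 1)).argmax(1) != y
            if fooled.all():
                continue                     # v already works on this batch
            x_hard, y_hard = x[~fooled], y[~fooled]
            for _ in range(inner_steps):     # FGSM-style inner attack
                v_ = v.clone().requires_grad_(True)
                loss = F.cross_entropy(model((x_hard + v_).clamp(0, 1)), y_hard)
                grad, = torch.autograd.grad(loss, v_)
                v = (v + alpha * grad.sign()).clamp(-eps, eps)  # project
    return v
```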

Adversarial Training

Augment training with adversarial examples so the model learns to resist them; Goodfellow et al. implement this by adding an FGSM loss term to the training objective.
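
A sketch of one FGSM adversarial-training step in the same PyTorch setting; `adversarial_training_step` and `adv_weight` are my names, and the objective follows the form $\tilde J = (1 - w)\,J(I_c, l) + w\,J\big(I_c + \epsilon\,\mathrm{sign}(\nabla J), l\big)$:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps, adv_weight=0.5):
    """One step on a mixed clean/adversarial objective, with the
    adversarial examples crafted by FGSM against the current weights."""
    x_req = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    loss = (1 - adv_weight) * F.cross_entropy(model(x), y) + \
           adv_weight * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```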

References

[1] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," CVPR 2017.
