[Paper Reading Notes] The Limitations of Deep Learning in Adversarial Settings

Paper title: The Limitations of Deep Learning in Adversarial Settings
Paper link: https://arxiv.org/abs/1511.07528

JSMA


The algorithm has three main steps:

  1. Compute the forward derivative;
  2. Construct the Jacobian-based saliency map from the forward derivative;
  3. Modify the input features using the saliency map.

Step 1. Compute the forward derivative

Thus, we apply the chain rule again to obtain:

$$\frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial x_{i}}=\left(\mathbf{W}_{n+1, j} \cdot \frac{\partial \mathbf{H}_{n}}{\partial x_{i}}\right) \times \frac{\partial f_{n+1, j}}{\partial x_{i}}\left(\mathbf{W}_{n+1, j} \cdot \mathbf{H}_{n}+b_{n+1, j}\right)$$

where $\mathbf{H}_{n}$ is the output of the last hidden layer, and $\mathbf{W}_{n+1, j}$, $b_{n+1, j}$, and $f_{n+1, j}$ are the weight row, bias, and activation function of output neuron $j$.
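As a minimal sketch (not the paper's code), the forward derivative of a hypothetical one-hidden-layer sigmoid network can be assembled layer by layer with exactly this chain rule; the weight/bias names `W1`, `b1`, `W2`, `b2` are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_derivative(x, W1, b1, W2, b2):
    """Forward derivative J[j, i] = dF_j/dx_i for F(x) = s(W2 @ s(W1 @ x + b1) + b2),
    where s is the sigmoid, built layer by layer with the chain rule."""
    h = sigmoid(W1 @ x + b1)              # hidden-layer output H_n
    dh_dx = (h * (1 - h))[:, None] * W1   # dH_n/dx, using sigmoid' = s(1 - s)
    out = sigmoid(W2 @ h + b2)            # network output F(x)
    # chain rule: dF_j/dx_i = (W2[j] . dH_n/dx_i) * f'(W2[j] . H_n + b2[j])
    return (out * (1 - out))[:, None] * (W2 @ dh_dx)
```

A finite-difference check confirms the analytic Jacobian matches the numerical one, which is the easiest way to validate a hand-derived forward derivative.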

Step 2. Construct the Jacobian-based saliency map

To *increase* input features toward target class $t$:

$$S(\mathbf{X}, t)[i]=\begin{cases} 0 & \text{if } \frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}<0 \text{ or } \sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}>0 \\ \left(\frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right)\left|\sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right| & \text{otherwise} \end{cases}$$

To *decrease* input features:

$$S(\mathbf{X}, t)[i]=\begin{cases} 0 & \text{if } \frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}>0 \text{ or } \sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}<0 \\ \left|\frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right|\left(\sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right) & \text{otherwise} \end{cases}$$
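Given the forward derivative as a matrix `J` with `J[j, i] = dF_j/dX_i`, both saliency maps are a few lines of numpy. This is an illustrative sketch; the function names are my own:

```python
import numpy as np

def saliency_map_increase(J, t):
    """Saliency of increasing each feature toward target class t:
    S[i] = (dF_t/dX_i) * |sum_{j != t} dF_j/dX_i|, zeroed where the
    target derivative is negative or the other-class sum is positive."""
    alpha = J[t]                      # dF_t/dX_i
    beta = J.sum(axis=0) - J[t]       # sum over j != t of dF_j/dX_i
    S = alpha * np.abs(beta)
    S[(alpha < 0) | (beta > 0)] = 0.0
    return S

def saliency_map_decrease(J, t):
    """Saliency of decreasing each feature: S[i] = |dF_t/dX_i| * (sum_{j != t}
    dF_j/dX_i), zeroed where the signs work against the target class."""
    alpha = J[t]
    beta = J.sum(axis=0) - J[t]
    S = np.abs(alpha) * beta
    S[(alpha > 0) | (beta < 0)] = 0.0
    return S
```

Only features whose perturbation simultaneously raises $\mathbf{F}_t$ and lowers the other class scores survive the masking; everything else gets saliency 0.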

Step 3. Modify input features using the saliency map

The most salient feature is perturbed by a fixed amount $\theta$, and the process repeats until the input is classified as the target class $t$ or a distortion limit is reached.
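The three steps combine into a greedy loop. The sketch below is a simplified single-feature variant (the paper actually searches over *pairs* of features per iteration); `jacobian_fn`, `predict_fn`, and `theta` are hypothetical interfaces, and features are assumed to live in $[0, 1]$:

```python
import numpy as np

def jsma_attack(x, t, jacobian_fn, predict_fn, theta=1.0, max_iters=20):
    """Greedy single-feature JSMA sketch.

    jacobian_fn(x) -> J with J[j, i] = dF_j/dx_i; predict_fn(x) -> class label.
    Each iteration increases the most salient feature by theta, clipped to [0, 1].
    """
    x = x.copy()
    searchable = np.ones_like(x, dtype=bool)   # features not yet saturated
    for _ in range(max_iters):
        if predict_fn(x) == t:
            return x                           # classified as target: done
        J = jacobian_fn(x)
        alpha = J[t]
        beta = J.sum(axis=0) - J[t]
        S = alpha * np.abs(beta)               # "increase" saliency map
        S[(alpha < 0) | (beta > 0)] = 0.0
        S[~searchable] = 0.0
        i = int(np.argmax(S))
        if S[i] <= 0:
            break                              # no helpful feature remains
        x[i] = np.clip(x[i] + theta, 0.0, 1.0)
        if x[i] >= 1.0:
            searchable[i] = False              # saturated, stop touching it
    return x
```

Removing saturated features from the search domain mirrors the paper's bookkeeping: once a pixel hits its bound, increasing it further is impossible, so its saliency must be excluded.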

