Revisiting the Fast Gradient Sign Method (FGSM)

2022-02-04

Sec. 3: THE LINEAR EXPLANATION OF ADVERSARIAL EXAMPLES

Consider the dot product between a weight vector $w$ and an adversarial example $\tilde{x} = x + \eta$: $w^\top \tilde{x} = w^\top x + w^\top \eta$. The adversarial perturbation causes the activation to grow by $w^\top \eta$. We can maximize this increase subject to the max norm constraint $\|\eta\|_\infty \le \epsilon$ on $\eta$ by assigning $\eta = \mathrm{sign}(w)$.

  1. Here, if $\eta = \mathrm{sign}(w)$, how can the constraint $\|\eta\|_\infty \le \epsilon$ be satisfied? Judging from the text that follows, it should be $\eta = \epsilon\,\mathrm{sign}(w)$.
    If $w$ has $n$ dimensions and the average magnitude of an element of the weight vector is $m$, then the activation will grow by $\epsilon m n$.
  2. The optimization objective here should be $\max_\eta w^\top \eta$ subject to $\|\eta\|_\infty \le \epsilon$. Since every component must satisfy $|\eta_i| \le \epsilon$, the largest possible magnitude of $\eta_i$ is $\epsilon$; at the same time, to maximize $w^\top \eta$, the sign of $\eta_i$ must agree with the sign of $w_i$. Therefore $\eta = \epsilon\,\mathrm{sign}(w)$.
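The argument in point 2 can be checked numerically. The sketch below (toy dimensions and the random baseline are my own choices, not from the paper) confirms that $\eta = \epsilon\,\mathrm{sign}(w)$ attains the maximum of $w^\top \eta$ over the max-norm ball, and that this maximum equals $\epsilon \sum_i |w_i| \approx \epsilon m n$:

```python
import numpy as np

# Among all perturbations with ||eta||_inf <= eps,
# eta = eps * sign(w) maximizes the activation increase w^T eta.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
eps = 0.25

eta_opt = eps * np.sign(w)
best = w @ eta_opt                 # equals eps * ||w||_1 = eps * sum_i |w_i|

# No other feasible perturbation does better.
for _ in range(1000):
    eta = rng.uniform(-eps, eps, size=w.shape)  # feasible: ||eta||_inf <= eps
    assert w @ eta <= best + 1e-12

# The increase is eps * sum(|w_i|), i.e. roughly eps * m * n
# when m is the average magnitude of an element of w.
assert np.isclose(best, eps * np.abs(w).sum())
```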

Note: what are max norm constraints? Below is the answer from the CS231n course notes:
Max norm constraints. Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector $\vec{w}$ of every neuron to satisfy $\|\vec{w}\|_2 < c$. Typical values of $c$ are on orders of 3 or 4. Some people report improvements when using this form of regularization. One of its appealing properties is that the network cannot "explode" even when the learning rates are set too high, because the updates are always bounded.
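The clamping step described above can be sketched as follows (a minimal NumPy version; the function name and matrix layout are my own assumptions, with one row per neuron):

```python
import numpy as np

def clamp_max_norm(W, c):
    """Project each row (one neuron's weight vector) back onto ||w||_2 <= c."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    factor = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * factor

# Pretend a (possibly too large) gradient update just happened,
# then enforce the max norm constraint with c = 3.
rng = np.random.default_rng(1)
W = rng.normal(scale=5.0, size=(4, 10))
W = clamp_max_norm(W, c=3.0)
assert np.all(np.linalg.norm(W, axis=1) <= 3.0 + 1e-9)
```

Because the projection is applied after every update, the weights stay bounded no matter how large a single step is.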

Sec 4: LINEAR PERTURBATION OF NON-LINEAR MODELS

Let $\theta$ be the parameters of a model, $x$ the input to the model, $y$ the targets associated with $x$ (for machine learning tasks that have targets) and $J(\theta, x, y)$ be the cost used to train the neural network. We can linearize the cost function around the current value of $\theta$, obtaining an optimal max-norm constrained perturbation of $\eta = \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y))$.
A few questions arise here:

  1. Why is the derivative taken with respect to $x$? Because what we want to perturb is $x$.
  2. In Sec. 3 the perturbation was $\eta = \epsilon\,\mathrm{sign}(w)$; why is it $\eta = \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y))$ here? A simple way to understand it: if the model is a linear classifier, the gradient of $w^\top x$ with respect to $x$ is exactly the parameter $w$.
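The connection in point 2 can be made concrete with a tiny logistic-regression sketch (all names and sizes are my own, not from the paper): for a linear model, $\nabla_x J(\theta, x, y)$ is a scalar multiple of $w$, so $\mathrm{sign}(\nabla_x J)$ agrees with $\pm\,\mathrm{sign}(w)$, and the FGSM perturbation $\eta = \epsilon\,\mathrm{sign}(\nabla_x J)$ increases the loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    """Numerically stable binary cross-entropy, y in {0, 1}."""
    s = w @ x
    return np.logaddexp(0.0, s) - y * s

def grad_x(w, x, y):
    """dJ/dx = (sigmoid(w @ x) - y) * w  -- a scalar times w."""
    return (sigmoid(w @ x) - y) * w

rng = np.random.default_rng(2)
w = rng.normal(size=20)
x = rng.normal(size=20)
y = 1
eps = 0.1

g = grad_x(w, x, y)
eta = eps * np.sign(g)        # the FGSM perturbation
x_adv = x + eta

# The gradient is collinear with w (here sigmoid(w @ x) - y < 0, so the
# signs are flipped), and the perturbation raises the training loss.
assert np.allclose(np.sign(g), -np.sign(w))
assert loss(w, x_adv, y) > loss(w, x, y)
```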
