A function for adjusting loss weights

import numpy as np
import matplotlib.pyplot as plt

def sigmoid_rampup(current_epoch):
    # The ramp-up lasts for the first 15 epochs, so clamp the epoch to [0, 15].
    current = np.clip(current_epoch, 0.0, 15.0)
    # phase falls from 1 (epoch 0) to 0 (epoch 15).
    phase = 1.0 - current / 15.0
    # exp(-5 * phase^2) rises smoothly from e^-5 (~0.0067) to 1.
    return np.exp(-5.0 * phase * phase).astype(np.float32)

if __name__ == '__main__':
    x = np.linspace(1, 20, num=20)
    y = sigmoid_rampup(x)
    plt.plot(x, y)
    plt.show()

(Figure: plot of the ramp-up weight over epochs 1–20; it rises from near 0 to 1 during the first 15 epochs and stays at 1 afterwards.)

I came across the sigmoid_rampup function in the SE-SSD code, where it is used to schedule consistency_weight, i.e. the weight of the consistency term in the overall loss.

This idea comes from: Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, pages 1195–1204, 2017.

Training details (quoted from the SE-SSD paper): We adopt the ADAM optimizer and cosine annealing learning rate [18] with a batch size of four for 60 epochs. We follow [27] to ramp up µt (Eq. (5)) from 0 to 1 in the first 15 epochs using a sigmoid-shaped function e^{-5(1-x)^2}.

e^{-5(1-x)^{2}}
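As a quick sanity check on the endpoints (my own sketch, just evaluating the formula above): at x = 0 the weight is e^{-5} ≈ 0.0067, i.e. close to but not exactly 0, and at x = 1 it reaches 1 and stays there for the rest of training.

import numpy as np

# Numeric check of e^{-5(1-x)^2} at a few points on the ramp (x = epoch / 15):
for x in (0.0, 0.5, 1.0):
    print(x, np.exp(-5.0 * (1.0 - x) ** 2))
# 0.0 -> ~0.0067 (e^-5)
# 0.5 -> ~0.2865
# 1.0 -> 1.0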

In the SE-SSD code the ramped value is used as the consistency weight:

consistency_weight = 1.0 * self.sigmoid_rampup(self.epoch)
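Below is a minimal sketch of how such a ramped weight typically enters the total loss. The loss values and the per-epoch loop are placeholders of my own, not the actual SE-SSD training code:

import numpy as np

def sigmoid_rampup(current_epoch, rampup_length=15.0):
    current = np.clip(current_epoch, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

# Placeholder loss values, just to show the weighting arithmetic per epoch.
for epoch in (0, 5, 10, 15, 30):
    det_loss = 1.0                            # supervised detection loss (placeholder)
    cons_loss = 0.5                           # student-teacher consistency loss (placeholder)
    weight = 1.0 * sigmoid_rampup(epoch)      # ramped consistency weight
    total_loss = det_loss + weight * cons_loss
    print(f"epoch {epoch:2d}: weight={weight:.4f}, total_loss={total_loss:.4f}")

Early in training the consistency term contributes almost nothing, and after epoch 15 it is weighted fully, which matches the curve plotted above.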
