Ternary Networks -- Trained Ternary Quantization

Trained Ternary Quantization
ICLR 2017
https://github.com/TropComplique/trained-ternary-quantization (PyTorch)
https://github.com/buaabai/Ternary-Weights-Network (PyTorch)

Traditional binary networks quantize the weights W to {+1, −1}; Ternary Weight Networks (TWN) instead quantize W to {−W_l, 0, +W_l}, where W_l is a per-layer scaling factor.
The TWN threshold is computed as follows.
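Restating the TWN rule from that paper (from memory, so treat it as approximate): the threshold is about 0.7 times the mean absolute weight, and the scale is the mean magnitude of the weights above the threshold:

```latex
\Delta^{*} \approx \frac{0.7}{n} \sum_{i=1}^{n} |W_i|, \qquad
W_l = \frac{1}{|I_{\Delta}|} \sum_{i \in I_{\Delta}} |W_i|, \qquad
I_{\Delta} = \{\, i \mid |W_i| > \Delta^{*} \,\}
```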
This paper proposes a new kind of ternary network.
Each layer is represented with three distinct values: a positive scale, zero, and a negative scale. Unlike TWN, these positive and negative scaling factors are learned by the network during training.
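In symbols (restated from the paper; the notation may differ slightly from the original figure): with full-precision weights w̃_l, learned scales W_l^p and W_l^n, and a per-layer threshold Δ_l, each weight is quantized as

```latex
w_l^{t} =
\begin{cases}
+W_l^{p}, & \tilde{w}_l > \Delta_l \\
0, & |\tilde{w}_l| \le \Delta_l \\
-W_l^{n}, & \tilde{w}_l < -\Delta_l
\end{cases}
```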
The corresponding gradients are computed as follows.
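Restated from the paper (from memory, so double-check the exact form): each learned scale accumulates the gradients of the quantized weights in its own region, while the full-precision weights receive scaled straight-through gradients:

```latex
\frac{\partial L}{\partial W_l^{p}} = \sum_{i \in I_l^{p}} \frac{\partial L}{\partial w_l^{t}(i)},
\qquad
\frac{\partial L}{\partial W_l^{n}} = \sum_{i \in I_l^{n}} \frac{\partial L}{\partial w_l^{t}(i)},
\qquad
\frac{\partial L}{\partial \tilde{w}_l} =
\begin{cases}
W_l^{p} \times \dfrac{\partial L}{\partial w_l^{t}}, & \tilde{w}_l > \Delta_l \\[4pt]
1 \times \dfrac{\partial L}{\partial w_l^{t}}, & |\tilde{w}_l| \le \Delta_l \\[4pt]
W_l^{n} \times \dfrac{\partial L}{\partial w_l^{t}}, & \tilde{w}_l < -\Delta_l
\end{cases}
```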
The threshold in this paper is chosen as follows.
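As I recall from the paper (worth verifying against the original figure), the per-layer threshold is a fixed fraction t of the largest full-precision weight magnitude in the layer:

```latex
\Delta_l = t \times \max\left(|\tilde{w}_l|\right)
```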
t is set to 0.05 in the experiments on CIFAR-10 and ImageNet.

The quantization roughly proceeds as follows (a PyTorch sketch of this loop follows the list).

  1. Train a model of your choice as usual (or take a trained model).

  2. Copy all full precision weights that you want to quantize. Then do the initial quantization:
    in the model, replace them with ternary values {-1, 0, +1} using some heuristic.

  3. Repeat until convergence:
    1). Make the forward pass with the quantized model.
    2). Compute gradients for the quantized model.
    3). Preprocess the gradients and apply them to the copy of full precision weights.
    4). Requantize the model using the changed full precision weights.

  4. Throw away the copy of full precision weights and use the quantized model.
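
The steps above map naturally onto a small PyTorch module. The following is a minimal sketch of my own, not code from the linked repositories: the class names (TernaryQuant, TernaryLinear), the scale initialization, and the exact form of the gradient routing are illustrative assumptions based on the description and formulas above.

```python
# Minimal sketch of trained ternary quantization in PyTorch (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TernaryQuant(torch.autograd.Function):
    """Quantize full precision weights to {-wn, 0, +wp}; route gradients to both
    the full precision weights and the two learned scales."""

    @staticmethod
    def forward(ctx, w, wp, wn, t):
        delta = t * w.abs().max()                  # per-layer threshold, recomputed each pass
        pos = (w > delta).float()
        neg = (w < -delta).float()
        ctx.save_for_backward(pos, neg, wp, wn)
        return pos * wp - neg * wn                 # everything in between becomes 0

    @staticmethod
    def backward(ctx, grad_out):
        pos, neg, wp, wn = ctx.saved_tensors
        mid = 1.0 - pos - neg
        # scaled straight-through gradient for the full precision copy
        grad_w = grad_out * (pos * wp + mid + neg * wn)
        # gradients for the learned positive / negative scales
        grad_wp = (grad_out * pos).sum().reshape(1)
        grad_wn = -(grad_out * neg).sum().reshape(1)
        return grad_w, grad_wp, grad_wn, None


class TernaryLinear(nn.Module):
    """A linear layer whose weights are ternarized on every forward pass."""

    def __init__(self, in_features, out_features, t=0.05):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.wp = nn.Parameter(torch.ones(1))      # learned positive scale W^p
        self.wn = nn.Parameter(torch.ones(1))      # learned negative scale W^n
        self.t = t

    def forward(self, x):
        w_q = TernaryQuant.apply(self.weight, self.wp, self.wn, self.t)
        return F.linear(x, w_q)


if __name__ == "__main__":
    model = TernaryLinear(32, 10)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    for _ in range(3):
        loss = F.cross_entropy(model(x), y)        # forward pass with the quantized model
        opt.zero_grad()
        loss.backward()                            # gradients for weights and both scales
        opt.step()                                 # update the full precision copy
```

The optimizer updates the full precision weight tensor together with the two scales; the ternary weights are recomputed from that copy on every forward pass, which corresponds to the loop in steps 3.1)-3.4).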


