DAY6 (Auto_quantization: build & quantize)

Auto_quantization

  • build:
    the IR contains net_property, op_list, and weights_dict
    save the net property and the op list
    each op's attributes are initialized with the weight/bias/scale/shift/lut from the IR (?)
    infer the number of classes from the IR
    pass the forward parameters (reverse_rgb, precision, batch_size, input_scale, input_mean) and build the inference ops (see the sketch below)
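
A minimal sketch of how the build step could look. It assumes an IR object exposing net_property, op_list, and weights_dict as described above, and a meta op exposing .id, .name, .output_shape, and a load_params() method; all function and attribute names here are illustrative assumptions, not the actual implementation.

    # hypothetical build step; IR/meta-op interface is assumed
    def build(ir, reverse_rgb=False, precision="int8", batch_size=1,
              input_scale=1.0, input_mean=0.0):
        state = {
            # save the net property and the op list from the IR
            "net_property": ir.net_property,
            "op_list": ir.op_list,
            # keep the forward parameters for the later inference pass
            "forward_params": {
                "reverse_rgb": reverse_rgb,
                "precision": precision,
                "batch_size": batch_size,
                "input_scale": input_scale,
                "input_mean": input_mean,
            },
        }
        # initialize each op's attributes (weight/bias/scale/shift/lut) from the IR
        for meta_op in ir.op_list:
            meta_op.load_params(ir.weights_dict.get(meta_op.name, {}))
        # infer the number of classes, e.g. from the output size of the last op
        state["num_classes"] = ir.op_list[-1].output_shape[-1]
        # index ops by id so they can be looked up during quantize
        state["op_dict"] = {meta_op.id: meta_op for meta_op in ir.op_list}
        return state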

  • quantize:
    read the statistical max and min from the stats files (or calibrate the max and min)
    update the quantization parameters (quantize_method, precision_bits, weight_bits, bias_bits, activation_bits, feature_bits, accum_bits)
    and clip the activation range to ReLU6
    use min_max_values, num_bits, and quantize_method to compute the scale and zp (see the sketch below)
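
A hedged sketch of the quantize step: read per-op min/max statistics, optionally clip to the ReLU6 range, then let each op quantize itself. The stats-file format ("<op_name> <min> <max>" per line) and the op interface (.name, .relu6, .quantize()) are assumptions; a full version would also pass quantize_method, weight_bits, bias_bits, etc.

    # hypothetical quantize step driven by a min/max stats file
    def load_min_max(stat_file):
        # parse lines of the form "<op_name> <min> <max>"
        min_max = {}
        with open(stat_file) as f:
            for line in f:
                name, vmin, vmax = line.split()
                min_max[name] = (float(vmin), float(vmax))
        return min_max

    def quantize_net(op_dict, stat_file, activation_bits=8):
        min_max = load_min_max(stat_file)
        for op in op_dict.values():
            vmin, vmax = min_max[op.name]
            if getattr(op, "relu6", False):
                # clip the activation range to [0, 6] for ReLU6 layers
                vmin, vmax = max(vmin, 0.0), min(vmax, 6.0)
            op.quantize(vmin, vmax, num_bits=activation_bits)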

  • self.op_dict = {meta_op.id: meta_op} (each op is indexed by its id)

  • cur_op.quantize() (e.g. for a reshape op)
    cur_op is an object that gets dispatched to a different op class based on the op name
    the results are saved as the input scale, output scale, etc. (see the sketch below)
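
A hedged sketch of the name-based dispatch: cur_op is created through a registry that maps an op name to a concrete op class, so every op can be quantized through the same .quantize() call. The registry, class names, and the meta_op fields (.type) are assumptions.

    # hypothetical registry-based dispatch of ops by name
    OP_REGISTRY = {}

    def register_op(name):
        def decorator(cls):
            OP_REGISTRY[name] = cls
            return cls
        return decorator

    class BaseOp:
        def __init__(self, meta_op):
            self.meta = meta_op
            self.input_scale = None
            self.output_scale = None

        def quantize(self, vmin, vmax, num_bits=8):
            raise NotImplementedError

    @register_op("reshape")
    class ReshapeOp(BaseOp):
        def quantize(self, vmin, vmax, num_bits=8):
            # a reshape does not change values, so output scale == input scale
            n = 2 ** (num_bits - 1) - 1
            self.input_scale = n / max(abs(vmin), abs(vmax))
            self.output_scale = self.input_scale

    def make_op(meta_op):
        # look up the concrete class by op name and instantiate it
        return OP_REGISTRY[meta_op.type](meta_op)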

n = 2 ** (num_bits - 1) - 1 (if signed) or 2 ** num_bits - 1 (if unsigned)
Symmetric:
    scale = n / abs_max
    zp = np.zeros_like(scale)
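
A concrete numpy version of the formula above (symmetric case only); the function name and per-tensor granularity are illustrative assumptions.

    import numpy as np

    def symmetric_scale_zp(min_max_values, num_bits=8, signed=True):
        vmin, vmax = min_max_values
        abs_max = max(abs(vmin), abs(vmax))
        n = 2 ** (num_bits - 1) - 1 if signed else 2 ** num_bits - 1
        scale = np.float32(n / abs_max)
        zp = np.zeros_like(scale)  # symmetric quantization: zero point stays 0
        return scale, zp

    # example: an int8 tensor observed in [-3.2, 5.1] -> scale = 127 / 5.1
    # scale, zp = symmetric_scale_zp((-3.2, 5.1), num_bits=8, signed=True)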

  • build inference op

Removing the concat op

  • within one layer, different branches have different scale factors
  • the concat op merges these branches
  • the scale factors of the different branches are adjusted to the same value, and then the concat is implemented at the hardware level (see the sketch below)
  • the per-layer scales can be inspected in the _int8.txt file
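
A hedged numpy sketch of the idea above: requantize every branch to a shared scale (the smallest one, i.e. the widest real range) so that the concat becomes a plain memory concatenation on hardware. The axis and dtypes (NCHW int8 feature maps) are assumptions.

    import numpy as np

    def align_concat_branches(branches, scales, num_bits=8, axis=1):
        # branches: list of int8 arrays; scales: their per-branch scale factors
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        # scale = n / abs_max, so the smallest scale covers the largest range
        common_scale = min(scales)
        aligned = []
        for q, s in zip(branches, scales):
            # real value = q / s; requantized value = real * common_scale
            requant = np.round(q.astype(np.float32) * (common_scale / s))
            aligned.append(np.clip(requant, qmin, qmax).astype(np.int8))
        return np.concatenate(aligned, axis=axis), common_scale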
