Model Quantization: This One Article Is All You Need

1. First, pull the image from Docker Hub

docker pull openvino/ubuntu18_dev

Then start a container from the image as the root user


#docker run -it -v /local/path/:/container/path/ -p 8777:22 -u root --name=open openvino/ubuntu18_dev:latest /bin/bash

Then create a symlink to the Model Optimizer script

cd /usr/local/bin/
ln -s /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo.py mo


mo --input_model model.onnx --output_dir / --model_name  cpu_int8_openvino --data_type FP32 --input_shape [1,3,112,112] 

Convert to OpenVINO IR (FP32)
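If you convert several models, the same Model Optimizer call can be assembled from Python instead of typed by hand. A minimal sketch (the helper name is made up; `mo` must be on PATH inside the container, as set up by the symlink above):

```python
import subprocess

def build_mo_cmd(onnx_path, out_dir, name, shape=(1, 3, 112, 112)):
    # Assemble the same Model Optimizer invocation shown above.
    return [
        "mo",
        "--input_model", onnx_path,
        "--output_dir", out_dir,
        "--model_name", name,
        "--data_type", "FP32",
        "--input_shape", "[{},{},{},{}]".format(*shape),
    ]

cmd = build_mo_cmd("model.onnx", "/", "cpu_int8_openvino")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run inside the container
```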

Reference: https://copyfuture.com/blogs-details/202112151832462819

Script to generate the two files needed for a classification task: annotation.txt and labels.txt
Note: the datasets directory should contain one subfolder per image class.

import os
import glob

image_dir = "./datasets/"
assert os.path.exists(image_dir), "image dir does not exist..."
img_list = glob.glob(os.path.join(image_dir, "*", "*.jpg"))
assert len(img_list) > 0, "No images(.jpg) were found in image dir..."

classes_info = os.listdir(image_dir)
classes_info.sort()
classes_dict = {}

# create label file
with open("my_labels.txt", "w") as lw:
    # Note: when there is no background class, the index starts from 0
    for index, c in enumerate(classes_info, start=0):
        txt = "{}:{}".format(index, c)
        if index != len(classes_info) - 1:  # no trailing newline on the last line
            txt += "\n"
        lw.write(txt)
        classes_dict.update({c: str(index)})
print("create my_labels.txt successful...")

# create annotation file
with open("my_annotation.txt", "w") as aw:
    for index, img in enumerate(img_list):
        img_classes = classes_dict[img.split("/")[-2]]  # class name is the parent folder
        txt = "{} {}".format(img, img_classes)
        if index != len(img_list) - 1:
            txt += "\n"
        aw.write(txt)
print("create my_annotation.txt successful...")
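As a quick sanity check, the same label/annotation logic can be exercised on a toy directory (the class names "cat" and "dog" are made up for illustration):

```python
import os
import glob
import tempfile

# Build a toy datasets/ layout: one subfolder per class, one empty .jpg each.
root = tempfile.mkdtemp()
image_dir = os.path.join(root, "datasets")
for cls in ["cat", "dog"]:
    os.makedirs(os.path.join(image_dir, cls))
    open(os.path.join(image_dir, cls, "0.jpg"), "w").close()

# Labels file content: "index:class_name" per line, sorted by class name.
classes = sorted(os.listdir(image_dir))
labels = "\n".join("{}:{}".format(i, c) for i, c in enumerate(classes))
print(labels)

# Annotation lines: "image_path class_index", class taken from the parent folder.
ann = ["{} {}".format(p, classes.index(p.split(os.sep)[-2]))
       for p in sorted(glob.glob(os.path.join(image_dir, "*", "*.jpg")))]
print(ann)
```

Each annotation line pairs an image path with its class index, which is the format the `imagenet` converter in the Accuracy Checker config below expects.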
The Accuracy Checker configuration (cpumodel.yaml):

models:
  - name: cpumodel

    launchers:
      - framework: dlsdk
        device: CPU
        adapter: classification

    datasets:
      - name: classification_dataset
        data_source: /facefatas # put all images in this folder
        annotation_conversion:
          converter: imagenet
          annotation_file: /opt/intel/openvino_2021.4.689/my_annotation.txt

        preprocessing:
          - type: resize
            size: 112
#          - type: crop
#            size: 224

        metrics:
          - name: accuracy@top1
            type: accuracy
            top_k: 1

#          - name: accuracy@top5
#            type: accuracy
#            top_k: 5

/* This configuration file is the fastest way to get started with the default
quantization algorithm. It contains only mandatory options with commonly used
values. All other options can be considered an advanced mode and require
deep knowledge of the quantization process. An overall description of all possible
parameters can be found in the default_quantization_spec.json */

{
    /* Model parameters */

    "model": {
        "model_name": "cpumodel", // Model name
        "model": "/cpumodel.xml", // Path to model (.xml format)
        "weights": "/cpumodel.bin" // Path to weights (.bin format)
    },

    /* Parameters of the engine used for model inference */

    "engine": {
        "config": "./examples/accuracy_checker/cpumodel.yaml" // Path to Accuracy Checker config
    },

    /* Optimization hyperparameters */

    "compression": {
        "target_device": "CPU", // Target device, the specificity of which will be taken
                                // into account during optimization
        "algorithms": [
            {
                "name": "DefaultQuantization", // Optimization algorithm name
                "params": {
                    "preset": "performance", // Preset [performance, mixed, accuracy] which control the quantization
                                             // mode (symmetric, mixed (weights symmetric and activations asymmetric)
                                             // and fully asymmetric respectively)

                    "stat_subset_size": 1000  // Size of subset to calculate activations statistics that can be used
                                             // for quantization parameters calculation
                }
            }
        ]
    }
}
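POT parses these commented JSON files itself, but the `//` comments make them invalid for the standard library's `json` module. If you want to inspect or validate a config from Python, a minimal sketch (naive comment stripping, assuming no `//` occurs inside string values):

```python
import json
import re

def load_pot_config(text):
    # Strip // line comments before handing the text to the stdlib json parser.
    stripped = re.sub(r"//.*", "", text)
    return json.loads(stripped)

cfg = load_pot_config('''
{
    "model": {"model_name": "cpumodel"},  // model section
    "engine": {"config": "./cpumodel.yaml"},
    "compression": {"algorithms": [{"name": "DefaultQuantization"}]}
}
''')
print(cfg["compression"]["algorithms"][0]["name"])  # DefaultQuantization
```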

