The Python atlasutil library depends on pyav, numpy, and PIL. Install these third-party libraries in the runtime environment.
a. Install pyav
Install the other dependencies first:
sudo pip3.7.5 install Cython
sudo apt-get install pkg-config libxcb-shm0-dev libxcb-xfixes0-dev
sudo cp /usr/local/ffmpeg/lib/pkgconfig/* /usr/share/pkgconfig/
vim ~/.bashrc
Add the following line:
export PKG_CONFIG_PATH=/usr/share/pkgconfig/
Save and exit, then reload the configuration:
source ~/.bashrc
Install pyav from source:
git clone https://gitee.com/mirrors/PyAV.git
cd PyAV
python3.7.5 setup.py build --ffmpeg-dir=/usr/local/ffmpeg
sudo -E python3.7.5 setup.py install
Verify that pyav installed successfully:
cd ..
python3.7.5
import av
Note: do not run this test inside the PyAV directory; otherwise the import fails, because Python picks up the local source tree instead of the installed package.
b. Install PIL
sudo pip3.7.5 install Pillow
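Once all three libraries are installed, a quick sanity check can confirm they resolve from the Python 3.7.5 environment. The sketch below uses only the standard library, so it works even when some dependencies are missing; the module names `av`, `numpy`, and `PIL` are taken from the dependency list above.

```python
from importlib.util import find_spec

def missing_deps(modules=("av", "numpy", "PIL")):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_deps()
    if missing:
        print("missing dependencies:", ", ".join(missing))
    else:
        print("all dependencies found")
```

Run it with `python3.7.5` so the check targets the same interpreter that the sample uses.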
This model conversion is based on VGG16 trained with the open-source PyTorch framework.
First use PyTorch to convert the .pth weight file to a .onnx file, then use the ATC tool to convert the .onnx file to an offline inference model (.om) file.
Obtain the weight file.
Click Link to download the vgg16-397923af.pth weight file from the PyTorch open-source pretrained models.
Export the ONNX file.
The vgg16_pth2onnx.py script converts the .pth file to a .onnx file. Run the following command:
python3.7 vgg16_pth2onnx.py ./vgg16-397923af.pth ./vgg16.onnx
The first argument is the input weight file path; the second is the output ONNX file path. On success, the vgg16.onnx model file is generated in the current directory.
To convert the .onnx file to a .om file with the ATC tool, the ONNX opset version must be 11. The opset_version argument of torch.onnx.export in vgg16_pth2onnx.py is therefore set to 11; do not change it.
Convert the ONNX model to an OM model with the ATC tool.
a. Modify the vgg16_atc.sh script to perform the conversion with the ATC tool. A sample script follows:
# Configure environment variables
export install_path=/usr/local/Ascend/ascend-toolkit/latest
export PATH=/usr/local/python3.7.5/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
export PYTHONPATH=${install_path}/atc/python/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH=${install_path}/atc/lib64:${install_path}/acllib/lib64:$LD_LIBRARY_PATH
export ASCEND_OPP_PATH=${install_path}/opp
# For binary input, run the following command
atc --model=./vgg16.onnx --framework=5 --output=vgg16_bs1 --input_format=NCHW --input_shape="actual_input_1:1,3,224,224" --log=info --soc_version=Ascend310
**Note**: The environment variables in the vgg16_atc.sh script are for reference only; configure them according to your actual installation environment. For details, see the [CANN V100R020C20 Auxiliary Development Tool Guide (Inference)](https://support.huawei.com/enterprise/zh/doc/EDOC1100180777/6dfa6beb).
Parameter description:
- --model: the ONNX model file.
- --framework: 5 indicates an ONNX model.
- --output: the output OM model.
- --input_format: format of the input data.
- --input_shape: shape of the input data.
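The --input_shape value pairs the model's input tensor name with its N,C,H,W dimensions, separated by a colon. A small helper (hypothetical, not part of the ATC tool) makes the format of the value `actual_input_1:1,3,224,224` explicit:

```python
def parse_input_shape(spec):
    """Parse an ATC-style --input_shape value into (tensor_name, dims)."""
    name, _, dims = spec.partition(":")
    return name, [int(d) for d in dims.split(",")]

name, shape = parse_input_shape("actual_input_1:1,3,224,224")
print(name, shape)  # actual_input_1 [1, 3, 224, 224]
```

Here the batch size 1 matches the `_bs1` suffix in the output model name, and 3,224,224 are the channels, height, and width expected by VGG16.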
b. Run the vgg16_atc.sh script to convert the .onnx file to an offline inference model (.om) file.
bash vgg16_atc.sh
On success, vgg16_bs1.om is generated. Users can also download the converted model from the netdisk (extraction code: u9uh).
Run the sample; the output should look similar to the following:
vgg16_imagenet_picture/src$ python3.7.5 main.py
init resource stage:
Init resource success
Init model resource start...
[Model] create model output dataset:
malloc output 0, size 4000
Create model output dataset success
Init model resource success
=== banana.jpg===
crop shape = (224, 224, 3)
img shape = (224, 224, 3)
in pre_process, use time:0.12015104293823242
in inference, use time:0.011131048202514648
images:/home/HwHiAiUser/workspace/Samples/python/vgg16_imagenet_picture/src/../data/banana.jpg
======== top5 inference results: =============
label:954 confidence: 10.976562, class: banana
label:506 confidence: 9.156250, class: coil, spiral, volute, whorl, helix
label:767 confidence: 7.738281, class: rubber eraser, rubber, pencil eraser
label:673 confidence: 7.363281, class: mouse, computer mouse
label:488 confidence: 7.250000, class: chain
in post_process, use time:0.0005903244018554688
=== dog.jpg===
crop shape = (224, 224, 3)
img shape = (224, 224, 3)
in pre_process, use time:0.15053510665893555
in inference, use time:0.010085821151733398
images:/home/HwHiAiUser/workspace/Samples/python/vgg16_imagenet_picture/src/../data/dog.jpg
======== top5 inference results: =============
label:158 confidence: 12.117188, class: toy terrier
label:157 confidence: 10.976562, class: papillon
label:264 confidence: 10.726562, class: Cardigan, Cardigan Welsh corgi
label:232 confidence: 10.515625, class: Border collie
label:151 confidence: 10.156250, class: Chihuahua
in post_process, use time:0.0005805492401123047
=== cat.jpg===
crop shape = (224, 224, 3)
img shape = (224, 224, 3)
in pre_process, use time:0.03819561004638672
in inference, use time:0.009888887405395508
images:/home/HwHiAiUser/workspace/Samples/python/vgg16_imagenet_picture/src/../data/cat.jpg
======== top5 inference results: =============
label:283 confidence: 22.625000, class: Persian cat
label:154 confidence: 14.062500, class: Pekinese, Pekingese, Peke
label:287 confidence: 12.828125, class: lynx, catamount
label:281 confidence: 12.101562, class: tabby, tabby cat
label:152 confidence: 11.375000, class: Japanese spaniel
in post_process, use time:0.0005197525024414062
acl resource release all resource
dvpp resource release success
Model release source success
acl resource release stream
acl resource release context
Reset acl device 0
Release acl resource success
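The confidence values in the log above are raw logits from the network's final layer, one per ImageNet class (the `malloc output 0, size 4000` line corresponds to 1000 float32 scores). The top-5 selection, plus an optional softmax to turn logits into probabilities, can be sketched in plain Python; the four-logit list below is a toy stand-in for the full 1000-entry output.

```python
import math

def top_k(logits, k=5):
    """Return (label_index, logit) pairs for the k largest logits."""
    return sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:k]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [10.97, 9.16, 7.74, 7.36]      # toy example; the sample uses 1000 scores
print(top_k(logits, 2))                 # [(0, 10.97), (1, 9.16)]
print(round(sum(softmax(logits)), 6))   # 1.0
```

In the sample output the returned indices are mapped to class names through an ImageNet label table, which is how `label:954` becomes `banana`.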
Sample source code: https://gitee.com/shiner-chen/Samples/tree/master/python/vgg16_imagenet_picture