AES200 Device AI Samples Test Document


1. Verify the Device Software Environment

1.1. Check NPU Information

[root@localhost ~]# npu-smi info
+------------------------------------------------------------------------------------------------+
| npu-smi 22.0.1                           Version: 22.0.2                                       |
+-----------------------+-----------------+------------------------------------------------------+
| NPU     Name          | Health          | Power(W)     Temp(C)           Hugepages-Usage(page) |
| Chip    Device        | Bus-Id          | AICore(%)    Memory-Usage(MB)                        |
+=======================+=================+======================================================+
| 0       310P1         | Warning         | NA           33                0    / 0              |
| 0       0             | NA              | 0            1138 / 43195                            |
+=======================+=================+======================================================+

1.2. Check the Python Version

[root@localhost ~]# python3 --version
Python 3.7.9

1.3. Check Whether the Ascend Toolkit Is Installed

[root@localhost ~]# ls /usr/local/Ascend/ascend-toolkit/
5.1  5.1.RC2.alpha007  latest  set_env.sh
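
On a typical CANN installation, latest is a symlink to the active toolkit version; it can be listed to confirm which version is in use (the path below matches the listing above):

[root@localhost ~]# ls -l /usr/local/Ascend/ascend-toolkit/latest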

1.4. Check the Environment Variable Settings

[root@localhost ~]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi


# Set the library path for Python 3.7.5
#export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:$LD_LIBRARY_PATH
# If multiple python3 versions exist in the environment, force Python 3.7.5 to be used
#export PATH=/usr/local/python3.7.5/bin:$PATH

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/aicpu_kernels/0/aicpu_kernels_device
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/aicpu_kernels/0/aicpu_kernels_device/sand_box
# Add the toolkit environment variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
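
As an optional sanity check, reload the file and confirm that the toolkit tools are visible in a new shell (this assumes set_env.sh adds the CANN binaries such as atc to PATH and extends LD_LIBRARY_PATH):

[root@localhost ~]# source ~/.bashrc
[root@localhost ~]# which atc
[root@localhost ~]# echo $LD_LIBRARY_PATH | tr ':' '\n' | grep -i ascend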

1.5. Check the GCC Version

[root@localhost ~]# gcc --version
gcc (GCC) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

2. Prepare the Demo Runtime Environment

2.1. Prepare the Python Runtime Environment

  1. Upgrade python3

    [root@localhost ~]# yum install python3-3.7.9
    Last metadata expiration check: 3:39:27 ago on Wed 17 Aug 2022 09:06:52 AM CST.
    Package python3-3.7.9-18.oe1.aarch64 is already installed.
    ......
    Upgraded:
      python3-3.7.9-23.oe1.aarch64
    
    Complete!
    
    
  2. Install python3-devel

    [root@localhost ~]# yum install python3-devel
    ......
    Installed:
      python3-devel-3.7.9-23.oe1.aarch64
    
    Complete!
    
  3. Configure the environment variables

      [root@localhost environment]# vi ~/.bashrc
      # Append the following lines at the end of the file.
      export CPU_ARCH=`arch`
      export THIRDPART_PATH=${HOME}/Ascend/thirdpart/${CPU_ARCH}  # third-party libraries linked when the code is built
      export PYTHONPATH=${THIRDPART_PATH}/acllite:$PYTHONPATH     # set PYTHONPATH to a fixed directory for the acllite Python library
      export INSTALL_DIR=/usr/local/Ascend/ascend-toolkit/latest  # installation path of the CANN software
      
      [root@localhost environment]# mkdir -p ~/Ascend/thirdpart/aarch64
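
    A quick optional check that the new variables resolve correctly after reloading .bashrc:

      [root@localhost environment]# source ~/.bashrc
      [root@localhost environment]# echo $CPU_ARCH $THIRDPART_PATH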
    
  4. Upgrade pip

    [root@localhost ~]# python3 -m pip install --upgrade pip --user -i https://mirrors.huaweicloud.com/repository/pypi/simple
    
  5. Install dependency packages

    [root@localhost ~]# yum install hdf5
    ......
    Installed:
      hdf5-1.8.20-12.oe1.aarch64                                                                                                          libaec-1.0.4-1.oe1.aarch64
    
    Complete!
    
    
    [root@localhost ~]# yum install hdf5-devel
    ......
    Installed:
      hdf5-devel-1.8.20-12.oe1.aarch64                                                                                             libaec-devel-1.0.4-1.oe1.aarch64
    
    Complete!
    
    
  6. Install Python dependency packages

    [root@localhost environment]# python3 -m pip install Cython numpy tornado==5.1.0 protobuf attrs  psutil absl-py  tensorflow pillow --user -i https://mirrors.huaweicloud.com/repository/pypi/simple
    
    Successfully installed MarkupSafe-2.1.1 absl-py-1.2.0 astunparse-1.6.3 attrs-22.1.0 cached-property-1.5.2 cachetools-5.1.0 certifi-2022.6.15 charset-normalizer-2.1.0 flatbuffers-2.0 gast-0.4.0 google-auth-2.10.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 grpcio-1.47.0 h5py-3.4.0 idna-3.3 importlib-metadata-4.12.0 keras-2.9.0 keras-preprocessing-1.1.2 libclang-14.0.6 markdown-3.4.1 oauthlib-3.2.0 opt-einsum-3.3.0 packaging-21.0 protobuf-3.19.4 psutil-5.9.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.28.1 requests-oauthlib-1.3.1 rsa-4.9 tensorboard-2.9.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.10.0rc0 tensorflow-cpu-aws-2.10.0rc0 tensorflow-estimator-2.9.0 tensorflow-io-gcs-filesystem-0.26.0 termcolor-1.1.0 typing-extensions-4.2.0 urllib3-1.26.11 werkzeug-2.2.2 wheel-0.37.1 wrapt-1.14.1 zipp-3.8.1
    
    [root@localhost environment]# python3 -m pip install opencv-python --user -i https://mirrors.huaweicloud.com/repository/pypi/simple
    
    Successfully installed opencv-python-4.6.0.66
    WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
    
    
    [root@localhost environment]# python3 -m pip install av
    .....
    Successfully installed av-9.2.0
    
    [root@localhost environment]# python3 -m pip install pillow
    ......
    Successfully installed pillow-9.1.1
    
    
    [root@localhost model]# python3 -m pip install sympy
    ......
    Installing collected packages: mpmath, sympy
    Successfully installed mpmath-1.2.1 sympy-1.10.1
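
    A quick import check of the key packages installed above (cv2, numpy, PIL and av are the import names of opencv-python, numpy, pillow and av):

    [root@localhost environment]# python3 -c "import cv2, numpy, PIL, av; print(cv2.__version__, numpy.__version__, av.__version__)"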
    
    
  7. Install ffmpeg

    [root@localhost environment]# yum install ffmpeg
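
    Confirm the installation by printing the version:

    [root@localhost environment]# ffmpeg -version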
    
  8. Install python-acllite

    [root@localhost environment]# cp -r /root/workspace/samples/python/common/acllite ${THIRDPART_PATH}
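
    The copy can be verified by listing the target directory and trying an import; the module name acllite_utils below follows the samples repository layout and may need to be adjusted:

    [root@localhost environment]# ls ${THIRDPART_PATH}/acllite
    [root@localhost environment]# python3 -c "import acllite_utils"   # resolves because PYTHONPATH includes ${THIRDPART_PATH}/acllite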
    

2.2. Prepare the C++ Runtime Environment

Because building opencv and ffmpeg from source is time-consuming, prebuilt packages are provided; download them from the table below and simply extract them onto the system.

Software  Version  Download link
Opencv    4.2      opencv-4.2-EulerOS.tgz
Ffmpeg    4.1      ffmpeg-4.1-EulerOS.tar.gz

Upload the packages to the /root directory, then extract and install them:

# Extract opencv to the filesystem root; by default it unpacks into /home/HwHiAiUser/ascend_ddk/arm and /usr/lib/python3.7/site-packages/cv2
[root@localhost ~]# tar xvf opencv-4.2-EulerOS.tgz -C / 
# Extract ffmpeg-4.1-EulerOS.tar.gz; by default it unpacks into /usr/local/ffmpeg
[root@localhost ~]# tar xvf ffmpeg-4.1-EulerOS.tar.gz -C /
[root@localhost ~]# cp /usr/local/ffmpeg/lib/pkgconfig/* /usr/share/pkgconfig/
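
The copied pkg-config files can be used to confirm that the prebuilt ffmpeg libraries are resolvable (libavcodec is used here only as an example module; pkg-config must be installed):

[root@localhost ~]# ls /usr/share/pkgconfig/ | grep libav
[root@localhost ~]# pkg-config --modversion libavcodec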

Note: To run the samples as the root user, copy the /home/HwHiAiUser/ascend_ddk directory into /root:

[root@localhost ~]# cp -r /home/HwHiAiUser/ascend_ddk /root

Prepare environment variables

Edit the .bashrc file:
[root@localhost ~]# vim ~/.bashrc
Append the following at the end of the file (CPU_ARCH and THIRDPART_PATH must be defined before the lines that reference them):
export CPU_ARCH=`arch`
export THIRDPART_PATH=${HOME}/Ascend/thirdpart/${CPU_ARCH}
export LD_LIBRARY_PATH=$HOME/ascend_ddk/arm/lib64/:/usr/local/ffmpeg/lib:$THIRDPART_PATH/lib:$LD_LIBRARY_PATH
export NPU_HOST_LIB=/usr/local/Ascend/ascend-toolkit/latest/acllib/lib64/stub
export DDK_PATH=/usr/local/Ascend/ascend-toolkit/latest
export INSTALL_DIR=/usr/local/Ascend/ascend-toolkit/latest
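
A short optional check, after reloading .bashrc, that the prebuilt opencv and ffmpeg libraries are where LD_LIBRARY_PATH now points:

[root@localhost ~]# source ~/.bashrc
[root@localhost ~]# ls $HOME/ascend_ddk/arm/lib64 | head -n 5
[root@localhost ~]# ls /usr/local/ffmpeg/lib | head -n 5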

3. Run the C++ Demo Samples

3.1. Build the Dependencies

Prerequisites

Before deploying this sample, prepare the following environment:

  • Confirm that the environment has been prepared as described in section 2.2.

  • The development environment and runtime environment for the corresponding product have been installed.

Software preparation
  1. Obtain the source package.

    Either of the following two download methods can be used; choose one to prepare the source code.

    • Command-line download (takes longer, but the steps are simple).

      In the development environment, run the following commands to download the source repository.

      cd $HOME

      git clone https://gitee.com/ascend/samples.git

    • ZIP package download (faster download, but slightly more steps).

      1. In the upper-right corner of the samples repository, open the Clone/Download drop-down list and select Download ZIP.

      2. Upload the ZIP package to the home directory of a normal user in the development environment, for example $HOME/ascend-samples-master.zip.

      3. In the development environment, run the following commands to unzip the package.

        cd $HOME

        unzip ascend-samples-master.zip

Build and install acllite

Run the following commands to build acllite.

cd $HOME/samples/cplusplus/common/acllite
# Modify the Makefile according to the following diff
[root@localhost acllite]# git diff Makefile
diff --git a/cplusplus/common/acllite/Makefile b/cplusplus/common/acllite/Makefile
index 413b7d10a..36aef85eb 100755
--- a/cplusplus/common/acllite/Makefile
+++ b/cplusplus/common/acllite/Makefile
@@ -42,7 +42,8 @@ INC_DIR = \
         -I$(THIRDPART_PATH)/include/ \
         -I$(THIRDPART_PATH)/include/presenter/agent/ \
         -I$(INSTALL_DIR)/runtime/include/ \
-        -I$(INSTALL_DIR)/driver/
+        -I$(INSTALL_DIR)/driver/ \
+       -I/usr/local/ffmpeg/include/

 CC_FLAGS := $(INC_DIR) -DENABLE_DVPP_INTERFACE -std=c++11 -fPIC -Wall -O2
 LNK_FLAGS := \
@@ -50,6 +51,7 @@ LNK_FLAGS := \
     -Wl,-rpath-link=$(THIRDPART_PATH)/lib \
         -L$(INSTALL_DIR)/runtime/lib64/stub \
         -L$(THIRDPART_PATH)/lib \
+       -L/usr/local/ffmpeg/lib \
         -lascendcl \
         -lacl_dvpp \
         -lstdc++ \

make            # build
make install    # install
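
If the build and install succeed, the acllite library should now be present under ${THIRDPART_PATH} (the library file name is assumed from the common acllite Makefile and may differ):

ls ${THIRDPART_PATH}/lib | grep -i acllite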

3.2. Object Detection: YOLOV3_coco_detection_picture_DVPP_with_AIPP Sample

Function: run inference on an input image with the yolov3 model and draw the detection results on the output image.

Sample input: an original JPG image file.

Sample output: a JPG file annotated with the inference results.

3.2.1. Prerequisites

Before deploying this sample, prepare the following environment:

  • Confirm that the environment has been prepared as described in section 2.2 and that acllite has been built and installed.

  • The development environment and runtime environment for the corresponding product have been installed.

3.2.2. Model Conversion
Model name  Model description                                        Model download path
yolov3      Image detection inference model, based on Caffe yolov3.  Download the model and weight files following the "download the original model" section of README.md at https://gitee.com/ascend/modelzoo/tree/master/contrib/TensorFlow/Research/cv/yolov3/ATC_yolov3_caffe_AE.
# For convenience, the original model download and model conversion commands are given here and can be copied and executed directly. You can also download the model from modelzoo as listed in the table above and convert it manually to learn more details.
cd $HOME/samples/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/model     
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/Yolov3/yolov3.caffemodel
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/Yolov3/yolov3.prototxt
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/models/YOLOV3_coco_detection_picture_DVPP_with_AIPP/aipp_nv12.cfg
atc --model=yolov3.prototxt --weight=yolov3.caffemodel --framework=0 --output=yolov3 --soc_version=Ascend310 --insert_op_conf=aipp_nv12.cfg
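
In this atc call, --framework=0 selects Caffe as the source framework, --output names the generated offline model, and --insert_op_conf builds the AIPP preprocessing described in aipp_nv12.cfg into the model. On success the converted model appears in the current directory:

ls -lh yolov3.om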

3.2.3. Build the Sample

Modify the code according to the following diff:

[root@localhost YOLOV3_coco_detection_picture_DVPP_with_AIPP]# git diff src/CMakeLists.txt
diff --git a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/CMakeLists.txt b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/CMakeLists.txt
index 13e495896..b8a1c57f5 100644
--- a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/CMakeLists.txt
+++ b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/CMakeLists.txt
@@ -42,6 +42,7 @@ endif()
 include_directories(
     $ENV{INSTALL_DIR}/acllib/include/
     $ENV{THIRDPART_PATH}/include/
+    $ENV{HOME}/ascend_ddk/arm/include/opencv4
     ../inc/
 )

@@ -49,7 +50,9 @@ include_directories(
 link_directories(
     $ENV{INSTALL_DIR}/runtime/lib64/stub
     $ENV{THIRDPART_PATH}/lib/
+    $ENV{HOME}/ascend_ddk/arm/lib64
     $ENV{INSTALL_DIR}/driver
+    /usr/local/ffmpeg/lib
 )

 add_executable(main
[root@localhost YOLOV3_coco_detection_picture_DVPP_with_AIPP]# git diff src/sample_process.cpp
diff --git a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/sample_process.cpp b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/sample_process.cpp
index 8c0512c9d..96f52fa3a 100644
--- a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/sample_process.cpp
+++ b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/src/sample_process.cpp
@@ -200,7 +200,8 @@ const aclmdlDataset* inferenceOutput, uint32_t idx) {

 void SampleProcess::DrawBoundBoxToImage(vector<BBox>& detectionResults,
 const string& origImagePath) {
-    cv::Mat image = cv::imread(origImagePath, CV_LOAD_IMAGE_UNCHANGED);
+    //cv::Mat image = cv::imread(origImagePath, CV_LOAD_IMAGE_UNCHANGED);
+    cv::Mat image = cv::imread(origImagePath, cv::IMREAD_UNCHANGED);
     for (int i = 0; i < detectionResults.size(); ++i) {
         cv::Point p1, p2;
         p1.x = detectionResults[i].rect.ltX;

Run the following commands to execute the build script and start building the sample.

cd $HOME/samples/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/scripts
bash sample_build.sh
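
If the build script finishes without errors, the executable (named main in the CMakeLists above) should be present in the sample's out directory; the relative path below assumes the standard samples layout:

ls ../out/main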

3.2.4. Run the Sample

Data preparation

cd $HOME/samples/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP
mkdir data

Download a test image yourself (a test image is provided), then upload it to the $HOME/samples/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture_DVPP_with_AIPP/data directory.

Run the run script to start the sample.

bash sample_run.sh

3.2.5. Check the Results

After the run completes, the inference results are printed on the command line of the runtime environment.

[INFO]  model execute success, modelId is 1
[INFO]  destroy model input success
353 190 377 211 person83%
530 195 540 220 person68%
469 193 479 216 person58%
336 192 349 209 person55%
223 217 340 345 train99%
[INFO]  unload model success, modelId is 1
[INFO]  destroy model description success
[INFO]  destroy model output success

3.3. Image Classification: googlenet_imagenet_picture Sample

Function: classify an input image with the googlenet model.

Sample input: a JPG image to be classified.

Sample output: a JPG image annotated with the inference result.

3.3.1. Prerequisites

Before deploying this sample, prepare the following environment:

  • Confirm that the environment has been prepared as described in section 2.2 and that acllite has been built and installed.

  • The development environment and runtime environment for the corresponding product have been installed.

3.3.2. Model Conversion
Model name  Model description                                                Model download path
googlenet   Image classification inference model, based on Caffe GoogLeNet.  Download the model and weight files following the "download the original model" section of README.md at https://gitee.com/ascend/ModelZoo-TensorFlow/tree/master/TensorFlow/contrib/cv/googlenet/ATC_googlenet_caffe_AE.
# For convenience, the original model download and model conversion commands are given here and can be copied and executed directly. You can also download the model from modelzoo as listed in the table above and convert it manually to learn more details.

cd ${HOME}/samples/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/model
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/classification/googlenet.caffemodel
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/classification/googlenet.prototxt
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/models/googlenet_imagenet_picture/insert_op.cfg
atc --model="./googlenet.prototxt" --weight="./googlenet.caffemodel" --framework=0 --output="googlenet" --soc_version=Ascend310P1 --insert_op_conf=./insert_op.cfg --input_shape="data:1,3,224,224" --input_format=NCHW

3.3.3. Build the Sample

Modify the corresponding files according to the following diffs:

[root@localhost googlenet_imagenet_picture]# git diff src/CMakeLists.txt
diff --git a/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/CMakeLists.txt b/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/CMakeLists.txt
index 42b59d33e..79a50c419 100755
--- a/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/CMakeLists.txt
+++ b/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/CMakeLists.txt
@@ -39,6 +39,7 @@ endif()
 include_directories(
     $ENV{THIRDPART_PATH}/include/
     $ENV{INSTALL_DIR}/acllib/include/
+    $ENV{HOME}/ascend_ddk/arm/include/opencv4/^M
     ../inc/
 )

@@ -51,6 +52,8 @@ link_directories(
     $ENV{THIRDPART_PATH}/lib/
     $ENV{INSTALL_DIR}/runtime/lib64/stub
     $ENV{INSTALL_DIR}/driver
+    $ENV{HOME}/ascend_ddk/arm/lib64^M
+    /usr/local/ffmpeg/lib^M
 )

 add_executable(main
@@ -61,4 +64,4 @@ add_executable(main
 target_link_libraries(main
         ascendcl acl_dvpp stdc++ ${COMMON_DEPEND_LIB} opencv_highgui opencv_core opencv_imgproc opencv_imgcodecs opencv_calib3d opencv_features2d opencv_videoio)

-install(TARGETS main DESTINATION ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
\ No newline at end of file
+install(TARGETS main DESTINATION ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})^M
[root@localhost googlenet_imagenet_picture]# git diff src/classify_process.cpp
diff --git a/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/classify_process.cpp b/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/classify_process.cpp
index 58814fd13..f81a386bd 100755
--- a/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/classify_process.cpp
+++ b/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/src/classify_process.cpp
@@ -96,7 +96,7 @@ AclLiteError ClassifyProcess::Process(std::vector<std::string>& fileVec, aclrtRu

 AclLiteError ClassifyProcess::Preprocess(const string& imageFile) {
     ACLLITE_LOG_INFO("Read image %s", imageFile.c_str());
-    cv::Mat origMat = cv::imread(imageFile, CV_LOAD_IMAGE_COLOR);
+    cv::Mat origMat = cv::imread(imageFile, cv::IMREAD_COLOR);
     if (origMat.empty()) {
         ACLLITE_LOG_ERROR("Read image failed");
         return ACLLITE_ERROR;
@@ -171,7 +171,7 @@ AclLiteError ClassifyProcess::Postprocess(const string& origImageFile,
 }

 void ClassifyProcess::LabelClassToImage(int classIdx, const string& origImagePath) {
-    cv::Mat resultImage = cv::imread(origImagePath, CV_LOAD_IMAGE_COLOR);
+    cv::Mat resultImage = cv::imread(origImagePath, cv::IMREAD_COLOR);^M

     // generate colorized image
     int pos = origImagePath.find_last_of("/");
@@ -212,4 +212,4 @@ void ClassifyProcess::DestroyResource()
         isReleased_ = true;
     }

-}
\ No newline at end of file
+}

Run the following commands to execute the build script and start building the sample.

cd $HOME/samples/cplusplus/level2_simple_inference/1_classification/googlenet_imagenet_picture/scripts
bash sample_build.sh

3.3.4. Run the Sample

Run the run script to start the sample.

bash sample_run.sh

3.3.5. Check the Results

After the run completes, the inference results are printed on the command line of the runtime environment, and the inference result image is generated in the $HOME/googlenet_imagenet_picture/out/output directory.

[INFO]  Resize image ../data/rabbit.jpg
[INFO]  top 1: index[331] value[0.674316]
[INFO]  top 2: index[332] value[0.296875]
[INFO]  top 3: index[330] value[0.013885]
[INFO]  top 4: index[203] value[0.002548]
[INFO]  top 5: index[29] value[0.002089]
[INFO]  Classification Excute Inference success
[INFO]  Execute sample finish
[INFO]  Unload model ../model/googlenet.om success
[INFO]  destroy context ok

[Figure 1: inference result image]

4. Run the Python Demo Samples

4.1. Dependencies

4.1.1. Prerequisites

Check whether the following requirements are met; if any are not, handle them as described in the remarks. If the CANN version is upgraded, also check whether the third-party dependencies need to be reinstalled (the third-party dependencies for 5.0.4 and later differ from those for versions below 5.0.4 and must be reinstalled).

Condition               Requirement                  Remarks
CANN version            >= 5.0.4                     Install CANN following the installation steps in the CANN samples repository introduction; if the CANN version is lower than required, switch the samples repository to the branch matching your CANN version according to the version description.
Hardware                Atlas200DK / Atlas300(ai1s)  Currently verified on Atlas200DK and Atlas300; see the hardware platform description for product details. Other products may require additional adaptation.
Third-party dependency  python-acllite               Install the required dependencies by following the third-party dependency installation guide (Python samples).
4.1.2. Sample Preparation
  1. Obtain the source package.

    Either of the following two download methods can be used; choose one to prepare the source code.

    • Command-line download (takes longer, but the steps are simple).

      # In the development environment, run the following commands as a non-root user to download the source repository.
      cd ${HOME}
      git clone https://gitee.com/ascend/samples.git
      
    • ZIP package download (faster download, but slightly more steps).
      Note: To download another code version, first switch the samples repository branch according to the prerequisites above.

       # 1. In the upper-right corner of the samples repository, open the [Clone/Download] drop-down list and select [Download ZIP].
       # 2. Upload the ZIP package to the home directory of a normal user in the development environment, for example ${HOME}/ascend-samples-master.zip.
       # 3. In the development environment, run the following commands to unzip the package.
       cd ${HOME}
       unzip ascend-samples-master.zip
      

4.2. YOLOV3_coco_detection_picture Sample

Function: run inference on an input image with the yolov3 model and draw the detection results on the output image.
Sample input: an original JPG image file.
Sample output: a JPG file annotated with the inference results.

4.2.1. Prerequisites

Before deploying this sample, prepare the following environment:

  • Confirm that the environment has been prepared as described in section 2.1.

  • The development environment and runtime environment for the corresponding product have been installed.

4.2.2. Model Conversion
Model name  Model description                               Model download path
yolov3      Object detection model based on Caffe-YOLOV3.   Download the model and weight files following the "download the original model" section of README.md at https://gitee.com/ascend/ModelZoo-TensorFlow/tree/master/TensorFlow/contrib/cv/yolov3/ATC_yolov3_caffe_AE.
# For convenience, the original model download and model conversion commands are given here and can be copied and executed directly. You can also download the model from modelzoo as listed in the table above and convert it manually to learn more details.
cd ${HOME}/samples/python/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture/model
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/Yolov3/yolov3.caffemodel
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/Yolov3/yolov3.prototxt
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/Yolov3/aipp_nv12.cfg
atc --model=yolov3.prototxt --weight=yolov3.caffemodel --framework=0 --output=yolov3_yuv --soc_version=Ascend310 --insert_op_conf=aipp_nv12.cfg

4.2.3. Run the Sample
  1. Obtain the test image required by the sample.
Run the following commands to enter the sample's data folder and download the test image.
cd $HOME/samples/python/level2_simple_inference/2_object_detection/YOLOV3_coco_detection_picture/data
wget https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/models/YOLOV3_coco_detection_picture/dog1_1024_683.jpg
cd ../src
  2. Run the sample.
python3 object_detect.py ../data/

4.2.4. Check the Results

post process
[[         1 1425036528 1425036528 1425036528 1425036528 1425036528
  1425036528 1425036528]]
box num  1


[[ 55.25    ]
 [ 60.125   ]
 [375.      ]
 [236.      ]
 [  0.984375]
 [ 16.      ]]
images:../data/dog1_1024_683.jpg
======== inference results: =============
 dog: class  16, box  136  148  923  580, score  0.984375

After the run completes, the inference result image is generated in the sample project's out/output directory; the comparison is shown below.
[Figure 2: inference result image]
