Portrait Matting on the OK3588 Board (Part 12)

1. Host-Side Model Conversion

We use FastDeploy to deploy the deep learning model to the OK3588 board.

Enter the virtual environment on the Ubuntu host:
conda activate ok3588

Install rknn-toolkit2 (this tool cannot perform model conversion on the OK3588 board itself, so the conversion is done on the host):

wget https://bj.bcebos.com/fastdeploy/third_libs/rknn_toolkit2-1.5.1b19+4c81851a-cp36-cp36m-linux_x86_64.whl
pip install rknn_toolkit2-1.5.1b19+4c81851a-cp36-cp36m-linux_x86_64.whl
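
To confirm the wheel installed into the right environment, a quick sanity check is to import the toolkit (a minimal sketch; it assumes the Python 3.6 environment that matches the cp36 wheel):

# Sanity check: rknn-toolkit2 should import without errors
from rknn.api import RKNN

rknn = RKNN(verbose=True)   # verbose=True enables detailed logging
print("rknn-toolkit2 is available")
rknn.release()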

Download FastDeploy:

git clone https://github.com/PaddlePaddle/FastDeploy.git

Model conversion:

# Clone the Paddle2ONNX repository
git clone https://github.com/PaddlePaddle/Paddle2ONNX

# Download the Paddle static-graph model and fix its input shape
## Enter the directory that holds the shape-fixing script
cd Paddle2ONNX/tools/paddle
## Download and extract the Paddle static-graph model
wget https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz
tar xvf PP_HumanSegV2_Lite_192x192_infer.tgz
python paddle_infer_shape.py --model_dir PP_HumanSegV2_Lite_192x192_infer/ \
                             --model_filename model.pdmodel \
                             --params_filename model.pdiparams \
                             --save_dir PP_HumanSegV2_Lite_192x192_infer \
                             --input_shape_dict="{'x':[1,3,192,192]}"

# Convert the static-graph model to ONNX. Note: keep save_file consistent with the archive name
paddle2onnx --model_dir PP_HumanSegV2_Lite_192x192_infer \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file PP_HumanSegV2_Lite_192x192_infer/PP_HumanSegV2_Lite_192x192_infer.onnx \
            --enable_dev_version True
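
Before moving on, it is worth checking that the exported ONNX model really has the fixed 1x3x192x192 input. A minimal check with the onnx Python package (an extra dependency, not required by the steps above):

import onnx

# Load the exported model and print the dimensions of its first input
model = onnx.load("PP_HumanSegV2_Lite_192x192_infer/PP_HumanSegV2_Lite_192x192_infer.onnx")
dims = model.graph.input[0].type.tensor_type.shape.dim
print([d.dim_value for d in dims])   # expect [1, 3, 192, 192]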

# Convert the ONNX model to an RKNN model
# Copy the ONNX model directory into the FastDeploy root directory
cp -r ./PP_HumanSegV2_Lite_192x192_infer /path/to/FastDeploy
# Convert the model; the RKNN model is generated in the PP_HumanSegV2_Lite_192x192_infer directory

cd FastDeploy
python tools/rknpu2/export.py \
        --config_path tools/rknpu2/config/PP_HumanSegV2_Lite_192x192_infer.yaml \
        --target_platform rk3588

Note that for the step above you need to create and edit PP_HumanSegV2_Lite_192x192_infer.yaml yourself:

mean:
  -
    - 127.5
    - 127.5
    - 127.5
std:
  -
    - 127.5
    - 127.5
    - 127.5
model_path: ./PP_HumanSegV2_Lite_192x192_infer/PP_HumanSegV2_Lite_192x192_infer.onnx
outputs_nodes:
do_quantization: False
dataset:
output_folder: "./PP_HumanSegV2_Lite_192x192_infer"
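
For reference, export.py essentially forwards these YAML values to the rknn-toolkit2 API. A simplified sketch of the equivalent calls (an illustration, not the actual script):

from rknn.api import RKNN

# Values mirror PP_HumanSegV2_Lite_192x192_infer.yaml
rknn = RKNN(verbose=True)
rknn.config(mean_values=[[127.5, 127.5, 127.5]],
            std_values=[[127.5, 127.5, 127.5]],
            target_platform="rk3588")
rknn.load_onnx(model="./PP_HumanSegV2_Lite_192x192_infer/PP_HumanSegV2_Lite_192x192_infer.onnx")
rknn.build(do_quantization=False)   # do_quantization: False, so no calibration dataset is needed
rknn.export_rknn("./PP_HumanSegV2_Lite_192x192_infer/PP_HumanSegV2_Lite_192x192_infer_rk3588_unquantized.rknn")
rknn.release()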

Because we fixed the model's input shape during conversion, the corresponding deploy.yaml file (used at inference time) also has to be modified, as follows:

Deploy:
  input_shape:
  - 1
  - 3
  - 192
  - 192
  model: model.pdmodel
  output_dtype: float32
  output_op: none
  params: model.pdiparams
  transforms:
  - target_size:
    - 192
    - 192
    type: Resize
  - type: Normalize
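
The transforms section describes the preprocessing applied before inference: resize to 192x192, then normalize. PaddleSeg's default Normalize (mean 0.5, std 0.5 after scaling to [0, 1]) is equivalent to (x - 127.5) / 127.5, which matches the mean/std values used in the RKNN export config above. A small NumPy/OpenCV sketch of that pipeline, for illustration only:

import cv2
import numpy as np

# Preprocessing implied by deploy.yaml: Resize(192, 192) + Normalize
img = cv2.imread("images/human.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (192, 192))
img = (img.astype(np.float32) - 127.5) / 127.5    # maps pixels to roughly [-1, 1]
img = img.transpose(2, 0, 1)[np.newaxis, ...]     # HWC -> NCHW
print(img.shape)                                  # (1, 3, 192, 192)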

2. Model Deployment on the Board

Enter the virtual environment on the board:
conda activate ok3588
cd FastDeploy/examples/vision/segmentation/semantic_segmentation/rockchip/rknpu2/cpp
The infer.cc file in this directory needs a small modification:
On line 53, replace /Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn with /PP_HumanSegV2_Lite_192x192_infer_rk3588_unquantized.rknn

mkdir build
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=/home/forlinx/FastDeploy/build/fastdeploy-0.0.0/
make -j
This produces the compiled executable infer_demo.

3. Running Inference

wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip

Run the deployment demo:

./infer_demo PP_HumanSegV2_Lite_192x192_infer/ images/human.jpg 1
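
If you prefer Python on the board, FastDeploy also provides Python bindings. A hedged sketch of an equivalent inference script (it assumes the fastdeploy Python package with RKNPU2 support is installed; exact API names may differ between FastDeploy versions):

import cv2
import fastdeploy as fd

model_dir = "PP_HumanSegV2_Lite_192x192_infer"
option = fd.RuntimeOption()
option.use_rknpu2()                       # run on the RK3588 NPU

model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/PP_HumanSegV2_Lite_192x192_infer_rk3588_unquantized.rknn",
    "",                                   # RKNN models carry no separate params file
    model_dir + "/deploy.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Normalization is already baked into the RKNN model, so skip it in preprocessing
model.preprocessor.disable_normalize()
model.preprocessor.disable_permute()

im = cv2.imread("images/human.jpg")
result = model.predict(im)
vis = fd.vision.vis_segmentation(im, result, weight=0.6)
cv2.imwrite("vis_result.jpg", vis)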

