Running a Compiled PyTorch Model in a C++ Environment

1. Preparation

1.1 Convert the PyTorch model to TorchScript

Here we recommend doing the conversion via tracing (torch.jit.trace).

import torch
from SSD.Build_ssd import SSD
from Configs import _C as cfg

# Initialize the model first
model = SSD(cfg)
model.eval()
# Load the trained weights
model.load_state_dict(torch.load('XXX.pkl'))

# Create an example input
sample_input = torch.rand((1, 3, 320, 320))

# Pass an instance of the model and the example input to torch.jit.trace
traced_script_module = torch.jit.trace(model, sample_input)
# Save the traced model
traced_script_module.save("model.pt")
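
One caveat on tracing: it only records the operators executed for the given example input, so data-dependent control flow (if/for branches on tensor values) is not captured. For such models, torch.jit.script is PyTorch's other conversion path. A minimal sketch, assuming the same SSD model and that its code is scriptable as-is (real models often need small changes):

import torch
from SSD.Build_ssd import SSD
from Configs import _C as cfg

model = SSD(cfg)
model.eval()
# script() compiles the model's Python source, preserving control flow
scripted_module = torch.jit.script(model)
scripted_module.save("model_scripted.pt")

The rest of this walkthrough uses the traced model.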

With that, the model has been saved as a TorchScript model: the model.pt file.
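
Before moving on to C++, it is worth a quick sanity check that the saved file reloads and reproduces the eager model's output. A minimal sketch, reusing model and sample_input from above and assuming the forward pass returns a single tensor (if it returns a tuple, compare the elements individually):

import torch

# Reload the serialized TorchScript module
loaded = torch.jit.load("model.pt")
loaded.eval()

with torch.no_grad():
    eager_out = model(sample_input)
    traced_out = loaded(sample_input)

# Should print True if the trace reproduces the eager model
print(torch.allclose(eager_out, traced_out, atol=1e-5))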

1.2 To load the model in C++, the application must depend on the PyTorch C++ API (LibTorch).

Download LibTorch and extract it; the directory structure looks like this:

[Screenshot 1: extracted LibTorch directory structure]

Create a new folder to hold the project files and the converted model:

[Screenshot 2: new project folder with the model and source files]

2. Compilation

2.1 Write the cpp file

As a verification, we feed an all-ones tensor through the model.

#include <torch/script.h>
#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
    // Load the TorchScript model
    torch::jit::script::Module module = torch::jit::load("/home/XXX/libtorch-1.2/example-app(复件)/model.pt");

    // Move the model to the GPU
    module.to(at::kCUDA);

    // Build the input: a 1x3x320x320 all-ones tensor on the GPU
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 320, 320}).to(at::kCUDA));

    // Forward pass
    torch::jit::IValue output = module.forward(inputs);
    std::cout << output << std::endl;

    return 0;
}

2.2 Write the CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)

# Path to the extracted LibTorch distribution
set(CMAKE_PREFIX_PATH /home/super/libtorch-1.2/libtorch)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app ${TORCH_LIBRARIES})

set_property(TARGET example-app PROPERTY CXX_STANDARD 11)

2.3 Build the project

Open a terminal in the example-app folder and run:

mkdir build
cd build

# Note: don't drop the two trailing dots; CMakeLists.txt is in the parent directory. Safest to copy this line verbatim.
cmake -DCMAKE_PREFIX_PATH=/home/super/libtorch-1.2/libtorch ..

Output:

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda-10.0 (found version "10.0") 
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda-10.0/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-10.0
-- Caffe2: Header version is: 10.0
-- Found CUDNN: /usr/local/cuda-10.0/include  
-- Found cuDNN: v7.5.0  (include: /usr/local/cuda-10.0/include, library: /usr/local/cuda-10.0/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  7.5 7.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
-- Found torch: /home/super/libtorch-1.2/libtorch/lib/libtorch.so  
-- Configuring done
-- Generating done
-- Build files have been written to: /home/super/libtorch-1.2/example-app(复件)/build

Then run:

make

Output:

Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app

A screenshot of the process:

[Screenshot 3: build output]

3. Run

./example-app

Screenshots:

[Screenshot 4: output of ./example-app]

[Screenshot 5: the corresponding Python-side output]

The C++ output matches the Python side, so the conversion succeeded.
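
For reference, the Python-side numbers used in this comparison can be reproduced by feeding the same all-ones input to the traced model on the GPU; a minimal sketch, assuming a CUDA-enabled PyTorch build:

import torch

model = torch.jit.load("model.pt").to("cuda").eval()
with torch.no_grad():
    output = model(torch.ones(1, 3, 320, 320, device="cuda"))
print(output)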

