Overview: this post records how to implement a custom operator, register it with PyTorch, compile it with C++ into a shared library that can be called from the Python side, and export it to an ONNX file.
Overall workflow and prerequisites:
1. Download the C++ libTorch package from the PyTorch website; after downloading, unzip it to get a libtorch folder.
2. Have Python and PyTorch installed in advance.
3. This example uses the OpenCV library, so install OpenCV beforehand (a quick environment check follows below).
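Before starting, it can help to confirm the environment. Below is a minimal sketch (assuming torch and cv2 are already installed) that prints the PyTorch/OpenCV versions and the torch installation directory, which is the value TORCH_ROOT should point to in the CMakeLists.txt further down:
# check_env.py -- quick environment sanity check (sketch; adapt to your setup)
import os
import torch
import cv2

print("torch version :", torch.__version__)          # should be >= 1.6.0 for TORCH_LIBRARY
print("opencv version:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())  # decides whether to link the CPU or GPU torch libraries
# This directory is what TORCH_ROOT should be set to in CMakeLists.txt
print("TORCH_ROOT    :", os.path.dirname(torch.__file__))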
1. Implement the custom operator function
In a directory at the same level as the unzipped libtorch folder, implement the custom operator: warp_perspective, an image perspective-transform function built on the OpenCV library. The few lines after the warp_perspective function register it as a custom operator.
warpPerspective.cpp:
#include "torch/script.h"
#include "opencv2/opencv.hpp"
torch::Tensor warp_perspective(torch::Tensor image, torch::Tensor warp) {
// BEGIN image_mat
cv::Mat image_mat(/*rows=*/image.size(0),
/*cols=*/image.size(1),
/*type=*/CV_32FC1,
/*data=*/image.data_ptr());
// END image_mat
// BEGIN warp_mat
cv::Mat warp_mat(/*rows=*/warp.size(0),
/*cols=*/warp.size(1),
/*type=*/CV_32FC1,
/*data=*/warp.data_ptr());
// END warp_mat
// BEGIN output_mat
cv::Mat output_mat;
cv::warpPerspective(image_mat, output_mat, warp_mat, /*dsize=*/{ image.size(0),image.size(1) });
// END output_mat
// BEGIN output_tensor
torch::Tensor output = torch::from_blob(output_mat.ptr(), /*sizes=*/{ image.size(0),image.size(1) });
return output.clone();
// END output_tensor
}
//static auto registry = torch::RegisterOperators("my_ops::warp_perspective", &warp_perspective); // torch.__version__: 1.5.0
torch.__version__ >= 1.6.0 torch/include/torch/library.h
TORCH_LIBRARY(my_ops, m) {
m.def("warp_perspective", warp_perspective);
}
2. Create a CMakeLists.txt file in the same directory
In it you need to set the torch path of your own Python installation (the environment check above prints the value to use for TORCH_ROOT), and link either the CPU or the GPU libraries depending on whether your Python PyTorch build is CPU or GPU.
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
project(warp_perspective)
set(CMAKE_VERBOSE_MAKEFILE ON)
# >>> build type
set(CMAKE_BUILD_TYPE "Release") # 指定生成的版本
set(CMAKE_CXX_FLAGS_DEBUG "$ENV{CXXFLAGS} -O0 -Wall -g2 -ggdb")
set(CMAKE_CXX_FLAGS_RELEASE "$ENV{CXXFLAGS} -O3 -Wall")
set(TORCH_ROOT "/home/xxx/anaconda3/lib/python3.10/site-packages/torch")
include_directories(${TORCH_ROOT}/include)
link_directories(${TORCH_ROOT}/lib/)
# Opencv
find_package(OpenCV REQUIRED)
# Define our library target
add_library(warp_perspective SHARED warpPerspective.cpp)
# Enable C++17
target_compile_features(warp_perspective PRIVATE cxx_std_17)
# libtorch libraries
target_link_libraries(warp_perspective
# CPU
c10
torch_cpu
# GPU
# c10_cuda
# torch_cuda
)
# OpenCV libraries
target_link_libraries(warp_perspective
${OpenCV_LIBS}
)
add_definitions(-D _GLIBCXX_USE_CXX11_ABI=0)
Note: if the last line of CMakeLists.txt, add_definitions(-D _GLIBCXX_USE_CXX11_ABI=0), is missing, the test step below fails with the following error (see also the ABI check after the traceback):
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/anaconda3/lib/python3.10/site-packages/torch/_ops.py", line 852, in load_library
    ctypes.CDLL(path)
  File "/home/xxx/anaconda3/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /data/xxx/mylib/build/warp_perspective.so: cannot open shared object file: No such file or directory
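Whether the flag is needed depends on the ABI your pip/conda PyTorch build was compiled with. A hedged way to check it from Python (torch.compiled_with_cxx11_abi() is a standard torch API):
import torch

# False -> the wheel was built with _GLIBCXX_USE_CXX11_ABI=0,
#          so the custom-op library must be compiled with the same flag
# True  -> the add_definitions line in CMakeLists.txt should be dropped
print(torch.compiled_with_cxx11_abi())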
3. Build the shared library
Create a build folder in the same directory, enter it, and build against the CMakeLists.txt to produce the libwarp_perspective.so library (a quick load check follows the commands):
mkdir build
cd build
cmake ..
make
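After make finishes, a quick smoke test confirms that the library loads and the operator is registered under torch.ops.my_ops. This is only a sketch; the path below is the example build path used throughout this post:
import torch

# Loading the freshly built shared library registers my_ops::warp_perspective
torch.ops.load_library("/data/xxx/mylib/build/libwarp_perspective.so")

# Accessing the op handle fails if registration did not happen
print(torch.ops.my_ops.warp_perspective)

# Call it once on dummy data to make sure the C++ code actually runs
out = torch.ops.my_ops.warp_perspective(torch.randn(32, 32), torch.eye(3))
print(out.shape)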
4. Test the custom operator from the Python side of PyTorch
Note that all of the code above lives under /data/xxx/mylib, so torch.ops.load_library("/data/xxx/mylib/build/libwarp_perspective.so") points at the built library.
Here I picked an arbitrary image and used the result of a perspective transform done directly with Python OpenCV as the golden reference. Shown below are, respectively, the original image, the golden result, and the output of the custom PyTorch operator. The custom operator's output was not quite right, although the image outline and the perspective effect were correct; I will look into the cause when I have time (one possible culprit is the (width, height) ordering of the dsize argument, which the listing above already corrects). A numeric comparison sketch follows the test code.
Test code:
import torch
import cv2
import numpy as np

torch.ops.load_library("/data/xxx/mylib/build/libwarp_perspective.so")

# Read the test image as single-channel grayscale
im = cv2.imread("/data/xxx/mylib/cat.jpg", 0)

# Source and destination points for the perspective transform
pst1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pst2 = np.float32([[100, 145], [300, 100], [80, 290], [310, 300]])

# Get the perspective transform matrix
T = cv2.getPerspectiveTransform(pst1, pst2)

# Run the custom operator
in_data = torch.from_numpy(np.float32(im))
in2_data = torch.Tensor(T)
out1 = torch.ops.my_ops.warp_perspective(in_data, in2_data)
dst0 = np.uint8(out1.numpy())
cv2.imwrite("/data/xxx/mylib/cat_warp.jpg", dst0)

# Golden reference computed directly with OpenCV
dst = cv2.warpPerspective(im, np.float32(T), (im.shape[1], im.shape[0]))
cv2.imwrite("/data/xxx/mylib/cat_warp_gold.jpg", dst)
Export the registered custom PyTorch operator to an ONNX file and inspect it; the resulting graph looks as shown below.
The export script is as follows (a sketch for inspecting the exported file programmatically comes after it):
import torch
import numpy as np

torch.ops.load_library("/data/xxx/mylib/build/libwarp_perspective.so")


class MyNet(torch.nn.Module):
    def __init__(self, name):
        super(MyNet, self).__init__()
        self.model_name = name

    def forward(self, in_data, warp_data):
        return torch.ops.my_ops.warp_perspective(in_data, warp_data)


def my_custom(g, in_data, warp_data):
    return g.op("cus_ops::warp_perspective", in_data, warp_data)


torch.onnx.register_custom_op_symbolic("my_ops::warp_perspective", my_custom, 9)

if __name__ == "__main__":
    net = MyNet("my_ops")
    in_data = torch.randn((32, 32))
    warp_data = torch.rand((3, 3))
    out = net(in_data, warp_data)
    print("out: ", out)

    # export onnx
    torch.onnx.export(net,
                      (in_data, warp_data),
                      "./my_ops_export_model2.onnx",
                      input_names=["img_data", "warp_mat"],
                      output_names=["out_img"],
                      custom_opsets={"cus_ops": 11},
                      dynamic_axes={
                          "img_data": {0: "width", 1: "height"},
                          "out_img": {0: "width", 1: "height"}
                      })
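To confirm that the custom node ended up in the exported graph, the file can be inspected with the onnx package. A sketch, assuming onnx is installed:
import onnx

model = onnx.load("./my_ops_export_model2.onnx")

# The graph should contain a node of type warp_perspective in the
# custom domain cus_ops, as mapped by the my_custom symbolic above.
for node in model.graph.node:
    print(node.domain, node.op_type, node.input, node.output)

# The custom opset import recorded by custom_opsets={"cus_ops": 11}
print([(op.domain, op.version) for op in model.opset_import])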
Reference: https://www.cnblogs.com/xiaxuexiaoab/p/15524047.html