PyTorch C++ (libtorch): for the library itself, you can either build from source (which pulls in many dependencies) or download a prebuilt release from GitHub. The libtorch build used in the official tutorial was compiled with an older g++ that does not support the CXX11 ABI, so if you need that ABI, download the cxx11 ABI build of libtorch instead.
For exporting a PyTorch model, the official tutorial is clear and can be followed as-is. One snag when converting pth -> pt is an error about hooks; the workaround I found is, first:
# pth -> pt; the .pt file is what C++ loads
import torch
from torch import nn
from collections import OrderedDict
from collections import namedtuple

def remove_hooks(model):
    # Clear all hooks on this module, then recurse into children
    model._backward_hooks = OrderedDict()
    model._forward_hooks = OrderedDict()
    model._forward_pre_hooks = OrderedDict()
    for child in model.children():
        remove_hooks(child)

device = torch.device('cpu')
model = torch.load(r'modelname.pth', map_location='cpu')
model.eval()
remove_hooks(model)

NT = namedtuple('a', ['arg0', 'arg1'])
x = NT(torch.rand(xxx), torch.zeros(1))  # xxx means the shape of the input
traced_script_module = torch.jit.trace(model, tuple(x))
traced_script_module.save("modelname.pt")
and, in the torch.jit source, comment out the line that raises
ValueError: Modules that have hooks assigned can't be compiled
then run again, and a usable .pt file is exported. The next step is loading the model in C++ and running inference with it:
#include <torch/script.h>
#include <iostream>
#include <vector>

int main() {
    torch::jit::script::Module module;
    try {
        module = torch::jit::load("modelname.pt");
    }
    catch (const c10::Error& e) {
        std::cerr << "error loading the model\n";
        return -1;
    }
    torch::Tensor In = torch::randn({xxx});  // xxx means the shape of the input
    std::vector<torch::jit::IValue> Inputs;
    Inputs.push_back(In);
    Inputs.push_back(torch::zeros({1}));
    at::Tensor Output = module.forward(Inputs).toTuple()->elements()[0].toTensor();
    return 0;
}
One thing worth noting about tensor operations: at::Tensor and torch::Tensor are not quite the same.
at::Tensor one = at::randn({1,5,9});        // produces an at::Tensor
at::Tensor two = torch::randn({1,5,9});     // produces a torch::Tensor
torch::Tensor thr = at::randn({1,5,9});     // produces an at::Tensor
torch::Tensor fou = torch::randn({1,5,9});  // produces a torch::Tensor
at::Tensor prints as [ CPUFloatType{1,5,9} ]
torch::Tensor prints as [ Variable[CPUFloatType]{1,5,9} ]
The two can be assigned to each other, but a slice along one dimension of a torch::Tensor cannot be assigned directly from an at::Tensor.
Tensor at::from_blob(void *data, IntArrayRef sizes)
Builds a Tensor from an existing array; you can think of it as converting a raw array into a Tensor:
float arr[9][9][6] = ...;
torch::Tensor one = torch::from_blob(arr, {9,9,6});
permute(IntArrayRef sizes)
An instance method that reorders dimensions, same as in PyTorch:
one = one.permute({2,0,1});
std::cout << one.sizes() << std::endl;
torch::TensorOptions().dtype
Sets the dtype attribute of a Tensor:
torch::Tensor one = torch::zeros({9,9,6}, torch::TensorOptions().dtype(torch::kInt32));
The official documentation lists:
- For dtype: kUInt8, kInt8, kInt16, kInt32, kInt64, kFloat32 and kFloat64
- For layout: kStrided and kSparse
- For device: either kCPU, or kCUDA (which accepts an optional device index)
- For requires_grad: either true or false
accessor
Accesses the data inside a Tensor, returning elements as the given dtype:
torch::Tensor one = torch::randn({9,6});
auto foo_one = one.accessor<float,2>();  // element type, number of dimensions
float sum = 0;
for (int i = 0; i < foo_one.size(0); i++)
    for (int j = 0; j < foo_one.size(1); j++)
        sum += foo_one[i][j];
This simply sums the elements; torch::sum(tensor) does the same reduction more efficiently and returns a Tensor, so the loop here is only a demonstration of accessor.
torch provides many operations on Tensors, e.g. torch::sqrt / torch::log1p / torch::stft / torch::pow; essentially every method PyTorch has also exists in C++. See the official PyTorch C++ API documentation.