Deploying a Deep Learning Project: Error When Loading a Model with torch::jit::load in Libtorch (Solved)

Error when loading a model with torch::jit::load

  • Source code
  • Error
  • Environment
  • Solution

Source code

#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <memory>

int main() {
	auto device = torch::kCPU;
	// Deserialize the ScriptModule from a file using torch::jit::load().
	torch::jit::script::Module module = torch::jit::load(FileName, device);

	// (in Libtorch 1.10, Module is a value type, so a null check is not applicable)
	std::cout << "ok\n";
	// Create a vector of inputs.
	std::vector<torch::jit::IValue> inputs;
	inputs.push_back(torch::ones({ 1,3,320,192 }));

	// Execute the model and turn its output into a tensor.
	auto o = module.forward(std::move(inputs));
	at::Tensor output = o.toTensor();

	std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
	while (1);
}

Error

Exception thrown at 0x00007FFA38539329 (in XXX.exe): Microsoft C++ exception: c10::Error at memory location 0x00000031321EF5F0.
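The unhandled Microsoft exception above hides the actual reason `torch::jit::load` failed. A minimal sketch (the model path here is a placeholder, not the original file) that catches `c10::Error` and prints its message, which usually names the real cause (missing file, serialization/version mismatch, etc.):

```cpp
#include <torch/script.h>
#include <iostream>
#include <string>

int main() {
	const std::string model_path = "model.torchscript.pt";  // placeholder path

	try {
		// Attempt the same deserialization, but on the CPU and inside try/catch
		torch::jit::script::Module module =
			torch::jit::load(model_path, torch::kCPU);
		std::cout << "model loaded ok\n";
	}
	catch (const c10::Error& e) {
		// e.what() carries the detailed reason the load failed
		std::cerr << "load failed: " << e.what() << '\n';
		return 1;
	}
	return 0;
}
```

Running this instead of the bare call turns the opaque `c10::Error` crash into a readable diagnostic on stderr.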

Environment

torch / torchvision versions: (version screenshot omitted)
Libtorch version:

libtorch-win-shared-with-deps-1.10.0+cu113

The Libtorch version matches the PyTorch version;
the TorchScript file was exported with the official export.py (device=cpu);
all the required project properties were configured in Visual Studio.


Solution

Use a class and a pointer; it is enough to put the concrete method implementations inside the class:

#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <memory>


class Detector {
public:
	/***
	 * @brief constructor
	 * @param model_path - path of the TorchScript weight file
	 * @param device_type - inference with CPU/GPU
	 */
	Detector(const std::string& model_path, bool use_gpu);
	
	// other methods
	...
	
private:
	torch::jit::script::Module module_;
	torch::Device device_;
	bool half_;
};

Detector::Detector(const std::string& model_path, bool use_gpu) :device_(torch::kCPU)
{
	if (torch::cuda::is_available() && use_gpu)
	{
		//std::cout << "use cuda...\n";
		device_ = torch::kCUDA;
	}
	else
	{
		//std::cout << "use cpu...\n";
		device_ = torch::kCPU;
	}

	try {
		// Deserialize the ScriptModule from a file using torch::jit::load().
		module_ = torch::jit::load(model_path);
	}
	catch (const c10::Error& e) {
		std::cerr << "Error loading the model!\n";
		std::exit(EXIT_FAILURE);
	}

	half_ = (device_ != torch::kCPU);
	module_.to(device_);

	if (half_) {
		module_.to(torch::kHalf);
	}

	module_.eval();
}


int main() 
{
	std::shared_ptr<Detector> detector = std::make_shared<Detector>(yourpath, true);
}
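The constructor above loads and prepares the module but leaves out how inputs are fed to it. A hedged sketch of a forward helper (the method name `run` and the input handling are illustrative assumptions, not from the original post; it presumes a matching declaration `torch::Tensor run(const torch::Tensor&);` in the class) that moves the input to the detector's device and matches its half-precision setting before calling `forward`:

```cpp
// Hypothetical member function; the name and single-tensor interface are
// illustrative only and would need a matching declaration in Detector.
torch::Tensor Detector::run(const torch::Tensor& input) {
	torch::NoGradGuard no_grad;   // inference only; disable autograd tracking

	auto x = input.to(device_);   // move the input to the module's device
	if (half_) {
		x = x.to(torch::kHalf);   // match the module's half precision
	}

	std::vector<torch::jit::IValue> inputs;
	inputs.push_back(x);

	// forward() returns an IValue; this assumes the model returns one tensor
	return module_.forward(inputs).toTensor();
}
```

Keeping the device and dtype conversions next to `forward` ensures the input always agrees with whatever `module_.to(...)` did in the constructor.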
