MNN Learning Notes (5): Deploying a Caffe Object Detection Model

1. Model Conversion

First, download the Caffe model from:

https://github.com/C-Aniruddh/realtime_object_recognition

Then convert the Caffe model into an MNN model:

./MNNConvert -f CAFFE --modelFile MobileNetSSD_deploy.caffemodel --prototxt MobileNetSSD_deploy.prototxt --MNNModel mobilenetssd.mnn --bizCode MNN
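To quickly check that the conversion produced a loadable model, a small standalone program like the one below can be used. This is only a sanity-check sketch, not part of the original project; it assumes MNN's C++ headers are available as <MNN/Interpreter.hpp> and that mobilenetssd.mnn sits in the working directory, and it prints the input tensor shape, which for MobileNet-SSD should be 1x3x300x300.

// check_model.cpp: minimal sanity check for the converted model (sketch)
#include <iostream>
#include <memory>
#include <MNN/Interpreter.hpp>

int main() {
	std::unique_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile("mobilenetssd.mnn"));
	if (nullptr == net) {
		std::cout << "failed to load mobilenetssd.mnn" << std::endl;
		return -1;
	}
	MNN::ScheduleConfig config;
	config.type = MNN_FORWARD_CPU;
	MNN::Session* session = net->createSession(config);
	MNN::Tensor* input = net->getSessionInput(session, nullptr);  // nullptr: the model's only input
	std::cout << "input shape: " << input->batch() << "x" << input->channel()
	          << "x" << input->height() << "x" << input->width() << std::endl;
	return 0;
}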

2. Model Deployment

First, perform the initialization: load the model and create the interpreter, set the schedule config, set the backend config, create the session, and configure the image-processing parameters.

int MobilenetSSD::Init(const char * root_path) {
	std::cout << "start Init." << std::endl;
	std::string model_file = std::string(root_path) + "/mobilenetssd.mnn";
	mobilenetssd_interpreter_ = std::unique_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(model_file.c_str()));
	if (nullptr == mobilenetssd_interpreter_) {
		std::cout << "load model failed." << std::endl;
		return 10000;
	}

	MNN::ScheduleConfig schedule_config;
	schedule_config.type = MNN_FORWARD_CPU;
	schedule_config.numThread = 4;

	MNN::BackendConfig backend_config;
	backend_config.precision = MNN::BackendConfig::Precision_High;
	backend_config.power = MNN::BackendConfig::Power_High;
	schedule_config.backendConfig = &backend_config;

	mobilenetssd_sess_ = mobilenetssd_interpreter_->createSession(schedule_config);

	// image processor: mean subtraction, normalization, and RGBA -> RGB conversion
	MNN::CV::Matrix trans;
	trans.setScale(1.0f, 1.0f);
	MNN::CV::ImageProcess::Config img_config;
	img_config.filterType = MNN::CV::BICUBIC;
	::memcpy(img_config.mean, meanVals_, sizeof(meanVals_));
	::memcpy(img_config.normal, normVals_, sizeof(normVals_));
	img_config.sourceFormat = MNN::CV::RGBA;
	img_config.destFormat = MNN::CV::RGB;
	pretreat_data_ = std::shared_ptr<MNN::CV::ImageProcess>(MNN::CV::ImageProcess::create(img_config));
	pretreat_data_->setMatrix(trans);

	std::string input_name = "data";
	input_tensor_ = mobilenetssd_interpreter_->getSessionInput(mobilenetssd_sess_, input_name.c_str());
	mobilenetssd_interpreter_->resizeTensor(input_tensor_, dims_);
	mobilenetssd_interpreter_->resizeSession(mobilenetssd_sess_);

	initialized_ = true;

	std::cout << "end Init." << std::endl;
	return 0;
}
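The members used above (mobilenetssd_interpreter_, pretreat_data_, input_tensor_, meanVals_, normVals_, dims_, inputSize_, class_names, and so on) are declared in the class header, which is not shown here. The sketch below only illustrates what such a header might look like, assuming the usual MobileNet-SSD settings (300x300 input, mean 127.5, scale 1/127.5) and the 21 Pascal VOC class names; the actual declarations are in the repository linked at the end.

// mobilenetssd.h (illustrative sketch, not the repository's exact header)
#include <memory>
#include <string>
#include <vector>
#include <opencv2/core/core.hpp>
#include <MNN/Interpreter.hpp>
#include <MNN/ImageProcess.hpp>

struct ObjectInfo {
	std::string name_;     // class label
	float score_;          // detection confidence
	cv::Rect2f location_;  // box in original-image pixel coordinates
};

class MobilenetSSD {
public:
	int Init(const char* root_path);
	int Detect(const cv::Mat& img_src, std::vector<ObjectInfo>* objects);

private:
	bool initialized_ = false;
	std::unique_ptr<MNN::Interpreter> mobilenetssd_interpreter_;
	MNN::Session* mobilenetssd_sess_ = nullptr;
	MNN::Tensor* input_tensor_ = nullptr;
	std::shared_ptr<MNN::CV::ImageProcess> pretreat_data_;

	const cv::Size inputSize_ = cv::Size(300, 300);                  // network input resolution
	const std::vector<int> dims_ = { 1, 3, 300, 300 };               // NCHW input dimensions
	const float meanVals_[3] = { 127.5f, 127.5f, 127.5f };           // per-channel mean
	const float normVals_[3] = { 0.007843f, 0.007843f, 0.007843f };  // 1 / 127.5
	const char* class_names[21] = {
		"background", "aeroplane", "bicycle", "bird", "boat", "bottle",
		"bus", "car", "cat", "chair", "cow", "diningtable", "dog",
		"horse", "motorbike", "person", "pottedplant", "sheep",
		"sofa", "train", "tvmonitor"
	};
};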

Next, read in the input data, run model inference, and post-process the output.

The data reading here follows reference [3], which explains in detail how to read image data with OpenCV; this is of course not the only option, there are many other ways to feed the input (a minimal example is sketched right below).
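For instance, the frame can come from a still image on disk or from a camera; a small OpenCV sketch (test.jpg and camera index 0 are placeholders):

#include <opencv2/opencv.hpp>

int main() {
	// option 1: read a single image from disk
	cv::Mat img = cv::imread("test.jpg");

	// option 2: grab a frame from the default camera
	// cv::VideoCapture cap(0);
	// cv::Mat frame;
	// cap >> frame;

	return img.empty() ? -1 : 0;
}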

int MobilenetSSD::Detect(const cv::Mat & img_src, std::vector<ObjectInfo>* objects) {
	std::cout << "start detect." << std::endl;
	if (!initialized_) {
		std::cout << "model uninitialized." << std::endl;
		return 10000;
	}
	if (img_src.empty()) {
		std::cout << "input empty." << std::endl;
		return 10001;
	}

	int width = img_src.cols;
	int height = img_src.rows;

	// preprocess
	cv::Mat img_resized;
	cv::resize(img_src, img_resized, inputSize_);
	uint8_t* data_ptr = GetImage(img_resized);  // helper (not shown here) that returns an RGBA byte buffer; see the sketch after this function
	pretreat_data_->convert(data_ptr, inputSize_.width, inputSize_.height, 0, input_tensor_);
	 
	mobilenetssd_interpreter_->runSession(mobilenetssd_sess_);
	std::string output_name = "detection_out";
	MNN::Tensor* output_tensor = mobilenetssd_interpreter_->getSessionOutput(mobilenetssd_sess_, output_name.c_str());

	// copy to host
	MNN::Tensor output_host(output_tensor, output_tensor->getDimensionType());
	output_tensor->copyToHostTensor(&output_host);

	// each detection row is [class_id, score, xmin, ymin, xmax, ymax], with coordinates normalized to [0, 1]
	auto output_ptr = output_host.host<float>();
	for (int i = 0; i < output_host.height(); ++i) {
		int index = i * output_host.width();
		ObjectInfo object;
		object.name_ = class_names[int(output_ptr[index + 0])];
		object.score_ = output_ptr[index + 1];
		object.location_.x = output_ptr[index + 2] * width;
		object.location_.y = output_ptr[index + 3] * height;
		object.location_.width = output_ptr[index + 4] * width - object.location_.x;
		object.location_.height = output_ptr[index + 5] * height - object.location_.y;

		objects->push_back(object);
	}


	std::cout << "end detect." << std::endl;

	return 0;
}
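Detect relies on a helper GetImage that is not shown above: it must turn the resized BGR cv::Mat into the RGBA byte buffer expected by the ImageProcess config (sourceFormat is RGBA). The real implementation is in the repository; the version below is only an illustrative sketch, followed by a hypothetical usage example ("./models" and "test.jpg" are placeholder paths) that ties Init and Detect together, assuming the class declaration sketched earlier.

#include <cstdint>
#include <vector>
#include <opencv2/opencv.hpp>
// plus the MobilenetSSD / ObjectInfo declarations sketched earlier

// Sketch of a GetImage-style helper: convert BGR to RGBA and hand back the pixel pointer.
// A static buffer keeps the pixels alive while MNN reads them (not thread-safe; sketch only).
uint8_t* GetImage(const cv::Mat& img_bgr) {
	static cv::Mat rgba;
	cv::cvtColor(img_bgr, rgba, cv::COLOR_BGR2RGBA);
	return rgba.data;
}

// Hypothetical usage: run detection on test.jpg and draw the results.
int main() {
	MobilenetSSD detector;
	if (detector.Init("./models") != 0) {   // directory containing mobilenetssd.mnn
		return -1;
	}

	cv::Mat img = cv::imread("test.jpg");
	if (img.empty()) {
		return -1;
	}

	std::vector<ObjectInfo> objects;
	detector.Detect(img, &objects);

	for (const auto& object : objects) {
		cv::rectangle(img, object.location_, cv::Scalar(0, 255, 0), 2);
		cv::putText(img, object.name_, object.location_.tl(),
		            cv::FONT_HERSHEY_SIMPLEX, 0.6, cv::Scalar(0, 255, 0), 1);
	}
	cv::imwrite("result.jpg", img);
	return 0;
}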

The full code has been uploaded to GitHub:

https://github.com/MirrorYuChen/mnn_example/tree/master/src/object/mobilenetssd

If you find it useful, please give it a star instead of just freeloading~

References:

[1] https://github.com/alibaba/MNN

[2] https://github.com/lqian/light-LPR

[3] https://blog.csdn.net/abcd740181246/article/details/90143848
