MNN Framework C++ and Python API Demo

Introduction

MNN is a deep learning framework from Alibaba with strong on-device inference and training performance. This article walks through a simple classification model to give minimal C++ and Python demos so you can get started quickly.
MNN documentation: https://www.yuque.com/mnn/cn
MNN GitHub: https://github.com/alibaba/MNN/blob/master/README_CN.md
C++ and Python demo download: MNN Demos, extraction code: kqp7. If you want to run the demos, pay attention to the MNN and OpenCV library paths.

Note: having just joined the company, I had to convert a PyTorch model to ONNX, then to MNN, and wrap it into a mobile SDK, which took a few days. This kind of thing is simple once you have gone through it once, but nobody at the company had written a document to share, which is rough on newcomers.

Configuring the MNN Library and Obtaining an MNN Model

Using macOS as an example.

You need cmake and protobuf installed beforehand; Homebrew is the simplest way to install them. Skip this step if they are already installed.

brew install cmake
brew install protobuf

Download the MNN source code from GitHub, put it wherever you like, and open a terminal:

cd /Users/xxx/opt/MNN
./schema/generate.sh
mkdir build_mnn && cd build_mnn
cmake .. -DMNN_BUILD_CONVERTER=true
make -j8

Obtaining an MNN Model

An MNN model can be one trained directly with the MNN framework, or converted from a TensorFlow, Caffe, ONNX, or PyTorch model. Conversion is straightforward and is covered in the MNN documentation. The MNN model used in this article was converted from the PyTorch MobileNet-V2 model; see my post Pytorch模型转成ONNX和MNN and the sketch below.
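
As a rough, hedged sketch (the file names below are just examples, and MNNConvert is the converter binary produced by the build above with -DMNN_BUILD_CONVERTER=true), the PyTorch → ONNX → MNN path looks roughly like this:

import torch
import torchvision

# Export torchvision's pretrained MobileNet-V2 to ONNX (file names are placeholders)
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx",
                  input_names=["input"], output_names=["output"])

# Then convert the ONNX file to MNN with the converter built earlier, e.g.:
#   ./build_mnn/MNNConvert -f ONNX --modelFile mobilenet_v2.onnx \
#       --MNNModel mobilenet_v2.mnn --bizCode MNN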

C++ API Demo

You need both the OpenCV and MNN libraries installed.

Demo source code download: MNN Demos Python, extraction code: kqp7
main.cpp

#include <cstdio>
#include <cmath>
#include <iostream>
#include <memory>
#include <vector>
#include <opencv2/opencv.hpp>
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>

#define IMAGE_VERIFY_SIZE 224
#define CLASSES_SIZE 1000
#define INPUT_NAME "input"
#define OUTPUT_NAME "output"

// mnn model input=[1, 3, 224, 224], output=[1, 1000]
int main(int argc, char* argv[]){

    if(argc < 3){
        printf("Usage:\n\t%s mnn_model_path image_path\n", argv[0]);
        return -1;
    }

    // create net and session
    const char *mnn_model_path = argv[1];
    const char *image_path = argv[2];

    auto mnnNet = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_model_path));
    MNN::ScheduleConfig netConfig;
    netConfig.type = MNN_FORWARD_CPU;
    netConfig.numThread = 4;
    auto session = mnnNet->createSession(netConfig);

    auto input = mnnNet->getSessionInput(session, INPUT_NAME);
    if (input->elementSize() <= 4) {
        mnnNet->resizeTensor(input, {1, 3, IMAGE_VERIFY_SIZE, IMAGE_VERIFY_SIZE});
        mnnNet->resizeSession(session);
    }
    std::cout << "input shape: " << input->shape()[0] << " " << input->shape()[1] << " " << input->shape()[2] << " " << input->shape()[3] << std::endl;

    // preprocess image
    MNN::Tensor givenTensor(input, MNN::Tensor::CAFFE);
    // const int inputSize = givenTensor.elementSize();
    // std::cout << inputSize << std::endl;
    auto inputData = givenTensor.host<float>();
    cv::Mat bgr_image = cv::imread(image_path);
    cv::Mat rgb_image;
    // torchvision's MobileNet-V2 expects RGB order, so convert from OpenCV's BGR
    cv::cvtColor(bgr_image, rgb_image, cv::COLOR_BGR2RGB);
    cv::Mat norm_image;
    cv::resize(rgb_image, norm_image, cv::Size(IMAGE_VERIFY_SIZE, IMAGE_VERIFY_SIZE));
    // per-channel normalization: (pixel / 255 - mean) / std, written in CHW layout
    for(int k = 0; k < 3; k++){
        for(int i = 0; i < norm_image.rows; i++){
            for(int j = 0; j < norm_image.cols; j++){
                const auto src = norm_image.at<cv::Vec3b>(i, j)[k];
                float dst = 0.0f;
                if(k == 0) dst = (float(src) / 255.0f - 0.485f) / 0.229f; // R
                if(k == 1) dst = (float(src) / 255.0f - 0.456f) / 0.224f; // G
                if(k == 2) dst = (float(src) / 255.0f - 0.406f) / 0.225f; // B
                inputData[k * IMAGE_VERIFY_SIZE * IMAGE_VERIFY_SIZE + i * IMAGE_VERIFY_SIZE + j] = dst;
            }
        }
    }
    input->copyFromHostTensor(&givenTensor);

    // run session
    mnnNet->runSession(session);

    // get output data
    auto output = mnnNet->getSessionOutput(session, OUTPUT_NAME);
    // std::cout << "output shape: " << output->shape()[0] << " " << output->shape()[1] << std::endl;
    auto output_host = std::make_shared<MNN::Tensor>(output, MNN::Tensor::CAFFE);
    output->copyToHostTensor(output_host.get());
    auto values = output_host->host<float>();
    
    // post process
    std::vector<float> output_values;
    auto exp_sum = 0.0;
    auto max_index = 0;
    for(int i = 0; i < CLASSES_SIZE; i++){
        if(values[i] > values[max_index]) max_index = i;
        output_values.push_back(values[i]);
        exp_sum += std::exp(values[i]);
    }
    std::cout << "cls id: " << max_index << std::endl;
    std::cout << "cls prob: " << std::exp(output_values[max_index]) / exp_sum << std::endl;

    return 0;
}

Makefile

.SUFFIXES: .cpp .o

CC = g++

SRCS = ./main.cpp

OBJS = $(SRCS:.cpp=.o)

OUTPUT = main

OPENCV_ROOT=/Users/xxx/opt/opencv-4.4.0/install_opencv
MNN_ROOT=/Users/xxx/opt/MNN

CFLAGS = -I$(OPENCV_ROOT)/include/opencv4 \
		 -I$(MNN_ROOT)/include \
		 -I$(MNN_ROOT)/include/MNN \
		 -I$(MNN_ROOT)/3rd_party/imageHelper \
		 -DEO_USE_MNN

LIBS += -L$(OPENCV_ROOT)/lib -lopencv_imgcodecs -lopencv_imgproc -lopencv_highgui -lopencv_core \
        -L$(MNN_ROOT)/build_mnn -lMNN

all : $(OBJS)
	$(CC) -o $(OUTPUT) $(OBJS) $(LIBS)
	@echo "----- OK -----"

.cpp.o :
	$(CC) -O3 -std=c++11 -Wall $(CFLAGS) -o $@ -c $<

clean :
	-rm -f $(OBJS)
	-rm -f .core*
	-rm $(OUTPUT)

Build and run

make -j8
./main mobilenet_v2-b0353104.mnn test.jpg

Note: if you get the error dyld: Library not loaded: @rpath/libMNN.dylib, you need to set an environment variable so the MNN library can be found

vim ~/.bash_profile
export DYLD_LIBRARY_PATH=/Users/xxx/opt/MNN/build_mnn:$DYLD_LIBRARY_PATH # add this line at the end
source ~/.bash_profile

Note: the C++ version may lose a bit of precision because of how the image is read and preprocessed. My C++ is weak, so I normalized the image (subtract the mean, divide by the standard deviation) in a very naive way.

Python API Demo

You need PyTorch and MNN installed:

pip install torch torchvision
pip install MNN

Demo source code download: MNN Demos Python, extraction code: kqp7

import MNN.expr as F
from torchvision import transforms
from PIL import Image

mnn_model_path = './mobilenet_v2-b0353104.mnn'
image_path = './test.jpg'
vars = F.load_as_dict(mnn_model_path)
inputVar = vars["input"]
# inspect the input
print('input shape: ', inputVar.shape)
# print(inputVar.data_format)

# write the input data
input_image = Image.open(image_path)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
inputVar.write(input_tensor.tolist())

# inspect the output
outputVar = vars['output']
print('output shape: ', outputVar.shape)
# print(outputVar.read())

cls_id = F.argmax(outputVar, axis=1).read()
cls_probs = F.softmax(outputVar, axis=1).read()

print("cls id: ", cls_id)
print("cls prob: ", cls_probs[0, cls_id])

Note: in my tests, the results match the PyTorch MobileNet model.
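
As a point of comparison, here is a minimal sketch of the PyTorch-side check, assuming the MNN model was converted from torchvision's pretrained mobilenet_v2 and using the same test.jpg and preprocessing:

import torch
from torchvision import models, transforms
from PIL import Image

# same preprocessing as the MNN demo above
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_batch = preprocess(Image.open('./test.jpg')).unsqueeze(0)

model = models.mobilenet_v2(pretrained=True).eval()
with torch.no_grad():
    probs = torch.softmax(model(input_batch), dim=1)
print("cls id: ", probs.argmax(dim=1).item())
print("cls prob: ", probs.max().item())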

Summary

MNN is actively maintained and updated; if you run into a strange bug, check whether it is a version issue.

References

  • Pytorch模型转成ONNX和MNN (converting a PyTorch model to ONNX and MNN)
  • MNN Chinese documentation (Yuque)
  • MNN official C++ demo
