Learning the MNN Framework (1): Building and Usage

    1. Building the MNNConvert tool:

    protobuf >= 3.0 must be installed in advance (3.1.0 or newer recommended),

    and the gcc version must be newer than 4.9.

cd MNN/tools/converter
./generate_schema.sh
mkdir build
cd build && cmake .. && make -j4

# or execute the shell script directly
./build_tool.sh

    2. Converting models from other frameworks to MNN:

Usage:
  MNNConvert [OPTION...]
  -h, --help            Convert Other Model Format To MNN Model
  -v, --version         show current version
  -f, --framework arg   model type, ex: [TF,CAFFE,ONNX,TFLITE,MNN]
      --modelFile arg   tensorflow Pb or caffeModel, ex: *.pb,*caffemodel
      --prototxt arg    only used for caffe, ex: *.prototxt
      --MNNModel arg    MNN model, ex: *.mnn
      --benchmarkModel  Do NOT save big size data, such as Conv's weight,BN's
                        gamma,beta,mean and variance etc. Only used to test
                        the cost of the model
      --bizCode arg     MNN Model Flag, ex: MNN
      --debug           Enable debugging mode.
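When converting several models in a batch script, the flags above can be assembled programmatically. The sketch below is illustrative only: the `build_convert_cmd` helper is my own and not part of MNN; it merely builds the argument list documented in the help text above.

```python
def build_convert_cmd(framework, model_file, mnn_model,
                      prototxt=None, biz_code="MNN"):
    """Assemble an MNNConvert command line from the documented flags.

    Illustrative helper (not shipped with MNN). `--prototxt` is only
    meaningful for Caffe models, as the help text above states.
    """
    cmd = ["./MNNConvert", "-f", framework,
           "--modelFile", model_file,
           "--MNNModel", mnn_model,
           "--bizCode", biz_code]
    if framework == "CAFFE":
        # Caffe also needs the network definition file
        if prototxt is None:
            raise ValueError("Caffe models require a --prototxt file")
        cmd += ["--prototxt", prototxt]
    return cmd

# Example: a Caffe MobileNet conversion as an argument list
print(" ".join(build_convert_cmd("CAFFE", "mobilenet.caffemodel",
                                 "mobilenet_v1.caffe.mnn",
                                 prototxt="mobilenet_deploy.prototxt")))
```

The list form can be passed straight to `subprocess.run` without shell quoting concerns.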

    For example, to convert the Caffe MobileNet models to MNN models:

    First download the mobilenet_v1 and mobilenet_v2 models from shicai's MobileNet-Caffe project, copy them into the MNN/tools/converter/build directory, and run the following commands in turn to convert the Caffe MobileNet models into MNN models:

./MNNConvert -f CAFFE --modelFile mobilenet.caffemodel --prototxt mobilenet_deploy.prototxt --MNNModel mobilenet_v1.caffe.mnn --bizCode MNN 
./MNNConvert -f CAFFE --modelFile mobilenet_v2.caffemodel --prototxt mobilenet_v2_deploy.prototxt --MNNModel mobilenet_v2.caffe.mnn --bizCode MNN

  Converting TensorFlow, ONNX, and TensorFlow Lite models works in much the same way:

  TensorFlow -> MNN:

./MNNConvert -f TF --modelFile XXX.pb --MNNModel XXX.mnn --bizCode MNN

  TensorFlow Lite -> MNN:

./MNNConvert -f TFLITE --modelFile XXX.tflite --MNNModel XXX.mnn --bizCode MNN

  ONNX -> MNN:

./MNNConvert -f ONNX --modelFile XXX.onnx --MNNModel XXX.mnn --bizCode MNN

  PyTorch -> MNN: export from PyTorch to ONNX first, then convert the ONNX model to MNN:

import torch
import torchvision
dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
model = torchvision.models.alexnet(pretrained=True).cuda()
# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
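Once `alexnet.onnx` has been exported, the last step of the pipeline is the same ONNX conversion shown above, run from the converter's build directory:

```shell
./MNNConvert -f ONNX --modelFile alexnet.onnx --MNNModel alexnet.mnn --bizCode MNN
```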

  3. Building the MNN project

  If you don't need to run the benchmark, just go to the root directory of the MNN source tree and build:

mkdir build && cd build && cmake .. && make -j4

  If you do want to run the MNN benchmark, use the provided scripts to generate the schema and fetch the test models first:

cd /path/to/MNN
./schema/generate.sh
./tools/script/get_model.sh # optional, models are needed only in demo project
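After the build finishes, the benchmark binary can be pointed at a folder of test models. The invocation below is a sketch: I am assuming the usual `benchmark.out <models_folder> <loop_count> <forward_type>` interface, with forward type 0 selecting the CPU backend — check the tool's own usage output if it differs in your MNN version.

```shell
cd build
# assumed usage: ./benchmark.out <models_folder> <loop_count> <forward_type>
./benchmark.out ../benchmark/models 10 0   # 0 = CPU backend
```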

  4. Testing the demo

Here is the mnn_example project I wrote afterwards:

 https://github.com/MirrorYuChen/mnn_example/tree/master/src
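For reference, the core of such a demo boils down to a few MNN C++ API calls: create an interpreter from the converted `.mnn` file, build a session, fill the input tensor, run, and read the output. A minimal sketch follows (compile against the MNN headers and library; the model filename is a placeholder, and real preprocessing is elided):

```cpp
#include <MNN/Interpreter.hpp>
#include <memory>
#include <cstdio>

int main() {
    // Load the converted model (filename is a placeholder).
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobilenet_v1.caffe.mnn"));
    if (!net) return 1;

    // Create an inference session on the CPU backend.
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;
    MNN::Session* session = net->createSession(config);

    // Get the input tensor and copy data in via a host-side mirror.
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    std::shared_ptr<MNN::Tensor> inputHost(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    // ... fill inputHost->host<float>() with preprocessed image data ...
    input->copyFromHostTensor(inputHost.get());

    // Run inference and read the output back to the host.
    net->runSession(session);
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::shared_ptr<MNN::Tensor> outputHost(
        MNN::Tensor::createHostTensorFromDevice(output, true));
    printf("output element count: %d\n", outputHost->elementSize());
    return 0;
}
```

The mnn_example repository above wraps these same steps with real image preprocessing and postprocessing.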

  References:

  [1] https://www.yuque.com/mnn/en/cvrt_linux

  [2] https://www.yuque.com/mnn/en/model_convert

  [3] https://github.com/xindongzhang/MNN-APPLICATIONS

