First, when defining the model for training, make sure the output nodes are given explicit names.
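For example, a minimal sketch of naming output nodes at model-definition time (TF 1.x; the layer sizes and names here are illustrative, not from any particular model):
import tensorflow as tf

# Hypothetical network tail; the explicit name= arguments become the
# node names referenced later by freeze_graph and tf2onnx.
features = tf.placeholder(tf.float32, shape=[None, 128], name='features')
logits = tf.layers.dense(features, 10, name='fc2')
cls_prob = tf.nn.softmax(logits, name='cls_prob')
print(cls_prob.name)  # "cls_prob:0" -- node name plus output port id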
The tutorial at https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb says to add the following code at training time:
with open("graph.proto", "wb") as file:
    graph = tf.get_default_graph().as_graph_def(add_shapes=True)
    file.write(graph.SerializeToString())
However, if this code runs during the training phase, the exported graph may pick up many ops that exist only at training time and not at inference time, so it is better to run it at inference time, after the model has been loaded.
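A minimal sketch of doing the export at inference time (TF 1.x; the tiny stand-in model and the checkpoint path are placeholders -- reuse your real prediction-graph code and paths):
import tensorflow as tf

# Rebuild the prediction graph (a stand-in two-op model shown here),
# restore the weights, and only then dump the GraphDef.
x = tf.placeholder(tf.float32, shape=[None, 4], name='input_image')
w = tf.get_variable('w', shape=[4, 2])
y = tf.nn.softmax(tf.matmul(x, w), name='cls_prob')

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, "ckpt/model.ckpt")  # placeholder checkpoint path
    graph = tf.get_default_graph().as_graph_def(add_shapes=True)
    with open("graph.proto", "wb") as f:
        f.write(graph.SerializeToString())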
Method 1:
1. Download the TensorFlow source code, matching the version used to train the model.
2. Download a suitable Bazel version; you may need to try a few releases to find one compatible with that TensorFlow version.
3. In the TensorFlow source directory, build the freeze_graph tool and run the freeze step, following https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb:
bazel build tensorflow/python/tools:freeze_graph
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/home/mnist-tf/graph.proto \
--input_checkpoint=/home/mnist-tf/ckpt/model.ckpt \
--output_graph=/tmp/frozen_graph.pb \
--output_node_names=fc2/add \
--input_binary=True
Multiple output_node_names are separated by commas.
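For example, using the three output nodes of the model inspected later in this article:
--output_node_names=cls_prob,bbox_pred,landmark_pred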
Actually, building is not required: the tool can be run directly as a Python module, see https://github.com/onnx/tensorflow-onnx
For example:
python -m tensorflow.python.tools.freeze_graph \
--input_graph=my_checkpoint_dir/graphdef.pb \
--input_binary=true \
--output_node_names=output \
--input_checkpoint=my_checkpoint_dir \
--output_graph=tests/models/fc-layers/frozen.pb
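The freeze can also be done programmatically with TF 1.x's graph_util API; a minimal sketch, assuming sess is a tf.Session with the model's variables already restored (the node names in the example call are from the model used later in this article):
import tensorflow as tf
from tensorflow.python.framework import graph_util

def freeze(sess, output_node_names, out_path="frozen_graph.pb"):
    # Replace variables with constants, keeping only the subgraph that
    # the listed output nodes depend on.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile(out_path, "wb") as f:
        f.write(frozen.SerializeToString())

# e.g. freeze(sess, ["cls_prob", "bbox_pred", "landmark_pred"])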
4. Build the summarize_graph tool to find the model's inputs and outputs (the input is simply the name of the placeholder defined at training time, e.g. input_image = tf.placeholder(tf.float32, shape=[config.BATCH_SIZE, image_size, image_size, 3], name='input_image'); the outputs are the names given when the model was defined); see https://github.com/onnx/tensorflow-onnx
To find the inputs and outputs of the TensorFlow graph, the model developer will usually know them; otherwise you can consult TensorFlow's summarize_graph tool, for example:
summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
The example above shows that we need to build summarize_graph under tensorflow/tools/graph_transforms. In the TensorFlow source directory, run the following command to build it:
bazel build tensorflow/tools/graph_transforms:summarize_graph
The built binary is bazel-bin/tensorflow/tools/graph_transforms/summarize_graph. Run it against the frozen graph:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=path/frozen.pb
The output looks like this:
Found 1 possible inputs: (name=input_image, type=float(1), shape=[384,12,12,3])
No variables spotted.
Found 3 possible outputs: (name=cls_prob, op=Squeeze) (name=bbox_pred, op=Squeeze) (name=landmark_pred, op=Squeeze)
Found 1985 (1.99k) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 62 Const, 37 Identity, 14 Mul, 10 Add, 9 Sub, 8 FusedBatchNorm, 6 Conv2D, 5 Abs, 5 Relu, 4 RandomUniform, 3 Squeeze, 2 DepthwiseConv2dNative, 1 AdjustSaturation, 1 AdjustHue, 1 AdjustContrastv2, 1 MaxPool, 1 Placeholder, 1 Softmax
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/data/proj/FaceLandmark/mtcnn/MTCNN-Tensorflow-master/data/MTCNN_model/PNet_landmark-Adam/onnx/frozen_graph.pb --show_flops --input_layer=input_image --input_layer_type=float --input_layer_shape=384,12,12,3 --output_layer=cls_prob,bbox_pred,landmark_pred
Besides the inputs and outputs, it also prints some other information. Note that summarize_graph does not show node port numbers; the script below prints every node together with its output tensors (whose names include the port number) and more:
# https://blog.csdn.net/u012328159/article/details/81101074
import sys
import tensorflow as tf

model_path = sys.argv[1]  # e.g. "pnet_frozen_model.pb"

# Load the frozen GraphDef from disk.
with tf.gfile.GFile(model_path, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and list every op with its output tensors;
# tensor names carry the port number, e.g. "cls_prob:0".
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    for op in graph.get_operations():
        print(op.name, op.values())
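Assuming the script is saved as list_nodes.py (the filename is arbitrary), run it as:
python list_nodes.py path/frozen_graph.pb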
5. Download the tensorflow-onnx source from https://github.com/onnx/tensorflow-onnx and convert with tf2onnx.
Example:
python -m tf2onnx.convert \
--input tests/models/fc-layers/frozen.pb \
--inputs X:0 \
--outputs output:0 \
--output tests/models/fc-layers/model.onnx \
--verbose
The Usage section at https://github.com/onnx/tensorflow-onnx says tf2onnx prefers a frozen TensorFlow graph, which means a frozen TensorFlow model can be converted directly. We have already frozen ours, so the command above applies as-is; the complete command for our model is shown further below.
The model's input and output names must be given in node_name:port_id format (e.g. input_image:0); otherwise the conversion will fail later on.
Some TensorFlow models contain ops that ONNX does not support. In that case, list those ops with the --custom-ops flag, and they will be converted using ops defined in the ai.onnx.converters.tensorflow domain.
--custom-ops
the runtime may support custom ops that are not defined in onnx. A user can ask the converter to map to custom ops by listing them with the --custom-ops option. Tensorflow ops listed here will be mapped to a custom op with the same name as the tensorflow op, but in the onnx domain ai.onnx.converters.tensorflow. For example: --custom-ops Print will insert an op Print in the onnx domain ai.onnx.converters.tensorflow into the graph. We also support a python api for custom ops, documented later in this readme.
For example, add --custom-ops AdjustContrastv2,AdjustHue,AdjustSaturation
A complete command example:
python -m tf2onnx.convert --input path/frozen_graph.pb --inputs input_image:0 --outputs cls_prob:0,bbox_pred:0,landmark_pred:0 --output path/pnet.onnx --verbose --custom-ops AdjustContrastv2,AdjustHue,AdjustSaturation
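After conversion, it is worth verifying that the ONNX file is well-formed. A minimal sketch using the onnx Python package (the path matches the command above; note that models containing custom-domain ops may not pass the checker):
import onnx

# Load the converted model and run ONNX's structural validator.
model = onnx.load("path/pnet.onnx")
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))  # human-readable summary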
Method 2:
Convert directly with tf2onnx, as already mentioned in the last step of Method 1; see https://github.com/onnx/tensorflow-onnx
The model's input and output names must use the node_name:port_id format.
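As a final smoke test, you can run the converted model with onnxruntime. A minimal sketch (the input shape comes from the summarize_graph output above; this assumes the runtime can resolve every op in the model):
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("path/pnet.onnx")
# tf2onnx keeps TensorFlow tensor names, so query the session rather
# than hard-coding "input_image:0".
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(384, 12, 12, 3).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)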