【ONNX】Dynamic input sizes (multiple outputs / multiple inputs)

The details are covered well in this answer:

This is a very good question and it’s a topic we have been discussing repeatedly recently. The answer has three parts:

1. whether onnx supports representing models with dynamic shape
2. whether frontends (like pytorch) support exporting models with dynamic shape
3. whether backends (like caffe2) support importing models with dynamic shape
For 1, at the serialization format's level, onnx supports representing models with dynamic shape. If you look at TensorShapeProto, which is used to describe the shape of the inputs and outputs, it has dim_param to represent symbolic/dynamic shape. However, at this point, all the tooling (checker, shape inference, etc.) basically doesn't handle symbolic shapes and simply assumes them to be static. There is ongoing work to update them, e.g. #632 is going to address this issue in shape inference. The work should not be too much and we can expect it to happen soon.

For 2, it depends on the type of frontend:

For tensorflow->onnx and caffe2->onnx, these frontends do static-graph-to-static-graph conversion, so they normally support dynamic shape out of the box. But I have seen a few cases where, due to differing semantics of some operators between these frameworks and onnx, even the static conversion needs some shape information; these can be solved relatively easily (e.g. by adding support for the onnx semantics in the framework).
For pytorch->onnx and other similar frontends that use tracing (on a limited set of sample inputs), dynamic shape is a natural limitation but not technically impossible. By rewriting the few places in the model that operate directly on concrete sizes into a form that lets the tracer be aware of these operations (see here for a pytorch example), exported models will be compatible with dynamic shapes. Recently we scanned through some real-world pytorch models (e.g.) and found that all the size operations are actually pretty simple and thus easily rewritable.
For 3, the technical difficulties are similar to those of frontends that do static conversion, and thus should be relatively easy to tackle.

So my conclusion is: yes, dynamic shape is definitely something we should and will support in the near future; it's just not ready yet at this point.

The same issue also gives a simple workaround:

import onnx

model = onnx.load('model.onnx')
model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = '?'
onnx.save(model, 'dynamic_model.onnx')

Later, in actual use, I ran into the question of how to specify dynamic_axes when there are multiple inputs/outputs.

For example, when using RetinaFace, the input is NCHW but the output is a tuple of three tensors. Bluntly applying the method above does work, but when running a session with onnxruntime I hit the following warnings:

2021-07-20 09:38:30.057850600 [W:onnxruntime:, execution_frame.cc:721 VerifyOutputSizes] Expected shape from model of {1,15162,2} does not match actual shape of {1,15700,2} for output 837
2021-07-20 09:38:30.058313800 [W:onnxruntime:, execution_frame.cc:721 VerifyOutputSizes] Expected shape from model of {1,15162,10} does not match actual shape of {1,15700,10} for output 836

This seemed rather odd: the outputs had already been marked dynamic, so why the shape-mismatch warnings? The results were correct, but I still wanted to understand what was going on.

I then read the torch.onnx documentation, which is reasonably clear on this point:

dynamic_axes (dict&lt;string, dict&lt;int, string&gt;&gt; or dict&lt;string, list(int)&gt;, default empty dict)

a dictionary to specify dynamic axes of input/output, such that:

KEY: input and/or output names
VALUE: index of dynamic axes for given key and potentially the name to be used for exported dynamic axes. In general the value is defined according to one of the following ways or a combination of both:
(1). A list of integers specifying the dynamic axes of provided input. In this scenario automated names will be generated and applied to dynamic axes of provided input/output during export.
(2). An inner dictionary that specifies a mapping FROM the index of dynamic axis in corresponding input/output TO the name that is desired to be applied on such axis of such input/output during export.
Example. if we have the following shape for inputs and outputs:
shape(input_1) = (‘b’, 3, ‘w’, ‘h’)
and shape(input_2) = (‘b’, 4)
and shape(output) = (‘b’, ‘d’, 5)
Then dynamic axes can be defined either as:
ONLY INDICES:
dynamic_axes = {'input_1':[0, 2, 3], 'input_2':[0], 'output':[0, 1]}
where automatic names will be generated for exported dynamic axes
INDICES WITH CORRESPONDING NAMES:
dynamic_axes = {'input_1':{0:'batch', 2:'width', 3:'height'}, 'input_2':{0:'batch'}, 'output':{0:'batch', 1:'detections'}}
where provided names will be applied to exported dynamic axes
MIXED MODE OF (1) and (2):
dynamic_axes = {'input_1':[0, 2, 3], 'input_2':{0:'batch'}, 'output':[0,1]}
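To make the equivalence of the three forms concrete, here is a small pure-Python sketch. normalize_dynamic_axes is a hypothetical helper (not part of torch.onnx), and the auto-generated name pattern is only an illustrative guess at what the exporter does internally:

```python
def normalize_dynamic_axes(dynamic_axes):
    """Rewrite list-form entries into dict form by generating axis names.

    Hypothetical helper for illustration; the real auto-generated names
    are an internal detail of the torch.onnx exporter.
    """
    normalized = {}
    for name, axes in dynamic_axes.items():
        if isinstance(axes, dict):
            normalized[name] = dict(axes)  # already index -> name
        else:
            # list of ints -> generate a name per dynamic axis
            normalized[name] = {axis: f'{name}_dynamic_axes_{i + 1}'
                                for i, axis in enumerate(axes)}
    return normalized

# The "mixed mode" spec from the docs example above:
mixed = {'input_1': [0, 2, 3], 'input_2': {0: 'batch'}, 'output': [0, 1]}
print(normalize_dynamic_axes(mixed))
```

Either way, what ends up in the exported model is an index-to-name mapping per input/output; the list form just lets the exporter pick the names.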

That basically makes the usage clear. For example, if my model has a single NCHW input, the following is enough:

dynamic_axes = {
    'input': {0: 'batch_size', 1: 'channel', 2: 'height', 3: 'width'},  # all of N, C, H, W may vary
}

torch.onnx.export(model,  # model being run
                  x,  # model input (or a tuple for multiple inputs)
                  'save.onnx',
                  export_params=True,
                  opset_version=11,  # the ONNX version to export the model to
                  do_constant_folding=True,
                  input_names=['input'],  # the model's input names
                  dynamic_axes=dynamic_axes)

In my case, however, there is one input and multiple outputs (RetinaFace's boxes, landmarks, and confidence scores returned as a tuple), so multiple output_names need to be given, for example:

dynamic_axes = {
    'input': {0: 'batch_size', 1: 'channel', 2: 'height', 3: 'width'},
    'output0': {0: 'batch_size', 1: 'feature_maps'},
    'output1': {0: 'batch_size', 1: 'feature_maps'},
    'output2': {0: 'batch_size', 1: 'feature_maps'}
}

torch.onnx.export(model,  # model being run
                  x,  # model input (or a tuple for multiple inputs)
                  'face_detection_s.onnx',
                  export_params=True, 
                  opset_version=11, 
                  do_constant_folding=True,
                  input_names=['input'], 
                  output_names=['output0', 'output1', 'output2'],  # multiple output names
                  dynamic_axes=dynamic_axes)  # the dynamic_axes dict must cover each of them

After re-exporting like this, running the model again produces no warnings.


Thanks to: https://blog.csdn.net/weixin_38443388/article/details/108677003
