The fix: increase the system's virtual memory.
D:\Anaconda3\lib\site-packages\torch\onnx\symbolic_helper.py:266: UserWarning: You are trying to export the model with onnx:Resize for ONNX opset version 10. This operator might cause results to not match the expected results by PyTorch.
ONNX’s Upsample/Resize operator did not match Pytorch’s Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch’s behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.
warnings.warn("You are trying to export the model with " + onnx_op + " for ONNX opset version "
…
Roughly translated: you are exporting the model with onnx:Resize at ONNX opset version 10, and this operator may produce results that do not match PyTorch's.
ONNX's Upsample/Resize did not match PyTorch's interpolation until opset 11, which added the attributes needed to reproduce PyTorch's behavior (such as coordinate_transformation_mode and nearest_mode).
Opset 11 or above is recommended for models that use this operator.
In other words, the opset version has to be raised. To find out which version exactly, I searched Baidu for
You are trying to export the model with onnx:Resize for ONNX opset version 10
and found the answer at https://blog.csdn.net/flyfish1986/article/details/115031540, thanks to that blogger.
Solution
The default opset_version is 9; change it to 11:
import torch
torch.onnx.export(model, …, opset_version=11)
Afterwards it still complained about a version problem, so just to see what would happen I changed it to 12, and then the export went through.
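If you are not sure which opset your local environment knows about, a quick check like the sketch below can help. This is just a convenience check assuming the onnx package is installed; it reports the highest opset the installed onnx library itself knows about, which is an upper bound rather than a guarantee that torch.onnx can emit that opset.
import torch
import onnx
from onnx import defs

# Versions of the two packages involved in the export
print("torch:", torch.__version__)
print("onnx :", onnx.__version__)
# Highest ONNX opset known to the installed onnx package
print("max opset known to onnx:", defs.onnx_opset_version())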
Here is my conversion script, attached for reference. It is also adapted from someone else's blog, with a few small changes. Pay attention to the CPU vs. CUDA issue when you use it: my GPU memory was too small, so I did the conversion on the CPU. It eats quite a bit of RAM, but it only has to run once; as long as it works, that is good enough.
#
# demo.py
# Export a trained DeepLabV3+ checkpoint to ONNX
#
import argparse
import os
import numpy as np
import time
from modeling.deeplab import *
from dataloaders import custom_transforms as tr
from PIL import Image
import torch
from torchvision import transforms
from dataloaders.utils import *
from utils.ColorMap import colors
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
# from torchvision.utils import make_grid, save_image,to_image
palette = np.array(colors).reshape(-1).tolist()
def main():
    global palette
    print("start to onnx ...")
    parser = argparse.ArgumentParser(description="PyTorch DeeplabV3Plus Training")
    parser.add_argument('--checkpoint_file', type=str, required=True, help='path of the trained checkpoint (.pth) to convert')
    parser.add_argument('--export_onnx_file', type=str, required=True, help='path of the ONNX file to write')
    parser.add_argument('--backbone', type=str, default='resnet', choices=['resnet', 'xception', 'drn', 'mobilenet'], help='backbone name (default: resnet)')
    parser.add_argument('--out_stride', type=int, default=16, help='network output stride (default: 16)')
    parser.add_argument('--num_classes', type=int, default=2, help='number of classes (default: 2)')
    parser.add_argument('--sync_bn', type=bool, default=None, help='whether to use sync bn (default: auto)')
    parser.add_argument('--batch_size', type=int, default=4, help='batch size of the dummy input')
    # parser.add_argument('--out-path', type=str, required=True, help='mask image to save')
    # parser.add_argument('--ckpt', type=str, default='deeplab-resnet.pth', help='saved model')
    # parser.add_argument('--no-cuda', action='store_true', default=False, help='disables CUDA training')
    # parser.add_argument('--gpu-ids', type=str, default='0', help='use which gpu to train, must be a comma-separated list of integers only (default=0)')
    # parser.add_argument('--dataset', type=str, default='pascal', choices=['pascal', 'coco', 'cityscapes', 'invoice'], help='dataset name (default: pascal)')
    # parser.add_argument('--crop-size', type=int, default=513, help='crop image size')
    parser.add_argument('--freeze_bn', type=bool, default=True, help='whether to freeze bn parameters (default: True)')
    args = parser.parse_args()

    # args.cuda = not args.no_cuda and torch.cuda.is_available()
    model = DeepLab(num_classes=args.num_classes,
                    backbone=args.backbone,
                    output_stride=args.out_stride,
                    sync_bn=args.sync_bn,
                    freeze_bn=args.freeze_bn)

    # Load the checkpoint; switch to the commented line below to stay entirely on the CPU
    ckpt = torch.load(args.checkpoint_file, map_location=lambda storage, loc: storage.cuda(0))
    # ckpt = torch.load(args.checkpoint_file, map_location='cpu')
    model.load_state_dict(ckpt['state_dict'])
    # if torch.cuda.is_available():
    #     # model = model.cuda()
    #     device = torch.device("cuda:0")
    #     model.to(device)
    model.eval()

    batch_size = args.batch_size                          # batch size
    input_shape = (3, 1080, 1920)                         # input shape; change it to match your own input
    inputx = torch.randn(args.batch_size, *input_shape)   # dummy input tensor
    # inputx = inputx.to('cpu')
    # inputx = inputx.cuda()
    # export_onnx_file = os.path.join(Seting.AIModelsDir_Production, "deeplabv3model.onnx")  # destination ONNX file name
    torch.onnx.export(model,
                      inputx,
                      args.export_onnx_file,
                      opset_version=12,
                      do_constant_folding=True,        # run constant-folding optimization
                      input_names=["input"],           # input name
                      output_names=["output"],         # output name
                      dynamic_axes={"input": {0: "batch_size"},    # make the batch dimension dynamic
                                    "output": {0: "batch_size"}}
                      )
    # print("image save in image_path.")
    print("end to onnx ...")

if __name__ == "__main__":
    main()
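After the export finishes, a quick sanity check is worth running. The sketch below is just what I would do, assuming onnx and onnxruntime are installed; the file name deeplabv3plus.onnx is a placeholder for whatever you passed as --export_onnx_file. It validates the exported graph and runs one inference with batch size 1 (instead of the export-time 4) to confirm the dynamic batch axis actually works.
import numpy as np
import onnx
import onnxruntime as ort

onnx_path = "deeplabv3plus.onnx"  # placeholder: use your --export_onnx_file value

# Structural check of the exported graph
onnx.checker.check_model(onnx.load(onnx_path))

# Run one inference with a different batch size to exercise the dynamic batch axis
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
dummy = np.random.randn(1, 3, 1080, 1920).astype(np.float32)
out = sess.run(["output"], {"input": dummy})[0]
print("output shape:", out.shape)  # expect (1, num_classes, 1080, 1920)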