DeepLabV3+ Single-Image Prediction: Code and Troubleshooting

Problem 1: after exporting the model with export_model.py and running a forward pass through the exported network, the following error appears:

2019-03-25 17:47:42.896918: W tensorflow/core/framework/allocator.cc:122] Allocation of 62720000 exceeds 10% of system memory.
2019-03-25 17:47:43.105534: W tensorflow/core/framework/allocator.cc:122] Allocation of 62720000 exceeds 10% of system memory.
2019-03-25 17:47:43.406195: W tensorflow/core/framework/allocator.cc:122] Allocation of 62720000 exceeds 10% of system memory.
2019-03-25 17:47:43.663981: W tensorflow/core/framework/allocator.cc:122] Allocation of 63438848 exceeds 10% of system memory.
2019-03-25 17:47:51.507348: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at spacetobatch_op.cc:219 : Invalid argument: padded_shape[0]=49 is not divisible by block_shape[0]=2
2019-03-25 17:47:53.206279: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at spacetobatch_op.cc:219 : Invalid argument: padded_shape[0]=49 is not divisible by block_shape[0]=2
Traceback (most recent call last):
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[0]=49 is not divisible by block_shape[0]=2
	 [[{{node import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND}} = SpaceToBatchND[T=DT_FLOAT, Tblock_shape=DT_INT32, Tpaddings=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](import/xception_65/exit_flow/block1/unit_1/xception_module/add, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/block_shape, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/paddings)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "infer.py", line 30, in 
    result = sess.run(output)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[0]=49 is not divisible by block_shape[0]=2
	 [[node import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND (defined at infer.py:26)  = SpaceToBatchND[T=DT_FLOAT, Tblock_shape=DT_INT32, Tpaddings=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](import/xception_65/exit_flow/block1/unit_1/xception_module/add, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/block_shape, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/paddings)]]

Caused by op 'import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND', defined at:
  File "infer.py", line 26, in 
    return_elements=["SemanticPredictions:0"])
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in 
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): padded_shape[0]=49 is not divisible by block_shape[0]=2
	 [[node import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND (defined at infer.py:26)  = SpaceToBatchND[T=DT_FLOAT, Tblock_shape=DT_INT32, Tpaddings=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](import/xception_65/exit_flow/block1/unit_1/xception_module/add, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/block_shape, import/xception_65/exit_flow/block2/unit_1/xception_module/separable_conv1_depthwise/depthwise/SpaceToBatchND/paddings)]]

Solution:

When exporting with export_model.py, the input size given via the crop_size argument must match the size of the image fed to the network during forward inference; otherwise this error is raised.
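Concretely, either re-export the model with crop_size equal to the test image size, or make the test image match the exported crop_size before feeding it. Below is a minimal sketch of the second option (not from the original post); the 513x513 crop size, the 'test.jpg' path, and the zero padding are placeholder assumptions:

import numpy as np
from keras.preprocessing.image import load_img, img_to_array

EXPORT_CROP_SIZE = (513, 513)  # placeholder: must equal the crop_size values passed to export_model.py

img = img_to_array(load_img('test.jpg'))   # placeholder path; array of shape (height, width, 3)
h, w = img.shape[:2]
assert h <= EXPORT_CROP_SIZE[0] and w <= EXPORT_CROP_SIZE[1], 'test image is larger than the exported crop_size'

# Zero-pad on the bottom/right so the input has exactly the spatial size the graph was exported for.
padded = np.zeros((EXPORT_CROP_SIZE[0], EXPORT_CROP_SIZE[1], 3), dtype=np.uint8)
padded[:h, :w, :] = img.astype(np.uint8)
batch = np.expand_dims(padded, axis=0)     # (1, crop_height, crop_width, 3), uint8 as ImageTensor expects

The resulting batch can then be fed to ImageTensor:0 exactly as in the scripts below; the only point is that its spatial size matches what export_model.py was told.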

Problem 2:

Executor failed to create kernel. Invalid argument: Default AvgPoolingOp only supports NHWC on device type CPU

I googled many suggested fixes and none of them worked; for a while I even suspected my own model was broken, and the error cannot be traced back to a specific line of a specific file. After switching to a different way of loading and running the graph, the problem went away, so I suspect it is a framework issue.

Code for the original method:

import tensorflow as tf
import numpy as np
import cv2 as cv
from keras.preprocessing.image import load_img, img_to_array
 
#img = load_img(img_path)  # path of the image to predict

#img = load_img('datasets/testset/JPEGImages/2018_010001.jpg')
img = load_img('/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/models/research/deeplab/datasets/testset/JPEGImages/2018_010001.jpg')
img = img_to_array(img)
img = np.expand_dims(img, axis=0).astype(np.uint8)  # uint8 matches the input dtype defined when the model was exported
 
# Load the frozen inference graph
sess = tf.Session()
#with open("output_model/frozen_inference_graph_473.pb", "rb") as f:
with open("/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/models/research/deeplab/output_model/frozen_inference_graph_0325.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    # input_map binds the graph's input tensor to our image;
    # return_elements names the output tensor to fetch (both introduced earlier).
    output = tf.import_graph_def(graph_def, input_map={"ImageTensor:0": img},
                                 return_elements=["SemanticPredictions:0"])

result = sess.run(output)
print(type(result))        # a list with one element
print(result[0].shape)     # (1, height, width)
print(result)
print(result[0])
cv.imwrite('aaa.png', result[0][0].astype(np.uint8))  # cv.imwrite needs the image data as well as the filename

Code for the new method:

import tensorflow as tf
import numpy as np
import cv2 as cv
from keras.preprocessing.image import load_img, img_to_array
from matplotlib import pyplot as plt

#img = load_img(img_path)  # path of the image to predict

#img = load_img('datasets/testset/JPEGImages/2018_010001.jpg')
img = load_img('/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/models/research/deeplab/datasets/testset/JPEGImages/2018_010001.jpg')
img = img_to_array(img)
img = np.expand_dims(img, axis=0).astype(np.uint8)  # uint8 matches the input dtype defined when the model was exported
 
graph = tf.Graph()
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
graph_def = None
graph_path = "/home/pzn/anaconda3/envs/DeepLabV3/lib/python3.6/site-packages/tensorflow/models/research/deeplab/output_model/frozen_inference_graph_0325.pb"
with tf.gfile.FastGFile(graph_path,'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

if graph_def is None:
    raise RuntimeError('Failed to load the frozen inference graph.')

with graph.as_default():
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=graph)
result = sess.run(
                OUTPUT_TENSOR_NAME,
                feed_dict={INPUT_TENSOR_NAME: img})

print(type(result))      # <class 'numpy.ndarray'>
print(result.shape)      # (1, height, width)

print(result)
print(result[0])         # the (height, width) label map

cv.imwrite('aaa.jpg', result[0].astype(np.uint8))  # cast first: cv2 cannot write the integer label map directly
plt.imshow(result[0])
plt.show()
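Note that result[0] holds raw class indices, so the file written above looks almost black. The following continuation of the script (not from the original post) maps the indices to an arbitrary BGR palette purely for visualization; the four-class palette is a placeholder:

label_map = result[0].astype(np.uint8)                 # (height, width) class indices
palette = np.array([[0, 0, 0],                         # class 0: background
                    [0, 0, 255],                       # class 1 (BGR order for cv2)
                    [0, 255, 0],                       # class 2
                    [255, 0, 0]], dtype=np.uint8)      # class 3
color_seg = palette[np.clip(label_map, 0, len(palette) - 1)]   # (height, width, 3) color image
cv.imwrite('aaa_color.png', color_seg)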

While debugging this issue I also took the time to learn pdb, the Python debugger; see:
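As a quick illustration of that workflow (the breakpoint placement below is just an example, not from the original post), you can pause right before the failing sess.run and inspect the tensors interactively, or launch the whole script under the debugger with python -m pdb infer.py:

import pdb

pdb.set_trace()              # execution stops here: n = next line, p <expr> = print a value, c = continue
result = sess.run(output)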

 
