Today I ran a quick test, this time under Horovod.
The problem showed up where the weight (parameter) file is loaded: before calling load_weights, the model has to be built first with build, and that build call produced the following error:
Exception ignored in: >
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 3462, in __del__
AttributeError: 'NoneType' object has no attribute 'device'
This is a truly cryptic error, and I could not figure out what was wrong at all. What I do know is that commenting out the build line makes it go away.
So I dropped Horovod and rewrote the script in plain TensorFlow 2.1.0, and there it runs fine with no problem.
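For reference, here is roughly what the Horovod-free rewrite looks like. This is a minimal sketch reconstructed from the full Horovod script at the end of this post: the plain Adam optimizer is my stand-in for the one returned by the package helper, and it assumes the checkpoint-1.h5 file from that script already exists.

import tensorflow as tf

(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
     tf.cast(mnist_labels, tf.int64)))
dataset = dataset.repeat().shuffle(10000).batch(128)

mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
    tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])
mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                    optimizer=tf.keras.optimizers.Adam(0.001),
                    metrics=['accuracy'])

# Build, then load the checkpoint. My first attempt passed
# input_shape=(None, 28, 28), which produces the ValueError shown just below.
mnist_model.build(input_shape=(None, 28, 28, 1))
mnist_model.load_weights('checkpoint-1.h5')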
But the first time I ran that version, the error I got looked like this:
Traceback (most recent call last):
  File "error.py", line 30, in <module>
    mnist_model.build(input_shape = (None, 28 ,28))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 260, in build
    super(Sequential, self).build(input_shape)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py", line 682, in build
    self.call(x, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 281, in call
    outputs = layer(inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 737, in __call__
    self.name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/input_spec.py", line 177, in assert_input_compatibility
    str(x.shape.as_list()))
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]
Exception ignored in: >
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 3462, in __del__
AttributeError: 'NoneType' object has no attribute 'device'
You can see the problem is again in the model.build line, and it ends with the same AttributeError: 'NoneType' object has no attribute 'device'.
This time, though, the error output is much longer, and the genuinely useful piece of information is the line five from the bottom:
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]
So the input shape passed to build was simply wrong. After correcting it, the script runs without any error outside Horovod.
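Concretely, the fix is just adding the channel dimension, since the Conv2D layers expect 4-D (NHWC) input; the corrected call is the same one used in the Horovod script at the end of this post.

# Wrong: only 3 dimensions, so the first Conv2D layer rejects the input
mnist_model.build(input_shape=(None, 28, 28))

# Right: include the trailing channel dimension
mnist_model.build(input_shape=(None, 28, 28, 1))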
Finally, I happened to notice this warning in the output:
[1,0]:2020-06-30 08:34:47.818137: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 376320000 exceeds 10% of system memory.
[1,0]:2020-06-30 08:34:49.030225: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 188160000 exceeds 10% of system memory.
This warning I know well: it means a single allocation is taking more than 10% of system memory. But my program is just a little MNIST test running on a fairly good server, and I have that server entirely to myself with no other jobs using memory, so why would this warning show up? Someone in https://github.com/tensorflow/tensorflow/issues/35326 hit the same error and raised exactly the same doubt.
To check whether this was really the problem, I watched memory usage over the whole run and never saw it go above 10%. I nevertheless commented out the data-processing part of the program, and sure enough it then ran cleanly with no error. So whether this crash is really a memory problem remains to be confirmed; my preliminary read is that it may well be related to memory in some way.
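As a back-of-the-envelope check of my own (a guess, not something from the issue thread), the two warned-about sizes match the full MNIST training set of 60000 28x28 images held once as float64 and once as float32, which is plausibly the mnist_images[..., tf.newaxis] / 255.0 result being converted to a tensor and then cast to float32 for from_tensor_slices.

# Back-of-the-envelope check on the two warned-about allocation sizes
# (60000 MNIST training images of 28x28 pixels, one float per pixel):
print(60000 * 28 * 28 * 8)  # 376320000 -> a float64 copy (the / 255.0 result)
print(60000 * 28 * 28 * 4)  # 188160000 -> a float32 copy (after tf.cast)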
Here is the program, so I can explain it in more detail:
import tensorflow as tf
import horovod.tensorflow.keras as hvd
import os
import datetime
import package
time_start = datetime.datetime.now()
# Initialization
Log, arg = package.initial()
# Specify GPU settings and the optimizer
gpus, opt = package.gpu_setting('keras+tensorflow2.0', Log)
(mnist_images, mnist_labels), _ = \
    tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
     tf.cast(mnist_labels, tf.int64))
)
# dataset = dataset.repeat().shuffle(10000).batch(128)  # original pipeline
dataset = dataset.repeat().batch(128)  # modified pipeline: shuffle removed
mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
    tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])
mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                    optimizer=opt,
                    metrics=['accuracy'],
                    experimental_run_tf_function=False)
# weight_file = os.path.join(arg.ckp_path, 'checkpoint-break-step-64.h5')
if hvd.rank() == 0:
    # Only rank 0 builds the model and loads the checkpoint
    mnist_model.build(input_shape=(None, 28, 28, 1))
    mnist_model.load_weights('checkpoint-1.h5')
The commented-out line "# dataset = dataset.repeat().shuffle(10000).batch(128)" is the original pipeline.
The line below it, "dataset = dataset.repeat().batch(128)", is my modified version.
Experiments show that swapping the original line for the modified one, i.e. simply removing shuffle, makes the error go away.
So is it really a memory thing after all???
All I can say is that there is some connection, but it probably is not caused by actually running out of memory.
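One last back-of-the-envelope note of my own (a rough estimate, not something I measured): the shuffle(10000) buffer itself should only hold about 30 MB, far less than the 188/376 MB allocations in the warnings, which is another reason I doubt this is a plain out-of-memory situation.

# Rough size of the shuffle(10000) buffer: 10000 elements, each a
# 28x28x1 float32 image (3136 bytes) plus an int64 label (8 bytes).
buffer_bytes = 10000 * (28 * 28 * 1 * 4 + 8)
print(buffer_bytes)            # 31440000
print(buffer_bytes / 2 ** 20)  # ~30 MB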