While building a more complex network with Keras, I ran into two errors:
Problem 1: TypeError: unhashable type: 'Dimension'
Problem 2: AttributeError: 'Tensor' object has no attribute '_keras_history'
Reference links:
05_keras入门多输入多输出模型(上)
The GitHub discussion thread for this issue
Excerpt from train.py:
(train_x, train_y), (test_x, test_y) = datasets.cifar10.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_y = train_y.flatten()
test_y = test_y.flatten()
train_y = tf.one_hot(train_y, depth=10)  # raises error 1 here
test_y = tf.one_hot(test_y, depth=10)  # raises error 1 here
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
model_input = Input(shape=(32, 32, 3))
model = LENET(model_input, Convway=convway, Poolingway=poolingway)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
history = model.fit(x=train_x, y=train_y, batch_size=50, verbose=1, epochs=epochs, validation_data=(test_x, test_y))
Excerpt from model.py:
def LENET(input, Convway=False, Poolingway=False, **kwargs):
    x = Conv2D(6, (5, 5), padding='same', name='lenet_conv1')(input)
    x = Activation('relu', name='activation_1')(x)
    if Convway:
        print("using conv way!")
        x1 = convway(x, name='convway_1', Filters=int(x.shape[3]), concatenate=True)
        x = x1 + x  # raises error 2
        # x = Add()([x1, x])  # the right way
    if Poolingway:
        print("using pooling way!")
        x = poolingway(x, name='poolingway_1') + x  # raises error 2
        # x = Add()([poolingway(x, name='poolingway_1'), x])
    x = MaxPooling2D((2, 2), strides=(2, 2), name='lenet_maxpool1')(x)
    x = Conv2D(16, (5, 5), padding='same', name='lenet_conv2')(x)
    x = Activation('relu', name='activation_2')(x)
    x = MaxPooling2D((2, 2), strides=2, name='lenet_maxpool2')(x)
    x = Flatten()(x)
    x = Dense(256, activation='relu')(x)
    x = Dense(128, activation='relu')(x)
    result = Dense(10, activation='softmax')(x)
    model = Model(inputs=input, outputs=result, name="BaseModel")
    return model
Apart from the few if statements, this is really just a LeNet network; it is only built with Keras' Model (functional) API.
And that is exactly where the cause lies: Keras' input interface is not very compatible with tensors. It mostly expects numpy arrays, or dicts of arrays, as the input data.
For reference, see 05_keras入门多输入多输出模型(上).
So the one-hot encoding we did above changed the data type: applying a TensorFlow op (tf.one_hot) converts the numpy labels into a tensor, and feeding that tensor to fit() is what raises error 1.
Original:
train_y = tf.one_hot(train_y, depth=10)  # raises error 1 here
test_y = tf.one_hot(test_y, depth=10)  # raises error 1 here
Modified:
from keras.utils import to_categorical  # to_categorical returns a plain numpy array

train_y = to_categorical(train_y)
test_y = to_categorical(test_y)
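For a quick sanity check of the type difference, here is a small sketch of my own (it assumes standalone Keras with the TensorFlow backend; the variable names are made up):

import numpy as np
import tensorflow as tf
from keras.utils import to_categorical

labels = np.array([0, 1, 2])
print(type(tf.one_hot(labels, depth=10)))   # a TF Tensor -> this is what broke model.fit here
print(type(to_categorical(labels, 10)))     # <class 'numpy.ndarray'> -> what Keras fit() expects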
Problem 2: AttributeError: 'Tensor' object has no attribute '_keras_history'
After going through many articles and blog posts, I found that this error comes from operations that are not Keras functions. With TensorFlow as the backend, anything that is not a Keras layer falls back to a plain tensor operation, so the corresponding variable becomes a bare tensor (without the _keras_history attribute that Keras layers attach), and that raises the error.
The link below is a discussion of how other people solved the same problem:
The GitHub discussion thread for this issue
In short, a non-Keras function was used inside the model.
In Keras, addition should be done with the Add() layer, not TF's sum, and not by adding the two tensors directly (this is probably related to TensorFlow being the backend: a direct + simply becomes a TF add op).
z = Add()([x, y])
class Add(_Merge):
    """Layer that adds a list of inputs.

    It takes as input a list of tensors,
    all of the same shape, and returns
    a single tensor (also of the same shape).

    # Examples

    ```python
        import keras

        input1 = keras.layers.Input(shape=(16,))
        x1 = keras.layers.Dense(8, activation='relu')(input1)
        input2 = keras.layers.Input(shape=(32,))
        x2 = keras.layers.Dense(8, activation='relu')(input2)
        added = keras.layers.Add()([x1, x2])  # equivalent to added = keras.layers.add([x1, x2])

        out = keras.layers.Dense(4)(added)
        model = keras.models.Model(inputs=[input1, input2], outputs=out)
    ```
    """

    def _merge_function(self, inputs):
        output = inputs[0]
        for i in range(1, len(inputs)):
            output += inputs[i]
        return output
So the addition has to go through this Add layer, not TF's sum function.
The full code is in model.py and train.py.
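As a minimal toy sketch of the difference (my own example, assuming standalone Keras 2.x with the TensorFlow 1.x backend, not code from this project):

from keras.layers import Input, Dense, Add
from keras.models import Model

inp = Input(shape=(16,))
a = Dense(8)(inp)
b = Dense(8)(inp)

# bad = a + b                      # plain backend add: the result carries no _keras_history
# Model(inp, Dense(1)(bad))        # -> AttributeError: 'Tensor' object has no attribute '_keras_history'

good = Add()([a, b])               # Keras layer: the layer metadata is preserved
model = Model(inputs=inp, outputs=Dense(1)(good))
model.summary()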
Using the latest version of Keras might, might, might solve this problem as well.
(I have not tried it myself; I only saw others say it works.)
Here is an example of the other approach: write the TF operation as a layer call, i.e., a custom layer.
The example below uses the tf.image.resize_images function.
import tensorflow as tf
from keras import backend as K
from keras.layers import Layer


class UpSamplingBilinear(Layer):
    def __init__(self, scale=4):
        self.scale = scale
        super(UpSamplingBilinear, self).__init__()

    def build(self, input_shape):
        self.shape = input_shape
        super(UpSamplingBilinear, self).build(input_shape)

    def call(self, inputs, **kwargs):
        # target (height, width) after scaling
        new_size = self.compute_output_shape(self.shape)[-2:]
        # NCHW -> NHWC, resize with the TF op, then back to NCHW
        x = K.permute_dimensions(inputs, [0, 2, 3, 1])
        x = tf.image.resize_images(x, new_size)
        x = K.permute_dimensions(x, [0, 3, 1, 2])
        return x

    def compute_output_shape(self, input_shape):
        # channels_first: (batch, channels, height, width)
        input_shape = list(input_shape)
        input_shape[2] *= self.scale
        input_shape[3] *= self.scale
        return tuple(input_shape)
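And a quick usage sketch of my own (assuming channels_first data, which is what the permute calls above imply; the shapes and layer sizes here are made up for illustration):

from keras.layers import Input, Conv2D
from keras.models import Model

inp = Input(shape=(3, 32, 32))  # channels_first: (channels, height, width)
x = Conv2D(8, (3, 3), padding='same', data_format='channels_first')(inp)
x = UpSamplingBilinear(scale=2)(x)  # the custom layer wraps tf.image.resize_images
model = Model(inputs=inp, outputs=x)
model.summary()  # spatial dims should show up as 64 x 64

Because the TF op is wrapped inside a Layer's call(), the output keeps its _keras_history and can be used to build the Model as usual.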