The previous post covered the most basic usage of the Keras interface with TensorFlow as its backend. This post continues with how to write a convolutional classification model and the various ways of saving it (weights only, or weights and network structure together).
As usual, reference material:
The official tutorial
[Note] You don't really need other blogs; just skip to my Colab notebook linked at the end, which contains the full walkthrough, including the errors I hit and some short notes. For clarity of presentation, the network structure is redefined from scratch each time below; readers comfortable with Python can simply wrap it in a def create_model(): function and call it wherever an untrained copy of the model is needed.
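As a sketch of that idea (using the simple dense network from the previous post; the exact layer sizes here are placeholders, not this post's CNN), the definition can be wrapped once and reused:

```python
from tensorflow import keras

def create_model():
    # Wrap the network definition in a function so an untrained copy
    # can be rebuilt with one call instead of repeating every layer.
    model = keras.models.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = create_model()       # fresh, untrained model
model_copy = create_model()  # another independent copy, e.g. for loading saved weights
```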
Recall the two ways of building a model introduced in the previous post:
model = keras.models.Sequential([
keras.layers.Flatten(...),
keras.layers.Dense(...),
...
])
model = keras.models.Sequential()
model.add(keras.layers.Flatten(...))
model.add(keras.layers.Dense(...))
The first style is more concise, the second more comfortable to extend; this post uses the second style to build a simple convolutional network.
Saving models requires handling file paths (hence os), and the data needs normalization (hence numpy). Also note that although we are learning Keras, we import not only keras but also tensorflow itself; the reason will become clear later.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import os
We'll use MNIST again; a later post may cover training on local image data and whether a data pipeline is needed for that.
Note that the labels must be converted to one-hot encoding and the images must be normalized:
mnist_dataset = keras.datasets.mnist
(train_x,train_y),(test_x,test_y)= mnist_dataset.load_data()
train_y = keras.utils.to_categorical(train_y,10)
test_y = keras.utils.to_categorical(test_y,10)
train_x = train_x / 255.0
test_x = test_x / 255.0
Another thing to note: Keras convolution layers expect a four-dimensional array, and you must know whether the data format is channels_first, i.e. <samples, channels, rows, cols>, or channels_last, i.e. <samples, rows, cols, channels>. By default the channel is the last dimension (channels_last):
train_x = train_x[ ..., np.newaxis ]
test_x = test_x[..., np.newaxis ]
print(train_x.shape)#(60000, 28, 28, 1)
Next we build a simplified AlexNet. Using the original architecture directly could be a problem: the input images are only 28*28, and repeated convolution and pooling shrink them until eventually there is nothing left to convolve or pool, so I tweaked it slightly.
model = keras.models.Sequential()
model.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu) )
model.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model.add( keras.layers.Flatten() )
model.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model.add( keras.layers.Dropout(rate=0.5) )
model.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model.add( keras.layers.Dropout(rate=0.5) )
model.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
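As a sanity check on that tweak, the spatial size after each layer can be traced with a small helper (my own utility function, not part of Keras); it reproduces the output shapes that summary() reports later (18 → 9 → 9 → 4 → 4 → 4 → 2):

```python
def conv_out(size, kernel, stride=1, padding='valid'):
    """Spatial output size of a square conv/pool layer, as Keras computes it."""
    if padding == 'same':
        return -(-size // stride)          # ceil(size / stride)
    return (size - kernel) // stride + 1   # 'valid': no padding

s = conv_out(28, 11)                 # 11x11 'valid' conv  -> 18
s = conv_out(s, 2, stride=2)         # 2x2 max pool        -> 9
s = conv_out(s, 5, padding='same')   # 5x5 'same' conv     -> 9
s = conv_out(s, 2, stride=2)         # 2x2 max pool        -> 4
s = conv_out(s, 3, padding='same')   # 3x3 'same' convs keep the size -> 4
s = conv_out(s, 2, stride=2)         # final max pool      -> 2
print(s * s * 256)                   # flattened feature count: 1024
```

With the original AlexNet kernel strides the map would shrink below the pooling window well before the last layer, which is why the strides and paddings above were softened.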
Keras has two cross-entropy losses for classification: sparse_categorical_crossentropy and categorical_crossentropy. Here comes the first pitfall: if you feed one-hot labels of shape [batch_size, 10] to a model compiled with sparse_..., it raises an error:

logits and labels must have the same first dimension, got logits shape [200,10] and labels shape [2000]

sparse_... expects integer class indices of shape [batch_size], so the one-hot matrix gets flattened into one long vector. Since our labels are one-hot, we must use the latter.
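The difference between the two losses can be illustrated with plain NumPy (a sketch of the underlying math, not the Keras implementation): both compute the same cross-entropy, but sparse_... takes integer class indices of shape [batch], while categorical_... takes one-hot rows of shape [batch, num_classes].

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],      # softmax outputs for 2 samples
                  [0.1, 0.8, 0.1]])
int_labels = np.array([0, 1])           # format sparse_categorical_crossentropy expects
onehot = np.eye(3)[int_labels]          # format categorical_crossentropy expects

# categorical: dot each one-hot row with the log-probabilities
ce = -np.sum(onehot * np.log(probs), axis=1)
# sparse: directly index the probability of the true class
sparse_ce = -np.log(probs[np.arange(2), int_labels])

print(np.allclose(ce, sparse_ce))  # True
```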
model.compile( optimizer= keras.optimizers.Adam(),loss= keras.losses.categorical_crossentropy, metrics=['accuracy'] )
Now the model can be trained:
model.fit(train_x,train_y,epochs=2, batch_size=200)
'''
Epoch 1/2
60000/60000 [==============================] - 26s 435us/step - loss: 0.2646 - acc: 0.9110
Epoch 2/2
60000/60000 [==============================] - 24s 407us/step - loss: 0.0510 - acc: 0.9855
'''
You can also inspect the network structure and parameter counts with the summary() function:
model.summary()
'''
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 7808
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 307392
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
conv2d_2 (Conv2D) multiple 663936
_________________________________________________________________
conv2d_3 (Conv2D) multiple 1327488
_________________________________________________________________
conv2d_4 (Conv2D) multiple 884992
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple 0
_________________________________________________________________
flatten (Flatten) multiple 0
_________________________________________________________________
dense (Dense) multiple 4198400
_________________________________________________________________
dropout (Dropout) multiple 0
_________________________________________________________________
dense_1 (Dense) multiple 16781312
_________________________________________________________________
dropout_1 (Dropout) multiple 0
_________________________________________________________________
dense_2 (Dense) multiple 40970
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
'''
Evaluate the model on the test set:
print(test_x.shape, test_y.shape)
model.evaluate( test_x ,test_y)
'''
(10000, 28, 28, 1) (10000, 10)
10000/10000 [==============================] - 3s 283us/step
[0.03784323987539392, 0.9897]
'''
You can also predict a single image, but note that the first dimension of the input is the sample count, so remember to add an extra axis:
test_img_idx = 1000
test_img = test_x[test_img_idx,...]
test_img= test_img[np.newaxis,...]
img_prob = model.predict( test_img )
plt.figure()
plt.imshow( np.squeeze(test_img) )
plt.title(img_prob.argmax())
To save checkpoints during training, the function to use is keras.callbacks.ModelCheckpoint:
checkpoint_path='./train_save/mnist.ckpt'
checkpoint_dir= os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback= keras.callbacks.ModelCheckpoint( checkpoint_path, save_weights_only= True, verbose=1 )
model.fit(train_x, train_y,epochs=2, validation_data=(test_x,test_y), callbacks=[cp_callback] )
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.1442 - acc: 0.9681
Epoch 00001: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer () but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.
Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 93s 2ms/step - loss: 0.1442 - acc: 0.9681 - val_loss: 0.0693 - val_acc: 0.9811
Epoch 2/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.0757 - acc: 0.9840
Epoch 00002: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer () but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.
Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 92s 2ms/step - loss: 0.0757 - acc: 0.9839 - val_loss: 0.0489 - val_acc: 0.9876
'''
There is a warning: the model was compiled with a Keras optimizer, so when saved in TensorFlow format with save_weights the optimizer state is not stored, and it suggests using an optimizer from tf.train instead. Fine, adjust the code:
import os
model.compile(optimizer = tf.train.AdamOptimizer(),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
checkpoint_path='./train_save2/mnist.ckpt'
checkpoint_dir= os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback= keras.callbacks.ModelCheckpoint( checkpoint_path, save_weights_only= True, verbose=1 )
model.fit(train_x, train_y,epochs=2, validation_data=(test_x,test_y), callbacks=[cp_callback] )
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.2241 - acc: 0.9325
Epoch 00001: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 60s 1ms/step - loss: 0.2239 - acc: 0.9326 - val_loss: 0.1009 - val_acc: 0.9765
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0866 - acc: 0.9801
Epoch 00002: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 56s 930us/step - loss: 0.0867 - acc: 0.9800 - val_loss: 0.0591 - val_acc: 0.9855
'''
No warning this time. Now try building an untrained model and loading the saved weights into it:
model_test = keras.models.Sequential()
model_test.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu) )
model_test.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test.add( keras.layers.Flatten() )
model_test.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test.add( keras.layers.Dropout(rate=0.5) )
model_test.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test.add( keras.layers.Dropout(rate=0.5) )
model_test.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test.compile(optimizer = tf.train.RMSPropOptimizer(learning_rate=0.01),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
Load the latest checkpoint:
! ls train_save
# checkpoint  mnist.ckpt.data-00000-of-00001  mnist.ckpt.index
latest = tf.train.latest_checkpoint('train_save2')
type(latest)  # str
loss,acc = model_test.evaluate(test_x,test_y)
print("Accuracy before loading weights: {:5.2f}%".format(100*acc))
model_test.load_weights(latest)
loss,acc = model_test.evaluate(test_x,test_y)
print("Accuracy after loading weights: {:5.2f}%".format(100*acc))
'''
10000/10000 [==============================] - 3s 308us/step
Accuracy before loading weights:  9.60%
10000/10000 [==============================] - 3s 264us/step
Accuracy after loading weights: 98.60%
'''
You can also specify how often a checkpoint is saved during training. This is an effective guard against overfitting, since you can later pick whichever set of trained weights performed best:
checkpoint_path="train_save3/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path,verbose=1, save_weights_only=True, period=1)
model.fit(train_x,train_y,epochs=2, callbacks=[cp_callback], validation_data=(test_x,test_y),verbose=1)
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0447 - acc: 0.9897
Epoch 00001: saving model to train_save3/cp-0001.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0446 - acc: 0.9897 - val_loss: 0.0421 - val_acc: 0.9920
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0478 - acc: 0.9885
Epoch 00002: saving model to train_save3/cp-0002.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0478 - acc: 0.9884 - val_loss: 0.0590 - val_acc: 0.9859
'''
Build another untrained model and load the results of the first epoch:
model_test1 = keras.models.Sequential()
model_test1.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu) )
model_test1.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test1.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test1.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test1.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test1.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test1.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test1.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test1.add( keras.layers.Flatten() )
model_test1.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test1.add( keras.layers.Dropout(rate=0.5) )
model_test1.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test1.add( keras.layers.Dropout(rate=0.5) )
model_test1.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test1.compile(optimizer = tf.train.AdamOptimizer(learning_rate=0.01),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
Select the first checkpoint to load:
loss,acc=model_test1.evaluate(test_x,test_y)
print("Accuracy before loading weights: {:5.2f}%".format(100*acc))
model_test1.load_weights("train_save3/cp-0001.ckpt")
loss,acc=model_test1.evaluate(test_x,test_y)
print("Accuracy after loading weights: {:5.2f}%".format(100*acc))
'''
10000/10000 [==============================] - 3s 260us/step
Accuracy before loading weights: 10.28%
10000/10000 [==============================] - 2s 244us/step
Accuracy after loading weights: 98.30%
'''
After training is finished, you can also call the save_weights function yourself to save the weights:
model.save_weights('./train_save3/mnist_checkpoint')
Build an untrained model:
model_test2 = keras.models.Sequential()
model_test2.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu) )
model_test2.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test2.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test2.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test2.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test2.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test2.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test2.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test2.add( keras.layers.Flatten() )
model_test2.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test2.add( keras.layers.Dropout(rate=0.5) )
model_test2.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test2.add( keras.layers.Dropout(rate=0.5) )
model_test2.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test2.compile(optimizer = tf.train.AdamOptimizer(learning_rate=0.01),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
Load the weights and evaluate the model:
loss,acc = model_test2.evaluate(test_x,test_y)
print("Accuracy before loading weights: {:5.2f}%".format(100*acc))
model_test2.load_weights('./train_save3/mnist_checkpoint')
loss,acc = model_test2.evaluate(test_x,test_y)
print("Accuracy after loading weights: {:5.2f}%".format(100*acc))
'''
10000/10000 [==============================] - 3s 303us/step
Accuracy before loading weights: 12.15%
10000/10000 [==============================] - 3s 260us/step
Accuracy after loading weights: 98.59%
'''
Next, saving the model structure and weights together.
Build an untrained model:
model_test3 = keras.models.Sequential()
model_test3.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu) )
model_test3.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test3.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test3.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test3.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test3.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test3.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test3.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test3.add( keras.layers.Flatten() )
model_test3.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test3.add( keras.layers.Dropout(rate=0.5) )
model_test3.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test3.add( keras.layers.Dropout(rate=0.5) )
model_test3.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test3.compile(optimizer = tf.train.AdamOptimizer(),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
model_test3.fit(train_x,train_y,batch_size=200,epochs=2)
Save it:
model_test3.save('my_model.h5')
'''
Currently `save` requires model to be a graph network. Consider using `save_weights`, in order to save the weights of the model.
'''
An error appears, saying that save requires the model to be a graph network, and that otherwise only the weights can be saved. The real cause is that our first layer never declared its input shape, so Keras cannot build the model as a graph. Let's specify it:
model_test4 = keras.models.Sequential()
model_test4.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu,input_shape=(28,28,1)) )
model_test4.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test4.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test4.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test4.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test4.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test4.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test4.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test4.add( keras.layers.Flatten() )
model_test4.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test4.add( keras.layers.Dropout(rate=0.5) )
model_test4.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test4.add( keras.layers.Dropout(rate=0.5) )
model_test4.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test4.compile(optimizer = tf.train.AdamOptimizer(),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
model_test4.fit(train_x,train_y,batch_size=200,epochs=2)
'''
Epoch 1/2
60000/60000 [==============================] - 22s 364us/step - loss: 0.4112 - acc: 0.8556
Epoch 2/2
60000/60000 [==============================] - 21s 342us/step - loss: 0.0584 - acc: 0.9838
'''
Try saving again:
model_test4.save('my_model.h5')
'''
WARNING:tensorflow:TensorFlow optimizers do not make it possible to access optimizer attributes or optimizer state after instantiation. As a result, we cannot save the optimizer as part of the model save file.You will have to compile your model again after loading it. Prefer using a Keras optimizer instead (see keras.io/optimizers).
'''
Another warning: this time it says a TensorFlow optimizer's state cannot be saved as part of the model file, and suggests using a Keras optimizer instead. Fine, change it:
model_test5 = keras.models.Sequential()
model_test5.add( keras.layers.Conv2D( filters = 64, kernel_size=(11,11),strides = (1,1), padding='valid', activation= tf.keras.activations.relu,input_shape=(28,28,1)) )
model_test5.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test5.add( keras.layers.Conv2D( filters = 192, kernel_size=(5,5),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test5.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test5.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test5.add( keras.layers.Conv2D( filters = 384, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test5.add( keras.layers.Conv2D( filters = 256, kernel_size=(3,3),strides = (1,1), padding='same', activation= tf.keras.activations.relu) )
model_test5.add( keras.layers.MaxPool2D( pool_size=(2,2),strides=(2,2) ))
model_test5.add( keras.layers.Flatten() )
model_test5.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test5.add( keras.layers.Dropout(rate=0.5) )
model_test5.add( keras.layers.Dense( units=4096, activation= keras.activations.relu ) )
model_test5.add( keras.layers.Dropout(rate=0.5) )
model_test5.add( keras.layers.Dense(units=10 , activation= keras.activations.softmax ) )
model_test5.compile(optimizer = tf.keras.optimizers.Adam(),loss = keras.losses.categorical_crossentropy, metrics=['accuracy'] )
model_test5.fit(train_x,train_y,batch_size=200,epochs=2)
model_test5.save("my_model2.h5")
'''
Epoch 1/2
60000/60000 [==============================] - 26s 434us/step - loss: 0.2850 - acc: 0.9043
Epoch 2/2
60000/60000 [==============================] - 25s 409us/step - loss: 0.0555 - acc: 0.9847
'''
No errors this time. Now try loading the model and its parameters; since both the structure and the weights were saved, there is no need to redefine the network:
model_test6= keras.models.load_model("my_model2.h5")
model_test6.summary()
'''
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_10 (Conv2D) (None, 18, 18, 64) 7808
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 9, 9, 64) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 9, 9, 192) 307392
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 192) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 4, 4, 384) 663936
_________________________________________________________________
conv2d_13 (Conv2D) (None, 4, 4, 384) 1327488
_________________________________________________________________
conv2d_14 (Conv2D) (None, 4, 4, 256) 884992
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 2, 2, 256) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 1024) 0
_________________________________________________________________
dense_6 (Dense) (None, 4096) 4198400
_________________________________________________________________
dropout_4 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_7 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_5 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_8 (Dense) (None, 10) 40970
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
'''
Rock solid. Run a quick test:
model_test6.evaluate(test_x,test_y)
'''
10000/10000 [==============================] - 3s 302us/step
[0.05095283883444499, 0.9868]
'''
test_img= test_x[5000,...]
test_img=test_img[ np.newaxis,...]
pred_label = model_test6.predict_classes(test_img)
plt.figure()
plt.imshow( np.squeeze(test_img))
plt.title(pred_label)
This chapter covered how to build a simple convolutional network, along with several saving methods: weights only, and weights together with the model structure.
The main thing to remember: when saving only the weights (checkpoint format), use a TensorFlow optimizer; when saving the network structure and weights together, use a Keras optimizer.
The next chapter will work through a few deep-learning concepts in theory and in experiments, including BatchNorm, ResNet, and so on.
Blog code link: click here