Keras Tutorial by Example (2) - The MNIST Dataset

The task this time is essentially the same as the one described in the earlier article "Handwritten Digit Recognition with Softmax": recognizing the handwritten digit images in the MNIST dataset. If that article has already given you some familiarity with the problem, you will also know that this is a classification task, i.e., assigning each image to one of the ten classes 0 through 9. As usual, we begin by importing the required packages and loading the dataset:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import numpy as np
import random
import keras
import matplotlib.pyplot as plt

from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.utils import np_utils

(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(x_train.shape, y_train.shape)

print(x_test.shape, y_test.shape)

The output of the code above is:
(60000, 28, 28) (60000,)
(10000, 28, 28) (10000,)
Here, 60000 is the number of images in the training set, 10000 is the number of images in the test set, and 28×28 is the size of each image.
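If you want to eyeball the data first, here is a minimal optional sketch (an addition, not part of the original walkthrough) that uses the already-imported matplotlib to display the first few training images with their labels. Note that it must run before the reshaping step below, since it expects the images in their 28×28 form:

# optional: preview the first few raw 28x28 training images with their labels
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    ax.imshow(x_train[i], cmap='gray')
    ax.set_title(int(y_train[i]))
    ax.axis('off')
plt.show()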

Next, the input data needs some preprocessing. We flatten each two-dimensional image matrix into a one-dimensional vector, then normalize the pixel values, i.e., rescale them from the range 0~255 down to 0~1.

x_train = x_train.reshape(x_train.shape[0], -1)
x_train = x_train.astype('float32')
x_test = x_test.reshape(x_test.shape[0], -1)
x_test = x_test.astype('float32')
# normalization to [0,1]
x_train /= 255
x_test /= 255

The labels then need some processing as well. The code below converts the original labels 0~9 into one-hot vectors of length 10.

y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
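To see what this encoding looks like, here is a quick optional check (a hypothetical example using the same np_utils helper, not from the original article): a label of 3 becomes a length-10 vector with a 1 at index 3.

print(np_utils.to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]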

Now we can start building the neural network. The first step is to define the model:

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
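Incidentally, Model is imported above but never used. For reference, here is a sketch of the same network written with the functional API instead of Sequential; it should behave identically, assuming the same Keras version:

from keras.layers import Input

# same architecture as above, expressed with the functional API
inputs = Input(shape=(784,))
x = Dense(512)(inputs)
x = Activation('relu')(x)
x = Dropout(0.2)(x)
x = Dense(512)(x)
x = Activation('relu')(x)
x = Dropout(0.2)(x)
x = Dense(10)(x)
outputs = Activation('softmax')(x)
model_fn = Model(inputs=inputs, outputs=outputs)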

Once the model has been defined, you can also display a summary of it with the following line:

model.summary()

The output of the code above is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 512)               401920    
_________________________________________________________________
activation_1 (Activation)    (None, 512)               0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 512)               262656    
_________________________________________________________________
activation_2 (Activation)    (None, 512)               0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 10)                5130      
_________________________________________________________________
activation_3 (Activation)    (None, 10)                0         
=================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
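These parameter counts are easy to verify by hand: a Dense layer has inputs × units weights plus units biases. So the first layer has 784 × 512 + 512 = 401,920 parameters, the second 512 × 512 + 512 = 262,656, and the output layer 512 × 10 + 10 = 5,130, which sums to the 669,706 shown above. The Activation and Dropout layers have no trainable parameters.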

Now we can move on to the second step, model compilation. This works just as before; the metrics argument specifies additional quantities you want Keras to compute during training:

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',  # or an RMSprop() instance
              metrics=['accuracy'])

Also, as mentioned before, if you are happy with an optimizer's default parameters, passing its name as a string is enough. Alternatively, you can customize the optimizer's parameters, for example:

rmsprop = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)

# metrics specifies extra quantities to report during training
model.compile(optimizer=rmsprop,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
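One caveat: in more recent Keras/TensorFlow releases, the lr argument has been renamed to learning_rate; the old name may still work in some versions but can trigger a deprecation warning.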

Then we can move on to the training stage.

# hyper-parameters (these values match the history.params output shown below)
batch_size = 128
number_epochs = 10

# start training
history = model.fit(x_train, y_train, epochs=number_epochs, batch_size=batch_size,
                    verbose=1, validation_data=(x_test, y_test))

You can see that the model's accuracy has already exceeded 97%:

......
59008/60000 [============================>.] - ETA: 0s - loss: 0.1009 - acc: 0.9691
59520/60000 [============================>.] - ETA: 0s - loss: 0.1006 - acc: 0.9693
60000/60000 [==============================] - 6s 100us/step - loss: 0.1007 - acc: 0.9692 - val_loss: 0.0939 - val_acc: 0.9702

If you want to inspect the parameters that were used for training, you can use:

history.params

The output is as follows:

{'batch_size': 128,
 'do_validation': True,
 'epochs': 10,
 'metrics': ['loss', 'acc', 'val_loss', 'val_acc'],
 'samples': 60000,
 'steps': None,
 'verbose': 1}
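The per-epoch metric values themselves live in history.history, keyed by the metric names listed above. As an optional extra (not in the original article), here is a minimal sketch for plotting the training and validation curves; note that newer Keras versions name the keys 'accuracy'/'val_accuracy' rather than the 'acc'/'val_acc' shown here:

# plot loss and accuracy curves from the History object
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['acc'], label='train acc')
plt.plot(history.history['val_acc'], label='val acc')
plt.legend()
plt.show()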

Finally, let's do evaluation and prediction. Here is the evaluation part:

# start evaluation
score_eval = model.evaluate(x_test, y_test, verbose=0)
print(model.metrics_names[0], ' : ', score_eval[0], model.metrics_names[1], ' : ', score_eval[1])

The output is as follows:

loss  :  0.09391305059320293 acc  :  0.9702

The accuracy is 97.02%, which matches the val_acc printed at the end of training above.
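If you want to double-check this number yourself, here is a small sketch (an addition, not from the original article) that recomputes the test accuracy directly from the model's predictions, taking the argmax of each softmax output as the predicted class:

# manually recompute test accuracy from raw predictions
predictions = model.predict(x_test)
predicted_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(y_test, axis=1)
print('manual accuracy :', np.mean(predicted_labels == true_labels))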

Now for the prediction part. To set it up, first display one of the test images:

# select one for previewing
x_test_0 = x_test[0, :].reshape(1, 28*28)
y_test_0 = y_test[0, :]
plt.imshow(x_test_0.reshape([28, 28]))
plt.show()

The displayed image is shown below:

[Image: the first test sample, a handwritten digit 7]

Next, let's see how the model's prediction turns out:

# start prediction
prediction = model.predict(x_test_0)
print('ground truth :', np.argmax(y_test_0), 'prediction : ',
      prediction[0], 'network prediction : ', np.argmax(prediction[0]))

The program's output is shown below. The ground-truth label is 7, and position 7 of the softmax output vector also has the largest probability, so the network correctly predicts 7.

ground truth : 7 
prediction :  [3.5170558e-10 1.7722536e-08 3.8812636e-06 3.5927947e-06 8.3088009e-11 1.8278683e-09 8.7627998e-14 9.9999225e-01 1.3919356e-09 2.9425127e-07]
network prediction :  7

Finally, here is the complete code:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import numpy as np
import random
import keras
import matplotlib.pyplot as plt

from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.utils import np_utils

(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(x_train.shape, y_train.shape)

print(x_test.shape, y_test.shape)

x_train = x_train.reshape(x_train.shape[0], -1)
x_train = x_train.astype('float32')
x_test = x_test.reshape(x_test.shape[0], -1)
x_test = x_test.astype('float32')

# normalization to [0,1]
x_train /= 255
x_test /= 255

# hyper-parameters
batch_size = 128
number_classes = 10
number_epochs = 10

# convert class vectors to one-hot matrices for the softmax layer
y_train = np_utils.to_categorical(y_train, num_classes=number_classes)
y_test = np_utils.to_categorical(y_test, num_classes=number_classes)

print('y_train shape:', y_train.shape, 'y_test shape : ', y_test.shape)

# model definition
model = Sequential()

# hidden layer : 1st fully-connected layer
model.add(Dense(units=512, input_shape=(28*28,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))

# hidden layer : 2nd fully connected layer
model.add(Dense(units=512))
model.add(Activation('relu'))
model.add(Dropout(0.2))

# output layer
model.add(Dense(units=10))
model.add(Activation('softmax'))

# print the model summary
model.summary()

# start compilation
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# start training
history = model.fit(x_train, y_train, epochs=number_epochs, batch_size=batch_size,
                    verbose=1, validation_data=(x_test, y_test))

# print the history
print('model params:', history.params)

# start evaluation
score_eval = model.evaluate(x_test, y_test, verbose=0)
print(model.metrics_names[0], ' : ', score_eval[0], model.metrics_names[1], ' : ', score_eval[1])

# select one for previewing
x_test_0 = x_test[0, :].reshape(1, 28*28)
y_test_0 = y_test[0, :]
plt.imshow(x_test_0.reshape([28, 28]))
plt.show()

# start prediction
prediction = model.predict(x_test_0)
print('ground truth :', np.argmax(y_test_0), 'prediction : ',
      prediction[0], 'network prediction : ', np.argmax(prediction[0]))
