Graduation Project Exercise 03 — CIFAR-10 Recognition with VGG13

Table of Contents

  • Introduction to the CIFAR-10 dataset
  • The VGG family
  • Training results
  • Appendix: full code

Introduction to the CIFAR-10 dataset

CIFAR-10 is one of Keras's built-in datasets. It provides 50,000 labeled 32×32-pixel color images across ten classes for training, and another 10,000 for testing.
The ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Compared with handwritten-digit recognition, this task is more complex: the images are in color, and the network has more layers. Moreover, the objects in CIFAR-10 images would need a lot of fine detail to render clearly, but at a resolution of only 32×32 the subject is fairly blurry, which makes training harder.
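For reference, the integer labels 0–9 map to class names in this fixed order (the standard CIFAR-10 ordering); a quick lookup sketch:

```python
# CIFAR-10 label indices 0-9 map to these class names, in this order
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
print(classes[6])  # a label of 6 means "frog"
```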

The VGG family

The VGG family involves few hyperparameter choices: every convolution uses a 3×3 kernel with "same" padding, and every pooling layer is a 2×2 max pooling. This example uses VGG13, so named because it has 13 weight layers (10 convolutional plus 3 fully connected).
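A quick sanity check on the geometry: each of the five 2×2 max-pooling stages halves the spatial resolution, so a 32×32 input is reduced to 1×1 before Flatten (which is why the flattened feature vector has just 512 values, one per channel):

```python
size = 32
for _ in range(5):   # five max-pooling stages in VGG13
    size //= 2       # 2x2 pooling with stride 2 halves each spatial dimension
print(size)  # 1
```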
[Figure 1: VGG13 network structure]

Training results

Training uses the classic VGG13 structure with Batch Normalization added. To reduce overfitting, L2 regularization and Dropout are also applied. The learning rate is set to 0.01 and the maximum number of epochs to 10. The full VGG13 model has 9,951,306 parameters. Without GPU acceleration, training took about 6 hours in total and reached 79% accuracy on the training set (about 76% on the test set).
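The 9,951,306 figure can be re-derived by hand from the layer shapes in the appendix code (a sketch: each Conv2D contributes 3·3·in·out weights plus one bias per filter, each BatchNormalization carries 4 values per channel counting both trainable and non-trainable, and Flatten outputs 1·1·512 = 512 features):

```python
# (in_channels, out_channels) for the 10 conv layers, in order
convs = [(3, 64), (64, 64), (64, 128), (128, 128), (128, 256),
         (256, 256), (256, 512), (512, 512), (512, 512), (512, 512)]
conv_params = sum(3 * 3 * i * o + o for i, o in convs)

# one BatchNorm per conv, plus one after each of the two Dense(512) layers
bn_params = sum(4 * o for _, o in convs) + 2 * 4 * 512

# Dense head: 512 -> 512 -> 512 -> 10
dense = [(512, 512), (512, 512), (512, 10)]
dense_params = sum(i * o + o for i, o in dense)

print(conv_params + bn_params + dense_params)  # 9951306
```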

Epoch 1/10
1563/1563 [==============================] - 2205s 1s/step - loss: 4.9723 - accuracy: 0.2371 - val_loss: 4.3091 - val_accuracy: 0.2422
Epoch 2/10
1563/1563 [==============================] - 1694s 1s/step - loss: 3.8546 - accuracy: 0.3743 - val_loss: 3.3998 - val_accuracy: 0.4014
Epoch 3/10
1563/1563 [==============================] - 1658s 1s/step - loss: 3.0042 - accuracy: 0.4886 - val_loss: 2.6592 - val_accuracy: 0.5553
Epoch 4/10
1563/1563 [==============================] - 1727s 1s/step - loss: 2.4650 - accuracy: 0.5831 - val_loss: 2.2082 - val_accuracy: 0.6284
Epoch 5/10
1563/1563 [==============================] - 1703s 1s/step - loss: 2.0819 - accuracy: 0.6465 - val_loss: 1.8913 - val_accuracy: 0.6851
Epoch 6/10
1563/1563 [==============================] - 1685s 1s/step - loss: 1.8014 - accuracy: 0.6905 - val_loss: 1.6811 - val_accuracy: 0.7128
Epoch 7/10
1563/1563 [==============================] - 1710s 1s/step - loss: 1.5866 - accuracy: 0.7243 - val_loss: 1.4982 - val_accuracy: 0.7429
Epoch 8/10
1563/1563 [==============================] - 1752s 1s/step - loss: 1.4303 - accuracy: 0.7519 - val_loss: 1.3706 - val_accuracy: 0.7590
Epoch 9/10
1563/1563 [==============================] - 1749s 1s/step - loss: 1.3087 - accuracy: 0.7769 - val_loss: 1.3319 - val_accuracy: 0.7607
Epoch 10/10
1563/1563 [==============================] - 1694s 1s/step - loss: 1.2249 - accuracy: 0.7907 - val_loss: 1.2954 - val_accuracy: 0.7611

Visualization
[Figure 2: training and validation accuracy curves]
[Figure 3: training and validation loss curves]
Using model.predict on the first 100 test samples, 21 predictions disagree with the labels.
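The mismatch count comes from comparing argmax predictions against the true labels; a minimal sketch with made-up probabilities (the real code would use `model.predict(x_test[:100])` and the integer test labels instead of these toy arrays):

```python
import numpy as np

# toy stand-ins for model.predict output and the true labels
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1],
                  [0.2, 0.3, 0.5]])
labels = np.array([1, 2, 2])

pred = probs.argmax(axis=1)               # predicted class per sample
mismatches = int((pred != labels).sum())  # samples where prediction != label
print(mismatches)  # 1
```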

Appendix: full code

Import the required modules

import keras
from keras.models import Sequential
from keras.layers import Dense,Activation,Dropout,Flatten
from keras.layers import Conv2D,MaxPooling2D,BatchNormalization
from keras.datasets import cifar10
from keras.optimizers import SGD
from keras import regularizers
import matplotlib.pyplot as plt

Load the dataset and preprocess the data

(x_train,y_train),(x_test,y_test) = cifar10.load_data()
x_train = x_train.astype('float32') # cast the input images to float32
x_test = x_test.astype('float32')
y_train = keras.utils.to_categorical(y_train, 10) # one-hot encode the labels: shape (50000, 1) -> (50000, 10)
y_test = keras.utils.to_categorical(y_test, 10)
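Note that the code above casts the images to float32 but never scales the pixel values; dividing by 255 is a common extra step (an optional change, not part of the original run). Both the scaling and the one-hot encoding can be reproduced with plain NumPy:

```python
import numpy as np

x = np.array([[0, 128, 255]], dtype='float32')
x_scaled = x / 255.0                      # pixels now in [0, 1]

y = np.array([3, 0])
onehot = np.eye(10, dtype='float32')[y]   # same result as to_categorical(y, 10)
print(x_scaled.max(), int(onehot[0].argmax()))  # 1.0 3
```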

Build the VGG13 model

model = Sequential()
model.add(Conv2D(64,(3,3),padding='same',input_shape=(32,32,3),kernel_regularizer=regularizers.l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization()) # batch normalization
model.add(Dropout(0.3))
model.add(Conv2D(64,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(128,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(256,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(256,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(512,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(512,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512,(3,3),padding='same',kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(512,kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())

model.add(Dense(512, kernel_regularizer=regularizers.l2(0.0003)))
model.add(Activation('relu'))
model.add(BatchNormalization())

model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

Configure the training method

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
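With the legacy Keras SGD, `decay=1e-6` shrinks the learning rate per update as lr / (1 + decay · iterations); over this entire run (10 epochs × 1563 steps) the effect is tiny:

```python
lr0, decay = 0.01, 1e-6
steps = 10 * 1563                    # total parameter updates in this run
lr_final = lr0 / (1 + decay * steps) # legacy Keras time-based decay schedule
print(round(lr_final, 6))  # 0.009846
```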

Run training

history = model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_test,y_test), validation_freq=1)
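The 1563 steps per epoch seen in the training log follow directly from the 50,000 training images and batch_size=32 (the last batch is partial):

```python
import math

steps = math.ceil(50000 / 32)  # full batches plus one partial batch
print(steps)  # 1563
```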

Visualize the results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'g', label='Validation accuracy')
plt.title('Training and Validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'g', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.axis('on')
plt.show()
