CIFAR-10 is a dataset for general object recognition collected by Alex Krizhevsky and Ilya Sutskever, two of Geoffrey Hinton's students. In this article we will use PyTorch to build a convolutional neural network that classifies and recognizes the images in CIFAR-10.
The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, i.e. 6000 images per class.
The 10 classes are: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Each class contains 6000 images (5000 in the training set and 1000 in the test set), so the training set holds 5000×10=50000 images in total and the test set holds 10000 images. Let's first define the names of these classes:
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
len(classes)
Our task is to train a reasonably good model that can recognize an arbitrary image. In other words, we want a model such that, given any image, it accurately outputs the class that image belongs to.
First, let's download the dataset. Because the dataset is large, downloading it directly online would be slow, so we uploaded it to the 蓝桥云课 cloud server in advance and download CIFAR-10 from there.
import torch
import torchvision
import torchvision.transforms as transforms
import numpy as np
# Define the preprocessing pipeline
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# CIFAR10: 60000 32x32 color images in 10 classes, 6000 images per class
# root: where the dataset is stored
# train=True: load the training split (train=False loads the test split)
# download=True: download the dataset only if it is not already present locally
# transform: the preprocessing pipeline, so the returned dataset is already preprocessed
train_dataset = torchvision.datasets.CIFAR10(root='./', train=True,
                                             download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./', train=False,
                                            download=True, transform=transform)
print("训练集的图像数量为:", len(train_dataset))
print("测试集的图像数量为", len(test_dataset))
The output is as follows:
Files already downloaded and verified
Files already downloaded and verified
Number of images in the training set: 50000
Number of images in the test set: 10000
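Each element of the dataset is an (image, label) pair, and the image is already the preprocessed tensor. As a small optional sanity check (not part of the original experiment), we can inspect one sample; note that after ToTensor and Normalize the pixel values lie within [-1, 1]:
# Optional sanity check on a single training sample
img, label = train_dataset[0]
print(img.shape)                            # torch.Size([3, 32, 32]) — channels x height x width
print(classes[label])                       # the class name of this sample
print(img.min().item(), img.max().item())   # values within [-1, 1] because of Normalize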
batch_size = 4  # batch size
# shuffle=True: shuffle the data before each epoch, which makes the model more robust
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
                                          shuffle=False)
train_loader, test_loader
<torch.utils.data.dataloader.DataLoader object at 0x0000022A8C87AF88>
<torch.utils.data.dataloader.DataLoader object at 0x0000022A8CB63808>
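With a batch size of 4, the training loader yields 50000 / 4 = 12500 batches per epoch and the test loader yields 10000 / 4 = 2500 batches. The short check below is optional (not in the original code); the number 12500 is exactly the Step count that appears in the training log later:
# Optional check: number of batches produced by each loader
print(len(train_loader))  # 12500 = 50000 / 4
print(len(test_loader))   # 2500  = 10000 / 4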
import matplotlib.pyplot as plt
%matplotlib inline
def imshow(img):
    # The images produced by the loader are normalized, so we first
    # un-normalize them back to their original range
    img = img / 2 + 0.5
    # Convert the image from a Tensor to a NumPy array
    npimg = img.numpy()
    # The tensor layout is C×H×W, while plt expects H×W×C,
    # so the dimensions need to be permuted
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()
# Get a random batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
# Display these images
imshow(torchvision.utils.make_grid(images))
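To check that the displayed images and their labels line up, we can also print the class names of this batch. This is a small optional addition that reuses the classes tuple defined earlier:
# Print the class names of the four images shown above
print(' '.join(classes[labels[j]] for j in range(batch_size)))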
import torch.nn.functional as F
import torch.nn as nn
# Build the network model
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        # The network input has three channels (RGB)
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        # There are 10 classes, so the model has 10 output nodes
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # -> n, 3, 32, 32
        # Pass the data through each layer and apply the activation function
        x = self.pool(F.relu(self.conv1(x)))  # -> n, 6, 14, 14
        x = self.pool(F.relu(self.conv2(x)))  # -> n, 16, 5, 5
        x = x.view(-1, 16 * 5 * 5)            # -> n, 400
        x = F.relu(self.fc1(x))               # -> n, 120
        x = F.relu(self.fc2(x))               # -> n, 84
        x = self.fc3(x)                       # -> n, 10
        return x
# Use the GPU if it is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = ConvNet().to(device)
model
The output is as follows:
ConvNet(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
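It is worth verifying the shape bookkeeping in forward: two 5×5 convolutions and two 2×2 max-pool layers turn a 32×32 input into 16 feature maps of size 5×5, which is where the 16 * 5 * 5 = 400 input size of fc1 comes from. A quick check with a random batch (not part of the original lab code):
# Pass a random batch of 4 fake images through the model to check the output shape
dummy = torch.randn(4, 3, 32, 32).to(device)
print(model(dummy).shape)  # torch.Size([4, 10]) — one score per class for each image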
learning_rate = 0.001
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
criterion, optimizer
The output is as follows:
CrossEntropyLoss() SGD (
Parameter Group 0
dampening: 0
lr: 0.001
maximize: False
momentum: 0
nesterov: False
weight_decay: 0
)
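Note that nn.CrossEntropyLoss applies LogSoftmax internally and expects raw logits together with integer class indices as targets, which is why the model's forward ends with fc3 and no softmax layer. A minimal illustration with hypothetical values (not from the lab):
# CrossEntropyLoss takes raw logits of shape (N, 10) and class indices of shape (N,)
logits = torch.tensor([[2.0, 0.5, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
target = torch.tensor([0])                # the true class index
print(criterion(logits, target).item())   # a small positive loss value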
num_epochs = 5
# Number of batches (steps) per epoch
n_total_steps = len(train_loader)
print("Start training....")
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Each batch has shape [4, 3, 32, 32]
        # Move the data to the device the model runs on
        images = images.to(device)
        labels = labels.to(device)
        # Forward pass: compute the model's predictions
        outputs = model(images)
        # Compute the loss from the predictions
        loss = criterion(outputs, labels)
        # The usual three steps: clear the gradients, backpropagate, update the parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i+1) % 2000 == 0:
            print(
                f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Finished Training')
The output is as follows:
Start training....
Epoch [1/5], Step [2000/12500], Loss: 2.3119
Epoch [1/5], Step [4000/12500], Loss: 2.2913
Epoch [1/5], Step [6000/12500], Loss: 2.2765
Epoch [1/5], Step [8000/12500], Loss: 2.2987
Epoch [1/5], Step [10000/12500], Loss: 2.1554
Epoch [1/5], Step [12000/12500], Loss: 1.9108
Epoch [2/5], Step [2000/12500], Loss: 2.2335
Epoch [2/5], Step [4000/12500], Loss: 2.5870
Epoch [2/5], Step [6000/12500], Loss: 1.8595
Epoch [2/5], Step [8000/12500], Loss: 2.4813
Epoch [2/5], Step [10000/12500], Loss: 1.4165
Epoch [2/5], Step [12000/12500], Loss: 1.2263
Epoch [3/5], Step [2000/12500], Loss: 2.2872
Epoch [3/5], Step [4000/12500], Loss: 1.0829
Epoch [3/5], Step [6000/12500], Loss: 1.4678
Epoch [3/5], Step [8000/12500], Loss: 1.8610
Epoch [3/5], Step [10000/12500], Loss: 2.0595
Epoch [3/5], Step [12000/12500], Loss: 1.1407
Epoch [4/5], Step [2000/12500], Loss: 0.9594
Epoch [4/5], Step [4000/12500], Loss: 1.6792
Epoch [4/5], Step [6000/12500], Loss: 1.7309
Epoch [4/5], Step [8000/12500], Loss: 2.1542
Epoch [4/5], Step [10000/12500], Loss: 1.4059
Epoch [4/5], Step [12000/12500], Loss: 0.8401
Epoch [5/5], Step [2000/12500], Loss: 1.2933
Epoch [5/5], Step [4000/12500], Loss: 2.3831
Epoch [5/5], Step [6000/12500], Loss: 0.9179
Epoch [5/5], Step [8000/12500], Loss: 2.2616
Epoch [5/5], Step [10000/12500], Loss: 1.4560
Epoch [5/5], Step [12000/12500], Loss: 1.7991
Finished Training
PATH = './cnn.pth'
torch.save(model.state_dict(), PATH)
print("The model have been saved!")
# Rebuild the model on the same device and load the saved parameters into it
new_model = ConvNet().to(device)
new_model.load_state_dict(torch.load(PATH, map_location=device))
new_model.eval()
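Before evaluating on the whole test set, we can try the reloaded model on a single test image. This is a small optional example; for a network trained for only a few epochs the prediction may well be wrong:
# Predict the class of one test image with the reloaded model
img, label = test_dataset[0]
with torch.no_grad():
    output = new_model(img.unsqueeze(0).to(device))  # add a batch dimension: [1, 3, 32, 32]
    pred = output.argmax(1).item()
print('true:', classes[label], ' predicted:', classes[pred])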
with torch.no_grad():
    # Count the correctly predicted images and the total number of predicted images
    n_correct = 0
    n_samples = 0
    # For each class, count the correctly predicted images and the actual number of images in that class
    n_class_correct = [0 for i in range(10)]
    n_class_samples = [0 for i in range(10)]
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = new_model(images)
        # max returns the index of the largest of the 10 class scores, i.e. the predicted class
        _, predicted = torch.max(outputs, 1)
        n_samples += labels.size(0)
        # Count correct predictions by comparing the predictions with the true labels
        n_correct += (predicted == labels).sum().item()
        # Count correct predictions for each class
        for i in range(labels.size(0)):
            label = labels[i]
            pred = predicted[i]
            if (label == pred):
                n_class_correct[label] += 1
            n_class_samples[label] += 1

# Print the overall accuracy of the model
acc = 100.0 * n_correct / n_samples
print(f'Accuracy of the network: {acc} %')

# Print the per-class accuracy of the model
for i in range(10):
    acc = 100.0 * n_class_correct[i] / n_class_samples[i]
    print(f'Accuracy of {classes[i]}: {acc} %')
The output is as follows:
The model has been saved!
Accuracy of the network: 48.39 %
Accuracy of plane: 50.7 %
Accuracy of car: 84.4 %
Accuracy of bird: 20.0 %
Accuracy of cat: 22.8 %
Accuracy of deer: 50.2 %
Accuracy of dog: 52.3 %
Accuracy of frog: 60.8 %
Accuracy of horse: 52.1 %
Accuracy of ship: 54.2 %
Accuracy of truck: 36.4 %