PyTorch 1.7 Tutorial Experiment: Training a Classifier

Recently I wanted to map out roughly what I have learned so far. I found that, apart from a good deal of engineering experience, my study of model algorithms is still not deep enough, and it mostly revolves around object detection (anchor-based and anchor-free approaches, plus transformers); other areas I have barely touched and do not understand clearly. What's more, building an object detection network entirely on my own turned out to be too hard: beyond ordinary feature extraction, the detection head involves far more components than I can handle yet, so the plan of understanding everything from the ground up has been shelved for now.

While exploring, I was working with PyTorch, so I picked up some basics of network construction: small networks and small datasets, which are well suited for understanding the basic building blocks of a network. Following the official PyTorch 1.7 tutorial, I ran a few of the examples myself; the complete code is posted here for reference and further study.

PyTorch 1.7 official tutorial: https://pytorch.apachecn.org/

The first example is training a classifier, i.e., image classification.
I find that starting with a classifier helps when learning more complex deep networks: a classification network is simple in both function and structure, yet it can be seen as the foundation for building other networks, since more complex networks essentially add further functionality on top of this basic structure.

One thing to stress before running the code: set up the environment first. That mainly means installing the PyTorch-related packages and making sure the GPU can actually be used.
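
A minimal sanity check, assuming PyTorch and torchvision are already installed for your setup:

import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # installed versions, e.g. 1.7.x
print(torch.cuda.is_available())                    # True means a usable GPU was found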

The dataset is CIFAR-10, which can be downloaded automatically and is fairly small. Below are some of its images after visualization:
[Figure 1: visualized CIFAR-10 sample images]
Here is the complete, runnable code:

# -*- coding: utf-8 -*-
import torch
import torchvision
import torchvision.transforms as transforms

import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# Image preprocessing: convert to tensor and normalize
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=0)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=0)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# get some random training images to display
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() only works on older PyTorch versions

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))
# Build the network
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
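        # 32x32 input -> conv1(5x5) -> 28x28 -> pool -> 14x14 -> conv2(5x5) -> 10x10 -> pool -> 5x5, hence 16 * 5 * 5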
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1) # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# Training loop
for epoch in range(12):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
# Display some test images
dataiter = iter(testloader)
images, labels = next(dataiter)
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))
# Forward pass to test the network
net = Net()
net.load_state_dict(torch.load(PATH))

outputs = net(images)


_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}'
                              for j in range(4)))

correct = 0
total = 0
# since we're not training, we don't need to calculate gradients for the outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')


# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1


# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
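
The device is defined at the very end, but the training loop above actually runs on the CPU. As a rough sketch (not part of the original tutorial script), moving the same model and batches onto the GPU would look something like this:

net = Net().to(device)                      # move the model parameters to the GPU
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(12):
    for inputs, labels in trainloader:
        # move each batch onto the same device as the model
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()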


The training process is shown below (I ran it in a non-standard terminal program, hence the unusual prompt):

(pytorch) λ python fenlei.py
Files already downloaded and verified
Files already downloaded and verified

At this point the image display function is called on a training batch; the result is:
[Figure 2: a grid of four training images]

horse deer  cat   cat
[1,  2000] loss: 2.172
[1,  4000] loss: 1.906
[1,  6000] loss: 1.705
[1,  8000] loss: 1.613
[1, 10000] loss: 1.555
[1, 12000] loss: 1.521
[2,  2000] loss: 1.436
[2,  4000] loss: 1.391
[2,  6000] loss: 1.388
[2,  8000] loss: 1.356
[2, 10000] loss: 1.338
[2, 12000] loss: 1.301
[3,  2000] loss: 1.252
[3,  4000] loss: 1.222
[3,  6000] loss: 1.219
[3,  8000] loss: 1.216
[3, 10000] loss: 1.211
[3, 12000] loss: 1.189
[4,  2000] loss: 1.113
[4,  4000] loss: 1.124
[4,  6000] loss: 1.123
[4,  8000] loss: 1.125
[4, 10000] loss: 1.117
[4, 12000] loss: 1.104
[5,  2000] loss: 1.049
[5,  4000] loss: 1.037
[5,  6000] loss: 1.027
[5,  8000] loss: 1.033
[5, 10000] loss: 1.062
[5, 12000] loss: 1.056
[6,  2000] loss: 0.942
[6,  4000] loss: 0.989
[6,  6000] loss: 0.994
[6,  8000] loss: 1.004
[6, 10000] loss: 0.996
[6, 12000] loss: 0.992
[7,  2000] loss: 0.903
[7,  4000] loss: 0.931
[7,  6000] loss: 0.950
[7,  8000] loss: 0.954
[7, 10000] loss: 0.945
[7, 12000] loss: 0.956
[8,  2000] loss: 0.850
[8,  4000] loss: 0.882
[8,  6000] loss: 0.907
[8,  8000] loss: 0.913
[8, 10000] loss: 0.919
[8, 12000] loss: 0.911
[9,  2000] loss: 0.820
[9,  4000] loss: 0.817
[9,  6000] loss: 0.880
[9,  8000] loss: 0.859
[9, 10000] loss: 0.897
[9, 12000] loss: 0.899
[10,  2000] loss: 0.782
[10,  4000] loss: 0.798
[10,  6000] loss: 0.839
[10,  8000] loss: 0.856
[10, 10000] loss: 0.854
[10, 12000] loss: 0.883
[11,  2000] loss: 0.750
[11,  4000] loss: 0.779
[11,  6000] loss: 0.800
[11,  8000] loss: 0.823
[11, 10000] loss: 0.846
[11, 12000] loss: 0.860
[12,  2000] loss: 0.729
[12,  4000] loss: 0.764
[12,  6000] loss: 0.780
[12,  8000] loss: 0.783
[12, 10000] loss: 0.804
[12, 12000] loss: 0.821
Finished Training

[Figure 3: a grid of four test images]

GroundTruth:  cat   ship  ship  plane
Predicted:  cat   car   truck plane
Accuracy of the network on the 10000 test images: 61 %
Accuracy for class: plane is 68.4 %
Accuracy for class: car   is 83.1 %
Accuracy for class: bird  is 48.1 %
Accuracy for class: cat   is 45.6 %
Accuracy for class: deer  is 59.8 %
Accuracy for class: dog   is 44.3 %
Accuracy for class: frog  is 71.6 %
Accuracy for class: horse is 62.3 %
Accuracy for class: ship  is 67.2 %
Accuracy for class: truck is 65.2 %
cuda:0

That is the official image classification training example. It covers:

loading the dataset
building the model network
training and evaluating the network

The content is simple, and it gives a rough sense of how a deep learning network works.
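
As a small follow-up that is not in the tutorial, here is a rough sketch of how the saved weights could be reused to classify a single image. It reuses the Net class and the classes tuple from the script above; 'my_image.jpg' is only a placeholder path:

from PIL import Image

# same preprocessing as during training, plus a resize to the 32x32 CIFAR-10 size
infer_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

net = Net()
net.load_state_dict(torch.load('./cifar_net.pth'))
net.eval()  # switch to evaluation mode

img = infer_transform(Image.open('my_image.jpg').convert('RGB')).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    pred = net(img).argmax(dim=1).item()
print('Predicted class:', classes[pred])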
