[PyTorch Walkthrough (5/7)] Training and Models


1. Overview

        Training a model is an iterative process. In each iteration (called an epoch), the model makes a guess about the output, calculates the error in its guess (the loss), collects the derivatives of the error with respect to its parameters, and optimizes those parameters using gradient descent.
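        Conceptually, every update follows this guess → loss → gradient → update pattern. Below is a minimal, hypothetical sketch of a single gradient-descent step on one parameter; it is only an illustration and is not part of the tutorial code.

import torch

w = torch.tensor([1.0], requires_grad=True)      # a single trainable parameter
x, y = torch.tensor([2.0]), torch.tensor([6.0])  # one input/target pair

pred = w * x                       # the model's guess
loss = (pred - y).pow(2).mean()    # error (loss) of the guess
loss.backward()                    # collect d(loss)/d(w)
with torch.no_grad():
    w -= 0.1 * w.grad              # gradient-descent update of the parameter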

We start by loading the code from the previous sections.

%matplotlib inline
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()

2. Setting Hyperparameters

        Hyperparameters are adjustable parameters that let you control the model optimization process. Different hyperparameter values affect model training and accuracy.

        We define the following hyperparameters for training:

  • Number of epochs: the number of times the entire training dataset is passed through the network.
  • Batch size: the number of data samples the model sees before its parameters are updated. The number of iterations per epoch is the number of batches needed to cover the full dataset.
  • Learning rate: the size of the step the model takes at each update while searching for the weights that yield higher accuracy. Smaller values mean the model takes longer to find the best weights, while larger values may cause it to overshoot and miss them, producing unpredictable behavior during training.
learning_rate = 1e-3
batch_size = 64
epochs = 5

        Once we have set our hyperparameters, we can train and optimize the model with an optimization loop. Each iteration of the optimization loop is called an epoch and consists of two main parts:

  • The training loop: iterates over the training dataset and tries to converge to optimal parameters.
  • The validation/test loop: iterates over the test dataset to check whether model performance is improving.

3. Adding a Loss Function

        When presented with some training data, our untrained network is unlikely to give the correct answer. The loss function measures how far the obtained result is from the target value, and it is this loss that we want to minimize during training. To compute the loss, we make a prediction using the inputs of a given data sample and compare it against the true label.

        Common loss functions include:

  • nn.MSELoss (mean squared error) for regression tasks
  • nn.NLLLoss (negative log likelihood) for classification tasks
  • nn.CrossEntropyLoss, which combines nn.LogSoftmax and nn.NLLLoss

        We pass our model's output logits to nn.CrossEntropyLoss, which normalizes the logits and computes the prediction error.

# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()
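
        As a quick sanity check, here is a minimal sketch (reusing the torch and nn imports above, with hypothetical dummy tensors) showing that nn.CrossEntropyLoss applied to raw logits matches nn.NLLLoss applied to the output of nn.LogSoftmax:

# Dummy tensors for illustration only: a batch of 4 samples, 10 classes
dummy_logits = torch.randn(4, 10)
dummy_labels = torch.tensor([1, 5, 3, 7])

ce = nn.CrossEntropyLoss()(dummy_logits, dummy_labels)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(dummy_logits), dummy_labels)
print(torch.allclose(ce, nll))  # True: CrossEntropyLoss = LogSoftmax + NLLLoss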

4. Setting Up the Optimizer

        Optimization is the process of adjusting model parameters to reduce model error in each training step. Optimization algorithms define how this process is performed. All optimization logic is encapsulated in the optimizer object. Here we use the stochastic gradient descent (SGD) optimizer; PyTorch also provides many other optimizers, such as Adam and RMSprop, that work better for different kinds of models and data.

        We initialize the optimizer by registering the model parameters that need to be trained and passing in the learning rate hyperparameter.

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
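
        Swapping in one of the other optimizers mentioned above is a one-line change. The lines below are illustrative alternatives only and are not used in the training run that follows:

# Illustrative alternatives (not used in the run below):
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)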

Inside the training loop, optimization happens in three steps (you will see them again in the train_loop below):

  • Call optimizer.zero_grad() to reset the gradients of the model parameters. Gradients accumulate by default; to prevent double counting, we explicitly zero them at each iteration.
  • Backpropagate the prediction loss with a call to loss.backward(). PyTorch stores the gradient of the loss with respect to each parameter.
  • Once we have the gradients, call optimizer.step() to adjust the parameters using the gradients collected in the backward pass.

5. Full Implementation

We define train_loop, which loops over our optimization code, and test_loop, which evaluates the model's performance against the test data.

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):        
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)
        
        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    test_loss, correct = 0, 0

    # Evaluate with gradient tracking disabled; no backward pass is needed here
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    # Average loss and accuracy over the entire test set
    test_loss /= size
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

        We initialize the loss function and optimizer and pass them to train_loop and test_loop. Feel free to increase the number of epochs to track the model's improving performance.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
-------------------------------
loss: 2.307260  [    0/60000]
loss: 2.305284  [ 6400/60000]
loss: 2.293966  [12800/60000]
loss: 2.291592  [19200/60000]
loss: 2.288022  [25600/60000]
loss: 2.259277  [32000/60000]
loss: 2.277950  [38400/60000]
loss: 2.252569  [44800/60000]
loss: 2.238333  [51200/60000]
loss: 2.239141  [57600/60000]
Test Error: 
 Accuracy: 27.5%, Avg loss: 0.035050 

Epoch 2
-------------------------------
loss: 2.222609  [    0/60000]
loss: 2.244805  [ 6400/60000]
loss: 2.209550  [12800/60000]
loss: 2.227453  [19200/60000]
loss: 2.217051  [25600/60000]
loss: 2.162092  [32000/60000]
loss: 2.206926  [38400/60000]
loss: 2.151579  [44800/60000]
loss: 2.117667  [51200/60000]
loss: 2.143689  [57600/60000]
Test Error: 
 Accuracy: 38.9%, Avg loss: 0.033368 

Epoch 3
-------------------------------
loss: 2.102783  [    0/60000]
loss: 2.154025  [ 6400/60000]
loss: 2.076486  [12800/60000]
loss: 2.124048  [19200/60000]
loss: 2.107713  [25600/60000]
loss: 2.014179  [32000/60000]
loss: 2.090220  [38400/60000]
loss: 1.989485  [44800/60000]
loss: 1.933911  [51200/60000]
loss: 2.002917  [57600/60000]
Test Error: 
 Accuracy: 41.2%, Avg loss: 0.030885 

Epoch 4
-------------------------------
loss: 1.926293  [    0/60000]
loss: 2.019496  [ 6400/60000]
loss: 1.888668  [12800/60000]
loss: 1.987653  [19200/60000]
loss: 1.968171  [25600/60000]
loss: 1.838344  [32000/60000]
loss: 1.951870  [38400/60000]
loss: 1.808960  [44800/60000]
loss: 1.749038  [51200/60000]
loss: 1.868777  [57600/60000]
Test Error: 
 Accuracy: 44.4%, Avg loss: 0.028537 

Epoch 5
-------------------------------
loss: 1.754023  [    0/60000]
loss: 1.889865  [ 6400/60000]
loss: 1.724985  [12800/60000]
loss: 1.880932  [19200/60000]
loss: 1.852289  [25600/60000]
loss: 1.703095  [32000/60000]
loss: 1.850078  [38400/60000]
loss: 1.679640  [44800/60000]
loss: 1.618462  [51200/60000]
loss: 1.781099  [57600/60000]
Test Error: 
 Accuracy: 46.4%, Avg loss: 0.026904 

Epoch 6
-------------------------------
loss: 1.629323  [    0/60000]
loss: 1.794621  [ 6400/60000]
loss: 1.609603  [12800/60000]
loss: 1.806047  [19200/60000]
loss: 1.771073  [25600/60000]
loss: 1.610854  [32000/60000]
loss: 1.782800  [38400/60000]
loss: 1.593032  [44800/60000]
loss: 1.530435  [51200/60000]
loss: 1.721836  [57600/60000]
Test Error: 
 Accuracy: 47.5%, Avg loss: 0.025738 

Epoch 7
-------------------------------
loss: 1.541017  [    0/60000]
loss: 1.723998  [ 6400/60000]
loss: 1.525540  [12800/60000]
loss: 1.745950  [19200/60000]
loss: 1.714844  [25600/60000]
loss: 1.542636  [32000/60000]
loss: 1.735072  [38400/60000]
loss: 1.529822  [44800/60000]
loss: 1.467118  [51200/60000]
loss: 1.675812  [57600/60000]
Test Error: 
 Accuracy: 48.3%, Avg loss: 0.024844 

Epoch 8
-------------------------------
loss: 1.474333  [    0/60000]
loss: 1.669000  [ 6400/60000]
loss: 1.460421  [12800/60000]
loss: 1.694097  [19200/60000]
loss: 1.674764  [25600/60000]
loss: 1.487773  [32000/60000]
loss: 1.699166  [38400/60000]
loss: 1.481064  [44800/60000]
loss: 1.419311  [51200/60000]
loss: 1.638599  [57600/60000]
Test Error: 
 Accuracy: 48.7%, Avg loss: 0.024137 

Epoch 9
-------------------------------
loss: 1.420322  [    0/60000]
loss: 1.625176  [ 6400/60000]
loss: 1.408073  [12800/60000]
loss: 1.649715  [19200/60000]
loss: 1.644693  [25600/60000]
loss: 1.443653  [32000/60000]
loss: 1.671596  [38400/60000]
loss: 1.443777  [44800/60000]
loss: 1.382555  [51200/60000]
loss: 1.608089  [57600/60000]
Test Error: 
 Accuracy: 49.1%, Avg loss: 0.023570 

Epoch 10
-------------------------------
loss: 1.375013  [    0/60000]
loss: 1.588062  [ 6400/60000]
loss: 1.364595  [12800/60000]
loss: 1.612044  [19200/60000]
loss: 1.621220  [25600/60000]
loss: 1.407904  [32000/60000]
loss: 1.649211  [38400/60000]
loss: 1.415225  [44800/60000]
loss: 1.353849  [51200/60000]
loss: 1.582835  [57600/60000]
Test Error: 
 Accuracy: 49.5%, Avg loss: 0.023104 

Done!

        You might have noticed that the model is not very good at first (that's OK!). Try running the loop for more epochs or adjusting learning_rate to a larger value. It may also be that the model configuration we chose is not the best one for this kind of problem.
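
        For example, a hypothetical tweak along those lines would be to raise the learning rate and rerun the epoch loop above; whether it actually helps depends on the run:

# Hypothetical tweak: a larger learning rate, then rerun the epoch loop above
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)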

6. Saving the Model

        When you are satisfied with the model's performance, you can save it with torch.save. PyTorch models store the learned parameters in an internal state dictionary called state_dict; it can be persisted with the torch.save method.

torch.save(model.state_dict(), "data/model.pth")
print("Saved PyTorch Model State to model.pth")
