[ChatGPT Productivity in Practice] MNIST Handwritten Digit Recognition with ResNet50

Prompt

Please write training code for an image classification model in the PyTorch framework: use the pretrained ResNet50 model from torchvision as the model backbone, the MNIST dataset from torchvision as the training data, cross-entropy as the loss function, and accuracy as the evaluation metric. Set the batch size to 32 and the number of epochs to 10, and optimize with the Adam optimizer. Finally, plot the accuracy curve and the loss curve over training. Use tqdm to show the iteration progress within each epoch, and train on the GPU. In the transforms, take care to adapt the channel depth of the input images.

This works with PyTorch versions released during 2021 and 2022, but training accuracy only reaches around 0.8 (see the comment near the optimizer below for a likely cause).
If matplotlib reports an error when plotting, a web search will usually turn up a fix; recent releases may have changed behavior. One common cause, logging tensors instead of plain floats, is handled below with .item().

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
from tqdm import tqdm

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# Define transformations for the dataset
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: torch.cat([x, x, x], 0)), # replicate the grayscale channel to make a 3-channel image
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
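
# Note: pretrained ImageNet weights conventionally pair with Normalize using
# mean (0.485, 0.456, 0.406) and std (0.229, 0.224, 0.225); the (0.5, ...)
# values above still train, just with slightly mismatched input statistics.
# transforms.Grayscale(num_output_channels=3) placed before ToTensor() is an
# equivalent way to obtain the three channels ResNet50 expects.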

# Load the MNIST dataset
trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32,
                                          shuffle=True, num_workers=0)
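# num_workers=0 keeps data loading in the main process; raising it can speed
# up training on platforms that support DataLoader worker processes.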

# Load the pre-trained ResNet50 model
model = torchvision.models.resnet50(pretrained=True)
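# pretrained=True matches the 2021-2022 torchvision releases noted above;
# torchvision 0.13+ deprecates it in favor of the weights= argument.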

# Freeze all the layers; the replacement fc layer created below has
# requires_grad=True by default, so it is the only part that will be trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with a new one that outputs 10 classes
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 10)
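# For ResNet50, fc.in_features is 2048, so this is a 2048 -> 10 classification head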


model = model.to(device)
# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.01)
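# Note: lr=0.01 is fairly aggressive for Adam; if accuracy plateaus around
# 0.8 (as noted above), lowering it to about 1e-3 is a common first fix.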

# Train the model
num_epochs = 10
train_loss = []
train_acc = []
for epoch in range(num_epochs):
    running_loss = 0.0
    running_corrects = 0
    with tqdm(total=len(trainloader), desc=f'Epoch {epoch + 1}/{num_epochs}', unit='batch', dynamic_ncols=True) as t:
        for i, data in enumerate(trainloader, 0):
            # Get the inputs
            inputs, labels = data[0].to(device), data[1].to(device)

            # Zero the parameter gradients
            optimizer.zero_grad()

            # Forward pass
            outputs = model(inputs)
            loss = criterion(outputs, labels)

            # Backward pass and optimize
            loss.backward()
            optimizer.step()

            # Update statistics
            _, preds = torch.max(outputs, 1)
            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

            # Update tqdm
            t.update()

    # Calculate statistics for the epoch
    epoch_loss = running_loss / len(trainset)
    epoch_acc = (running_corrects.double() / len(trainset)).item()  # .item() yields a plain float for printing/plotting

    # Print statistics for the epoch
    print('Epoch [{}/{}], Loss: {:.4f}, Acc: {:.4f}'
          .format(epoch+1, num_epochs, epoch_loss, epoch_acc))

    # Save statistics for the epoch
    train_loss.append(epoch_loss)
    train_acc.append(epoch_acc)

# Plot the training loss and accuracy curves
plt.figure()
plt.plot(range(1, num_epochs + 1), train_loss)  # x-axis in epochs, starting at 1
plt.title('Training Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.savefig('train_loss.png')

plt.figure()
plt.plot(range(1, num_epochs + 1), train_acc)
plt.title('Training Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.savefig('train_acc.png')
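
The script above only tracks training accuracy. As a minimal sketch (reusing the transform, model, and device defined above; the testset/testloader names and the batch size of 32 are my additions for illustration), accuracy on the MNIST test split can be measured like this:

# Evaluate on the MNIST test split
model.eval()  # switch BatchNorm layers to inference mode
testset = torchvision.datasets.MNIST(root='./data', train=False,
                                     download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=32,
                                         shuffle=False, num_workers=0)
correct = 0
with torch.no_grad():  # no gradients needed for evaluation
    for inputs, labels in testloader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        correct += (preds == labels).sum().item()
print('Test Acc: {:.4f}'.format(correct / len(testset)))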
