Implementing LeNet on the MNIST dataset in PyTorch

LeNet implementation

  • Code
  • Results

Code

# Complete LeNet training script

# Imports
import torch
import torchvision
from torchvision import transforms as tf
from torch.utils.data import DataLoader
from torch import nn, optim
from tensorboardX import SummaryWriter
from PIL import Image

writer = SummaryWriter('logs/3')  # TensorBoard log directory (forward slashes also work on Windows)

# Load data
transform = tf.ToTensor()
train_data = torchvision.datasets.MNIST(root='../dataset', train=True, transform=transform, download=True)
test_data = torchvision.datasets.MNIST(root='../dataset', train=False, transform=transform, download=True)

print('number of training samples: %d, test samples: %d' % (len(train_data), len(test_data)))
batch_size = 64
train_iter = DataLoader(train_data,batch_size=batch_size,shuffle=True,num_workers=0)
test_iter = DataLoader(test_data,batch_size=batch_size,shuffle=True,num_workers=0)


# Model definition
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=0),   # 28x28 -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),                               # -> 12x12
            nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0),  # -> 8x8
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),                               # -> 4x4
            nn.Flatten(),                                                                   # 16 * 4 * 4 = 256
            nn.Linear(in_features=256, out_features=120),
            nn.ReLU(),
            nn.Linear(in_features=120, out_features=84),
            nn.ReLU(),
            nn.Linear(in_features=84, out_features=32),  # extra FC layer; classic LeNet-5 goes straight from 84 to 10
            nn.ReLU(),
            nn.Linear(in_features=32, out_features=10),
        )

    def forward(self, x):
        return self.model(x)

net = LeNet()
if torch.cuda.is_available():
    net = net.cuda()  # for GPU training, the model, loss function, and data must all be moved with .cuda()
# Loss function
loss = nn.CrossEntropyLoss()
if torch.cuda.is_available():
    loss = loss.cuda()
# Optimizer
learning_rate = 0.1
optimizer = optim.SGD(net.parameters(), lr=learning_rate)

# Training
epochs = 10
total_train_step = 0

for epoch in range(epochs):
    print('-' * 10, 'epoch %d starts' % (epoch + 1), '-' * 10)

    # train
    net.train()
    for img, label in train_iter:
        if torch.cuda.is_available():
            img, label = img.cuda(), label.cuda()
        l = loss(net(img), label)

        # optimize
        optimizer.zero_grad()
        l.backward()
        optimizer.step()
        total_train_step += 1
        # log against the running step count (a fixed global_step would overwrite the same point)
        writer.add_scalar('train_loss', l.item(), total_train_step)
        if total_train_step % 100 == 0:
            print('train_step:%d loss:%.6f' % (total_train_step, l.item()))

    # evaluate
    net.eval()
    total_test_loss = 0
    total_test_acc = 0
    with torch.no_grad():
        for img, label in test_iter:
            if torch.cuda.is_available():
                img, label = img.cuda(), label.cuda()
            outputs = net(img)
            test_l = loss(outputs, label)
            total_test_loss += test_l.item()
            acc = (outputs.argmax(1) == label).sum()  # number of correct predictions in this batch
            total_test_acc += acc.item()
    writer.add_scalar('test_accuracy', total_test_acc / len(test_data), epoch)
    print('total loss on the test set: %.6f' % total_test_loss)
    print('accuracy on the test set: %.6f' % (total_test_acc / len(test_data)))

writer.close()
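A detail that often trips people up is the `in_features=256` of the first fully connected layer. For a 28×28 MNIST input, the two 5×5 convolutions (no padding) and two 2×2 poolings shrink the feature map to 16×4×4 = 256. A quick sketch to verify this, rebuilding only the convolutional stem of the model above:

```python
import torch
from torch import nn

# Only the convolutional stem of the LeNet above, to check the flatten size.
stem = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),   # 28x28 -> 24x24
    nn.MaxPool2d(2, 2),               # 24x24 -> 12x12
    nn.Conv2d(6, 16, kernel_size=5),  # 12x12 -> 8x8
    nn.MaxPool2d(2, 2),               # 8x8 -> 4x4
    nn.Flatten(),                     # 16 * 4 * 4 = 256 features
)

print(stem(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 256])
```

If you change the input resolution or kernel sizes, run this check again and adjust `in_features` to whatever `Flatten` produces.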

Results

number of training samples: 60000, test samples: 10000
---------- epoch 1 starts ----------
train_step:100 loss:2.298241
train_step:200 loss:2.043258
train_step:300 loss:0.473294
train_step:400 loss:0.333394
train_step:500 loss:0.263733
train_step:600 loss:0.231876
train_step:700 loss:0.085514
train_step:800 loss:0.145711
train_step:900 loss:0.085465
total loss on the test set: 16.788166
accuracy on the test set: 0.967400
---------- epoch 2 starts ----------
train_step:1000 loss:0.034523
train_step:1100 loss:0.147448
train_step:1200 loss:0.021179
train_step:1300 loss:0.036029
train_step:1400 loss:0.168612
train_step:1500 loss:0.074852
train_step:1600 loss:0.016319
train_step:1700 loss:0.191412
train_step:1800 loss:0.056502
total loss on the test set: 11.838098
accuracy on the test set: 0.974800
---------- epoch 3 starts ----------
train_step:1900 loss:0.148853
train_step:2000 loss:0.064545
train_step:2100 loss:0.011961
train_step:2200 loss:0.037904
train_step:2300 loss:0.013984
train_step:2400 loss:0.005548
train_step:2500 loss:0.019492
train_step:2600 loss:0.011727
train_step:2700 loss:0.060610
train_step:2800 loss:0.021750
total loss on the test set: 10.237322
accuracy on the test set: 0.978400
---------- epoch 4 starts ----------
train_step:2900 loss:0.019863
train_step:3000 loss:0.014899
train_step:3100 loss:0.023371
train_step:3200 loss:0.151352
train_step:3300 loss:0.077075
train_step:3400 loss:0.060291
train_step:3500 loss:0.124298
train_step:3600 loss:0.085368
train_step:3700 loss:0.035471
total loss on the test set: 5.793969
accuracy on the test set: 0.987700
---------- epoch 5 starts ----------
train_step:3800 loss:0.060491
train_step:3900 loss:0.029705
train_step:4000 loss:0.148966
train_step:4100 loss:0.014026
train_step:4200 loss:0.080660
train_step:4300 loss:0.008693
train_step:4400 loss:0.015480
train_step:4500 loss:0.005879
train_step:4600 loss:0.106663
total loss on the test set: 6.698459
accuracy on the test set: 0.987300
---------- epoch 6 starts ----------
train_step:4700 loss:0.027122
train_step:4800 loss:0.011732
train_step:4900 loss:0.018630
train_step:5000 loss:0.004352
train_step:5100 loss:0.073233
train_step:5200 loss:0.031105
train_step:5300 loss:0.004299
train_step:5400 loss:0.011227
train_step:5500 loss:0.011302
train_step:5600 loss:0.015469
total loss on the test set: 6.121607
accuracy on the test set: 0.987600
---------- epoch 7 starts ----------
train_step:5700 loss:0.004601
train_step:5800 loss:0.002218
train_step:5900 loss:0.059447
train_step:6000 loss:0.000993
train_step:6100 loss:0.023787
train_step:6200 loss:0.051905
train_step:6300 loss:0.016283
train_step:6400 loss:0.019285
train_step:6500 loss:0.004830
total loss on the test set: 4.560344
accuracy on the test set: 0.989800
---------- epoch 8 starts ----------
train_step:6600 loss:0.009499
train_step:6700 loss:0.002952
train_step:6800 loss:0.018495
train_step:6900 loss:0.001605
train_step:7000 loss:0.036577
train_step:7100 loss:0.018263
train_step:7200 loss:0.228026
train_step:7300 loss:0.001609
train_step:7400 loss:0.005556
train_step:7500 loss:0.039095
total loss on the test set: 5.819327
accuracy on the test set: 0.989400
---------- epoch 9 starts ----------
train_step:7600 loss:0.008169
train_step:7700 loss:0.005214
train_step:7800 loss:0.001961
train_step:7900 loss:0.001345
train_step:8000 loss:0.040367
train_step:8100 loss:0.021877
train_step:8200 loss:0.001126
train_step:8300 loss:0.023062
train_step:8400 loss:0.018768
total loss on the test set: 5.056923
accuracy on the test set: 0.988800
---------- epoch 10 starts ----------
train_step:8500 loss:0.007846
train_step:8600 loss:0.059039
train_step:8700 loss:0.005687
train_step:8800 loss:0.008194
train_step:8900 loss:0.002039
train_step:9000 loss:0.004759
train_step:9100 loss:0.001194
train_step:9200 loss:0.001492
train_step:9300 loss:0.000404
total loss on the test set: 5.319184
accuracy on the test set: 0.988800
