WRN28_10 on CIFAR-100: 77.54% accuracy

Date: 2021-05-02
Author: 知道许多的橘子
Task: classify the CIFAR-100 dataset with WRN28_10
Test-set accuracy: 77.54%
Framework: PyTorch
Data augmentation: Normalize + FMix, random crop, horizontal flip
Training epochs: 200
Staged learning rate over epochs [0-200]: smooth_step(10,40,100,150,epoch_s)
Optimizer: optimizer = torch.optim.SGD(model.parameters(), lr=smooth_step(10,40,100,150,epoch_s), momentum=0.9, weight_decay=1e-5)
# In[1] Import required packages
from __future__ import division
import os
import math
import time
import sys

import numpy as np
import torch
import torch.nn as nn
import torch.utils.data as Data
from torch.nn import functional as F
import torchvision
from torchvision import datasets, transforms

sys.path.append(r'/home/megstudio/workspace/')  # directory containing the FMIX package
from FMIX.fmix import sample_and_apply, sample_mask

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)

# In[1] Set hyperparameters
num_epochs = 200
batch_size = 100
tbatch_size = 100   # test batch size
test_name = '/_W28_10_BJGB3_BLN_S_C100'

# In[1] Load data
# Define the preprocessing pipelines
train_transform = transforms.Compose([
            transforms.RandomCrop(32, padding=4),  # zero-pad 4 pixels on each side, then randomly crop back to 32x32
            transforms.RandomHorizontalFlip(),  # flip horizontally with probability 0.5
            transforms.ToTensor(),
            # note: these mean/std values are the commonly used CIFAR-10 statistics
            transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])

test_transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])

# Training set
trainset = torchvision.datasets.CIFAR100(
                    root='/home/megstudio/dataset/dataset-2097/file-1251/CIFAR100',
                    train=True,
                    download=False,
                    transform=train_transform)
train_loader = Data.DataLoader(
                    dataset=trainset,
                    batch_size=batch_size,
                    shuffle=True,
                    num_workers=2)



# Test set
testset = torchvision.datasets.CIFAR100(
                    root='/home/megstudio/dataset/dataset-2097/file-1251/CIFAR100',
                    train=False,
                    download=False,
                    transform=test_transform)

test_loader = Data.DataLoader(
                    dataset=testset,
                    batch_size=tbatch_size,
                    shuffle=False,
                    num_workers=2)
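# A quick pipeline sanity check (a sketch; it assumes the dataset path above
# exists): one batch should be [batch_size, 3, 32, 32] float images with
# integer class labels in [0, 100).
_imgs, _lbls = next(iter(train_loader))
assert _imgs.shape == (batch_size, 3, 32, 32)
assert 0 <= _lbls.min().item() and _lbls.max().item() < 100
del _imgs, _lbls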

# In[1] Define the model

# Code modified from https://github.com/xternalz/WideResNet-pytorch

class BasicBlock(nn.Module):
    def __init__(self, in_planes, out_planes, stride, dropRate=0.0):
        super(BasicBlock, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_planes)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1,
                               padding=1, bias=False)
        self.droprate = dropRate
        self.equalInOut = (in_planes == out_planes)
        self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
                                                                padding=0, bias=False) or None

    def forward(self, x):
        # pre-activation: BN and ReLU come before each convolution
        if not self.equalInOut:
            x = self.relu1(self.bn1(x))
        else:
            out = self.relu1(self.bn1(x))
        out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x)))
        if self.droprate > 0:
            out = F.dropout(out, p=self.droprate, training=self.training)
        out = self.conv2(out)
        # identity shortcut when shapes match, 1x1 convolution projection otherwise
        return torch.add(x if self.equalInOut else self.convShortcut(x), out)


class NetworkBlock(nn.Module):
    def __init__(self, nb_layers, in_planes, out_planes, block, stride, dropRate=0.0):
        super(NetworkBlock, self).__init__()
        self.layer = self._make_layer(block, in_planes, out_planes, nb_layers, stride, dropRate)

    def _make_layer(self, block, in_planes, out_planes, nb_layers, stride, dropRate):
        layers = []
        for i in range(nb_layers):
            layers.append(block(i == 0 and in_planes or out_planes, out_planes, i == 0 and stride or 1, dropRate))
        return nn.Sequential(*layers)

    def forward(self, x):
        return self.layer(x)


class WideResNet(nn.Module):
    def __init__(self, depth, num_classes, widen_factor=1, dropRate=0.0, nc=1):
        super(WideResNet, self).__init__()
        nChannels = [16, 16 * widen_factor, 32 * widen_factor, 64 * widen_factor]
        assert (depth - 4) % 6 == 0, 'depth should be 6n+4'
        n = (depth - 4) // 6
        block = BasicBlock
        # 1st conv before any network block
        self.conv1 = nn.Conv2d(nc, nChannels[0], kernel_size=3, stride=1,
                               padding=1, bias=False)
        # 1st block
        self.block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate)
        # 2nd block
        self.block2 = NetworkBlock(n, nChannels[1], nChannels[2], block, 2, dropRate)
        # 3rd block
        self.block3 = NetworkBlock(n, nChannels[2], nChannels[3], block, 2, dropRate)
        # global average pooling and classifier
        self.bn1 = nn.BatchNorm2d(nChannels[3])
        self.relu = nn.ReLU(inplace=True)
        self.fc = nn.Linear(nChannels[3], num_classes)
        self.nChannels = nChannels[3]

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                m.bias.data.zero_()

    def forward(self, x):
        out = self.conv1(x)
        out = self.block1(out)
        out = self.block2(out)
        out = self.block3(out)
        out = self.relu(self.bn1(out))
        # for 32x32 inputs the feature map here is 8x8; a kernel of 7 pools only
        # the top-left 7x7 window (the reference implementation uses 8)
        out = F.avg_pool2d(out, 7)
        out = out.view(-1, self.nChannels)
        return self.fc(out)


def wrn(**kwargs):
    """
    Constructs a Wide Residual Network.
    """
    model = WideResNet(**kwargs)
    return model
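
# Quick shape check (a sketch: it uses a deliberately tiny WRN so construction
# stays cheap; the WRN-28-10 built below follows the same forward path):
_probe = wrn(depth=10, num_classes=100, widen_factor=1, dropRate=0.0, nc=3)
assert _probe(torch.randn(2, 3, 32, 32)).shape == (2, 100)
del _probe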


def smooth_step(a, b, c, d, x):
    """Piecewise learning-rate schedule: hold 0.01 until epoch a, ramp
    linearly up to 0.1 between a and b, hold 0.1 until c, drop to 0.01
    until d, then 0.005 afterwards."""
    level_s = 0.01
    level_m = 0.1
    level_n = 0.01
    level_r = 0.005
    if x <= a:
        return level_s
    if a < x <= b:
        return ((x - a) / (b - a)) * (level_m - level_s) + level_s
    if b < x <= c:
        return level_m
    if c < x <= d:
        return level_n
    return level_r
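
# Sanity check of the schedule at the breakpoints used below (runs silently):
assert smooth_step(10, 40, 100, 150, 5) == 0.01    # warm start
assert smooth_step(10, 40, 100, 150, 70) == 0.1    # plateau
assert smooth_step(10, 40, 100, 150, 120) == 0.01  # first decay
assert smooth_step(10, 40, 100, 150, 180) == 0.005 # final decay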
# In[1] Helper that updates the optimizer's learning rate
def update_lr(optimizer, lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
# In[1] Define the test function
def test(model, test_loader):
    model.eval()
    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
        acc = 100 * correct / total
        print('Accuracy of the model on the test images: {} %'.format(acc))
    return acc

# In[1] Create the save directory
# =============================================================================
def mkdir(path):
    if not os.path.exists(path):     # create the folder only if it does not exist yet
        os.makedirs(path)            # makedirs also creates any missing parent directories
        print("---  new folder...  ---")
        print("---  OK  ---")
    else:
        print("---  There is this folder!  ---")

path = os.getcwd() + test_name
print(path)
mkdir(path)
# In[1] Define the model, resuming from a checkpoint if one exists
#$$
# =============================================================================
try:
    model = torch.load(path+'/model.pkl').to(device)
    epoch_s = np.load(path+'/learning_rate.npy')  # saved epoch counter (historical file name)
    print(epoch_s)
    train_loss = np.load(path+'/train_loss.npy').tolist()
    test_acc = np.load(path+'/test_acc.npy').tolist()
    print("---  There is a model in the folder...  ---")
except FileNotFoundError:
    print("---  Create a new model...  ---")
    epoch_s = 0
    model = wrn(depth=28, num_classes=100, widen_factor=10, dropRate=0.4, nc=3).to(device)
    train_loss = []  # per-epoch training losses
    test_acc = []    # per-epoch test accuracies
# =============================================================================
def saveModel(model, epoch, test_acc, train_loss):
    torch.save(model, path+'/model.pkl')
    # the epoch counter is stored under the historical file name learning_rate.npy
    np.save(path+'/learning_rate.npy', np.array(epoch))
    np.save(path+'/test_acc.npy', np.array(test_acc))
    np.save(path+'/train_loss.npy', np.array(train_loss))
    
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=smooth_step(10, 40, 100, 150, epoch_s),
                            momentum=0.9, weight_decay=1e-5)
# In[1] Train the model, updating the learning rate each epoch
total_step = len(train_loader)
for epoch in range(epoch_s, num_epochs):
    in_epoch = time.time()
    model.train()  # re-enable dropout/BN updates; test() leaves the model in eval mode
    for i, (images, labels) in enumerate(train_loader):
        # =============================================================================
        # FMix: blend each image with another sample through a low-frequency binary
        # mask; lam is the mixing weight of the original (unshuffled) batch
        images, index, lam = sample_and_apply(images, alpha=1, decay_power=3, shape=(32, 32))
        images = images.float()
        shuffled_label = labels[index].to(device)

        images = images.to(device)
        labels = labels.to(device)
        # Forward pass: mix the two cross-entropy terms with the same weight lam
        outputs = model(images)
        loss = lam * criterion(outputs, labels) + (1 - lam) * criterion(outputs, shuffled_label)
        # =============================================================================

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        if (i + 1) % 100 == 0:
            print("Epoch [{}/{}], Step [{}/{}] Loss: {:.4f}"
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
    # Record the loss (last batch of the epoch) and the test accuracy
    train_loss.append(loss.item())
    acctemp = test(model, test_loader)
    test_acc.append(acctemp)
    # Update the learning rate
    curr_lr = smooth_step(10, 40, 100, 150, epoch)
    update_lr(optimizer, curr_lr)
    # Save the model plus bookkeeping arrays
    saveModel(model, epoch, test_acc, train_loss)
    # Report epoch time
    out_epoch = time.time()
    print(f"use {(out_epoch-in_epoch)//60}min{(out_epoch-in_epoch)%60}s")
#$$
# In[1] Evaluate on the training set
test(model, train_loader)

Output:

cuda
/home/megstudio/workspace/__20210503/_W28_10_BJGB3_BLN_S_C100
---  There is this folder!  ---
141
---  There is a model in the folder...  ---
Epoch [142/200], Step [100/500] Loss: 1.5528
Epoch [142/200], Step [200/500] Loss: 1.3974
Epoch [142/200], Step [300/500] Loss: 1.1613
Epoch [142/200], Step [400/500] Loss: 0.7195
Epoch [142/200], Step [500/500] Loss: 1.2905
Accuracy of the model on the test images: 77.13 %
use 1.0min24.603556156158447s
Epoch [143/200], Step [100/500] Loss: 1.2057
Epoch [143/200], Step [200/500] Loss: 1.0886
Epoch [143/200], Step [300/500] Loss: 1.0682
Epoch [143/200], Step [400/500] Loss: 1.0659
Epoch [143/200], Step [500/500] Loss: 1.1827
Accuracy of the model on the test images: 77.21 %
use 1.0min24.767743587493896s
Epoch [144/200], Step [100/500] Loss: 1.0373
Epoch [144/200], Step [200/500] Loss: 1.1467
Epoch [144/200], Step [300/500] Loss: 0.9517
Epoch [144/200], Step [400/500] Loss: 1.4304
Epoch [144/200], Step [500/500] Loss: 1.1392
Accuracy of the model on the test images: 77.5 %
use 1.0min24.99666929244995s
Epoch [145/200], Step [100/500] Loss: 1.2899
Epoch [145/200], Step [200/500] Loss: 1.0426
Epoch [145/200], Step [300/500] Loss: 0.0220
Epoch [145/200], Step [400/500] Loss: 0.6525
Epoch [145/200], Step [500/500] Loss: 0.8098
Accuracy of the model on the test images: 77.54 %
use 1.0min24.863646030426025s
Epoch [146/200], Step [100/500] Loss: 1.2013
Epoch [146/200], Step [200/500] Loss: 0.2375
Epoch [146/200], Step [300/500] Loss: 1.1518
Epoch [146/200], Step [400/500] Loss: 1.0232
Epoch [146/200], Step [500/500] Loss: 0.7494
Accuracy of the model on the test images: 77.28 %
use 1.0min24.85533905029297s
Epoch [147/200], Step [100/500] Loss: 1.1894
Epoch [147/200], Step [200/500] Loss: 0.8623
Epoch [147/200], Step [300/500] Loss: 1.0932
Epoch [147/200], Step [400/500] Loss: 1.3523
Epoch [147/200], Step [500/500] Loss: 0.6001
Accuracy of the model on the test images: 77.16 %
use 1.0min24.895694732666016s
Epoch [148/200], Step [100/500] Loss: 1.0513
Epoch [148/200], Step [200/500] Loss: 0.4905
Epoch [148/200], Step [300/500] Loss: 0.8271
Epoch [148/200], Step [400/500] Loss: 0.9393
Epoch [148/200], Step [500/500] Loss: 1.0403
Accuracy of the model on the test images: 77.31 %
use 1.0min24.836127758026123s
Epoch [149/200], Step [100/500] Loss: 1.1360
Epoch [149/200], Step [200/500] Loss: 1.0359
Epoch [149/200], Step [300/500] Loss: 1.3156
Epoch [149/200], Step [400/500] Loss: 0.4623
Epoch [149/200], Step [500/500] Loss: 1.2615
Accuracy of the model on the test images: 77.3 %
use 1.0min24.926633834838867s
Epoch [150/200], Step [100/500] Loss: 1.2978
Epoch [150/200], Step [200/500] Loss: 1.0179
Epoch [150/200], Step [300/500] Loss: 0.8285
Epoch [150/200], Step [400/500] Loss: 0.8745
Epoch [150/200], Step [500/500] Loss: 1.1417
Accuracy of the model on the test images: 77.26 %
use 1.0min24.936713457107544s
Epoch [151/200], Step [100/500] Loss: 0.7925
Epoch [151/200], Step [200/500] Loss: 0.8275
Epoch [151/200], Step [300/500] Loss: 1.1001
Epoch [151/200], Step [400/500] Loss: 1.1221
Epoch [151/200], Step [500/500] Loss: 1.0642
Accuracy of the model on the test images: 77.04 %
use 1.0min24.919219493865967s
Epoch [152/200], Step [100/500] Loss: 0.5912
Epoch [152/200], Step [200/500] Loss: 1.3527
Epoch [152/200], Step [300/500] Loss: 1.1650
Epoch [152/200], Step [400/500] Loss: 1.0603
Epoch [152/200], Step [500/500] Loss: 0.8612
Accuracy of the model on the test images: 77.11 %
use 1.0min24.87514591217041s
Epoch [153/200], Step [100/500] Loss: 1.0607
Epoch [153/200], Step [200/500] Loss: 1.1656
Epoch [153/200], Step [300/500] Loss: 1.2327
Epoch [153/200], Step [400/500] Loss: 1.3619
Epoch [153/200], Step [500/500] Loss: 0.4158
Accuracy of the model on the test images: 77.34 %
use 1.0min24.89460325241089s
Epoch [154/200], Step [100/500] Loss: 0.9700
Epoch [154/200], Step [200/500] Loss: 1.2014
Epoch [154/200], Step [300/500] Loss: 1.0489
Epoch [154/200], Step [400/500] Loss: 0.5059
Epoch [154/200], Step [500/500] Loss: 0.9684
Accuracy of the model on the test images: 77.43 %
use 1.0min24.93102216720581s
Epoch [155/200], Step [100/500] Loss: 0.7882
Epoch [155/200], Step [200/500] Loss: 0.9793
Epoch [155/200], Step [300/500] Loss: 0.6647
Epoch [155/200], Step [400/500] Loss: 1.0649
Epoch [155/200], Step [500/500] Loss: 1.0929
Accuracy of the model on the test images: 77.45 %
use 1.0min24.904707431793213s
Epoch [156/200], Step [100/500] Loss: 0.7062
Epoch [156/200], Step [200/500] Loss: 1.1404
Epoch [156/200], Step [300/500] Loss: 1.0168
Epoch [156/200], Step [400/500] Loss: 1.1886
Epoch [156/200], Step [500/500] Loss: 1.1518
Accuracy of the model on the test images: 77.45 %
use 1.0min24.885696172714233s
Epoch [157/200], Step [100/500] Loss: 0.4824
Epoch [157/200], Step [200/500] Loss: 0.9724
Epoch [157/200], Step [300/500] Loss: 1.3029
Accuracy of the model on the test images: 77.41 %
use 1.0min24.90853261947632s
Epoch [158/200], Step [100/500] Loss: 1.2859
Epoch [158/200], Step [200/500] Loss: 0.0960
Epoch [158/200], Step [300/500] Loss: 1.1832
Epoch [158/200], Step [400/500] Loss: 1.1920
Epoch [158/200], Step [500/500] Loss: 0.7198
Accuracy of the model on the test images: 77.46 %
use 1.0min24.850754737854004s
Epoch [159/200], Step [100/500] Loss: 0.7180
Epoch [159/200], Step [200/500] Loss: 1.1089
Epoch [159/200], Step [300/500] Loss: 0.6436
Epoch [159/200], Step [400/500] Loss: 1.2547
Epoch [159/200], Step [500/500] Loss: 0.7170
Accuracy of the model on the test images: 77.32 %
use 1.0min24.879122257232666s
Epoch [160/200], Step [100/500] Loss: 0.0446
Epoch [160/200], Step [200/500] Loss: 0.7195
Epoch [160/200], Step [300/500] Loss: 0.7228
Epoch [160/200], Step [400/500] Loss: 0.9934
Epoch [160/200], Step [500/500] Loss: 0.4522
Accuracy of the model on the test images: 77.47 %
use 1.0min24.883201599121094s
Epoch [161/200], Step [100/500] Loss: 0.9690
Epoch [161/200], Step [200/500] Loss: 0.6798
Epoch [161/200], Step [300/500] Loss: 0.2910
Epoch [161/200], Step [400/500] Loss: 0.7862
Epoch [161/200], Step [500/500] Loss: 0.6298
Accuracy of the model on the test images: 77.45 %
use 1.0min24.874440670013428s
Epoch [162/200], Step [100/500] Loss: 1.1223
Epoch [162/200], Step [200/500] Loss: 0.4533
Epoch [162/200], Step [300/500] Loss: 0.7866
Epoch [162/200], Step [400/500] Loss: 0.9070
Epoch [162/200], Step [500/500] Loss: 1.0782
Accuracy of the model on the test images: 77.39 %
use 1.0min24.910200119018555s
Epoch [163/200], Step [100/500] Loss: 1.0798
Epoch [163/200], Step [200/500] Loss: 1.0603
Epoch [163/200], Step [300/500] Loss: 1.2853
Epoch [163/200], Step [400/500] Loss: 0.9686
Epoch [163/200], Step [500/500] Loss: 0.6012
Accuracy of the model on the test images: 77.38 %
use 1.0min24.880542039871216s
Epoch [164/200], Step [100/500] Loss: 0.9798
Epoch [164/200], Step [200/500] Loss: 0.7701
Epoch [164/200], Step [300/500] Loss: 0.5847
Epoch [164/200], Step [400/500] Loss: 1.0208
Epoch [164/200], Step [500/500] Loss: 1.1857
Accuracy of the model on the test images: 77.41 %
use 1.0min24.9087815284729s
Epoch [165/200], Step [100/500] Loss: 0.0973
Epoch [165/200], Step [200/500] Loss: 1.0477
Epoch [165/200], Step [300/500] Loss: 1.1682
Epoch [165/200], Step [400/500] Loss: 0.8271
Epoch [165/200], Step [500/500] Loss: 0.9837
Accuracy of the model on the test images: 77.24 %
use 1.0min24.919323682785034s
Epoch [166/200], Step [100/500] Loss: 1.2237
Epoch [166/200], Step [200/500] Loss: 1.1585
Epoch [166/200], Step [300/500] Loss: 0.2171
Epoch [166/200], Step [400/500] Loss: 0.8880
Epoch [166/200], Step [500/500] Loss: 1.0235
Accuracy of the model on the test images: 77.23 %
use 1.0min24.873162031173706s
Epoch [167/200], Step [100/500] Loss: 1.0751
Epoch [167/200], Step [200/500] Loss: 0.8277
Epoch [167/200], Step [300/500] Loss: 1.1763
Epoch [167/200], Step [400/500] Loss: 0.8997
Epoch [167/200], Step [500/500] Loss: 0.9931
Accuracy of the model on the test images: 77.31 %
use 1.0min24.86436414718628s
Epoch [168/200], Step [100/500] Loss: 0.4940
Epoch [168/200], Step [200/500] Loss: 1.4103
Epoch [168/200], Step [300/500] Loss: 0.8065
Epoch [168/200], Step [400/500] Loss: 0.7311
Epoch [168/200], Step [500/500] Loss: 0.7017
Accuracy of the model on the test images: 77.34 %
use 1.0min24.872971057891846s
Epoch [169/200], Step [100/500] Loss: 0.8568
Epoch [169/200], Step [200/500] Loss: 0.8404
Epoch [169/200], Step [300/500] Loss: 1.2296
Epoch [169/200], Step [400/500] Loss: 1.0267
Epoch [169/200], Step [500/500] Loss: 1.2444
Accuracy of the model on the test images: 77.44 %
use 1.0min24.8505437374115s
Epoch [170/200], Step [100/500] Loss: 0.2500
Epoch [170/200], Step [200/500] Loss: 0.1967
Epoch [170/200], Step [300/500] Loss: 1.3828
Epoch [170/200], Step [400/500] Loss: 0.2522
Epoch [170/200], Step [500/500] Loss: 0.9078
Accuracy of the model on the test images: 77.15 %
use 1.0min24.85686159133911s
Epoch [171/200], Step [100/500] Loss: 0.7435
Epoch [171/200], Step [200/500] Loss: 0.9664
Epoch [171/200], Step [300/500] Loss: 0.6524
Epoch [171/200], Step [400/500] Loss: 0.7789
Epoch [171/200], Step [500/500] Loss: 1.0658
Accuracy of the model on the test images: 77.15 %
use 1.0min24.928207635879517s
Epoch [172/200], Step [100/500] Loss: 0.6148
Epoch [172/200], Step [200/500] Loss: 1.1943
Epoch [172/200], Step [300/500] Loss: 1.1530
Epoch [172/200], Step [400/500] Loss: 1.2284
Epoch [172/200], Step [500/500] Loss: 0.2776
Accuracy of the model on the test images: 77.09 %
use 1.0min24.919533252716064s
Epoch [173/200], Step [100/500] Loss: 1.2014
Epoch [173/200], Step [200/500] Loss: 0.8996
Epoch [173/200], Step [300/500] Loss: 1.0861
Epoch [173/200], Step [400/500] Loss: 1.2190
Epoch [173/200], Step [500/500] Loss: 1.1606
Accuracy of the model on the test images: 77.26 %
use 1.0min24.929797172546387s
Epoch [174/200], Step [100/500] Loss: 0.9359
Epoch [174/200], Step [200/500] Loss: 1.0329
Epoch [174/200], Step [300/500] Loss: 1.1500
Epoch [174/200], Step [400/500] Loss: 1.2400
Epoch [174/200], Step [500/500] Loss: 1.1609
Accuracy of the model on the test images: 77.12 %
use 1.0min24.91043496131897s
Epoch [175/200], Step [100/500] Loss: 1.1835
Epoch [175/200], Step [200/500] Loss: 0.2292
Epoch [175/200], Step [300/500] Loss: 1.0055
Epoch [175/200], Step [400/500] Loss: 0.9026
Epoch [175/200], Step [500/500] Loss: 0.5227
Accuracy of the model on the test images: 77.34 %
use 1.0min24.88933539390564s
Epoch [176/200], Step [100/500] Loss: 1.0955
Epoch [176/200], Step [200/500] Loss: 1.2111
Epoch [176/200], Step [300/500] Loss: 1.0780
Epoch [176/200], Step [400/500] Loss: 0.9773
Epoch [176/200], Step [500/500] Loss: 1.0328
Accuracy of the model on the test images: 77.35 %
use 1.0min24.86126685142517s
Epoch [177/200], Step [100/500] Loss: 0.5965
Epoch [177/200], Step [200/500] Loss: 1.3419
Epoch [177/200], Step [300/500] Loss: 1.1660
Epoch [177/200], Step [400/500] Loss: 0.5838
Epoch [177/200], Step [500/500] Loss: 1.0980
Accuracy of the model on the test images: 77.21 %
use 1.0min24.897165298461914s
Epoch [178/200], Step [100/500] Loss: 0.9110
Epoch [178/200], Step [200/500] Loss: 1.2895
Epoch [178/200], Step [300/500] Loss: 0.9794
Epoch [178/200], Step [400/500] Loss: 1.1204
Epoch [178/200], Step [500/500] Loss: 0.6872
Accuracy of the model on the test images: 77.29 %
use 1.0min24.90094518661499s
Epoch [179/200], Step [100/500] Loss: 1.0555
Epoch [179/200], Step [200/500] Loss: 1.1741
Epoch [179/200], Step [300/500] Loss: 0.9036
Epoch [179/200], Step [400/500] Loss: 0.9327
Epoch [179/200], Step [500/500] Loss: 0.7239
Accuracy of the model on the test images: 77.31 %
use 1.0min24.909113883972168s
Epoch [180/200], Step [100/500] Loss: 0.3442
Epoch [180/200], Step [200/500] Loss: 1.2699
Epoch [180/200], Step [300/500] Loss: 0.5119
Epoch [180/200], Step [400/500] Loss: 0.6051
Epoch [180/200], Step [500/500] Loss: 1.2284
Accuracy of the model on the test images: 77.31 %
use 1.0min24.8768048286438s
Epoch [181/200], Step [100/500] Loss: 1.0588
Epoch [181/200], Step [200/500] Loss: 0.1532
Epoch [181/200], Step [300/500] Loss: 1.0867
Epoch [181/200], Step [400/500] Loss: 0.4587
Epoch [181/200], Step [500/500] Loss: 1.0955
Accuracy of the model on the test images: 77.22 %
use 1.0min24.877432584762573s
Epoch [182/200], Step [100/500] Loss: 0.9676
Epoch [182/200], Step [200/500] Loss: 1.2658
Epoch [182/200], Step [300/500] Loss: 0.4655
Epoch [182/200], Step [400/500] Loss: 0.9771
Epoch [182/200], Step [500/500] Loss: 1.3612
Accuracy of the model on the test images: 77.27 %
use 1.0min24.851990938186646s
Epoch [183/200], Step [100/500] Loss: 0.8701
Epoch [183/200], Step [200/500] Loss: 1.0342
Epoch [183/200], Step [300/500] Loss: 1.1192
Epoch [183/200], Step [400/500] Loss: 0.9207
Epoch [183/200], Step [500/500] Loss: 0.3010
Accuracy of the model on the test images: 77.19 %
use 1.0min24.910361766815186s
Epoch [184/200], Step [100/500] Loss: 0.6419
Epoch [184/200], Step [200/500] Loss: 1.0921
Epoch [184/200], Step [300/500] Loss: 0.7282
Epoch [184/200], Step [400/500] Loss: 1.0429
Epoch [184/200], Step [500/500] Loss: 1.2620
Accuracy of the model on the test images: 77.39 %
use 1.0min24.893921852111816s
Epoch [185/200], Step [100/500] Loss: 0.9238
Epoch [185/200], Step [200/500] Loss: 0.1507
Epoch [185/200], Step [300/500] Loss: 0.6331
Epoch [185/200], Step [400/500] Loss: 1.4691
Epoch [185/200], Step [500/500] Loss: 1.3349
Accuracy of the model on the test images: 76.98 %
use 1.0min24.91034436225891s
Epoch [186/200], Step [100/500] Loss: 0.8906
Epoch [186/200], Step [200/500] Loss: 1.0360
Epoch [186/200], Step [300/500] Loss: 0.0750
Epoch [186/200], Step [400/500] Loss: 0.9380
Epoch [186/200], Step [500/500] Loss: 1.1420
Accuracy of the model on the test images: 77.01 %
use 1.0min24.90356683731079s
Epoch [187/200], Step [100/500] Loss: 1.0185
Epoch [187/200], Step [200/500] Loss: 0.9642
Epoch [187/200], Step [300/500] Loss: 0.9149
Epoch [187/200], Step [400/500] Loss: 0.7301
Epoch [187/200], Step [500/500] Loss: 0.1752
Accuracy of the model on the test images: 77.19 %
use 1.0min24.901657342910767s
Epoch [188/200], Step [100/500] Loss: 0.9279
Epoch [188/200], Step [200/500] Loss: 1.1376
Epoch [188/200], Step [300/500] Loss: 0.7112
Epoch [188/200], Step [400/500] Loss: 0.7791
Epoch [188/200], Step [500/500] Loss: 0.4140
Accuracy of the model on the test images: 77.22 %
use 1.0min24.913012981414795s
Epoch [189/200], Step [100/500] Loss: 1.4024
Epoch [189/200], Step [200/500] Loss: 1.0857
Epoch [189/200], Step [300/500] Loss: 0.6894
Epoch [189/200], Step [400/500] Loss: 1.1468
Epoch [189/200], Step [500/500] Loss: 0.7862
Accuracy of the model on the test images: 77.32 %
use 1.0min24.87160611152649s
Epoch [190/200], Step [100/500] Loss: 1.0147
Epoch [190/200], Step [200/500] Loss: 0.7954
Epoch [190/200], Step [300/500] Loss: 0.9666
Epoch [190/200], Step [400/500] Loss: 0.8557
Epoch [190/200], Step [500/500] Loss: 0.7192
Accuracy of the model on the test images: 77.34 %
use 1.0min24.86261248588562s
Epoch [191/200], Step [100/500] Loss: 0.7598
Epoch [191/200], Step [200/500] Loss: 0.6183
Epoch [191/200], Step [300/500] Loss: 0.8722
Epoch [191/200], Step [400/500] Loss: 0.6929
Epoch [191/200], Step [500/500] Loss: 1.2213
Accuracy of the model on the test images: 77.23 %
use 1.0min24.86916947364807s
Epoch [192/200], Step [100/500] Loss: 1.1682
Epoch [192/200], Step [200/500] Loss: 0.6294
Epoch [192/200], Step [300/500] Loss: 0.1511
Epoch [192/200], Step [400/500] Loss: 0.8540
Epoch [192/200], Step [500/500] Loss: 0.6844
Accuracy of the model on the test images: 77.13 %
use 1.0min24.89246892929077s
Epoch [193/200], Step [100/500] Loss: 0.9639
Epoch [193/200], Step [200/500] Loss: 1.1178
Epoch [193/200], Step [300/500] Loss: 1.0463
Epoch [193/200], Step [400/500] Loss: 0.2560
Epoch [193/200], Step [500/500] Loss: 1.0251
Accuracy of the model on the test images: 77.26 %
use 1.0min24.872381448745728s
Epoch [194/200], Step [100/500] Loss: 0.1860
Epoch [194/200], Step [200/500] Loss: 0.9395
Epoch [194/200], Step [300/500] Loss: 0.9701
Epoch [194/200], Step [400/500] Loss: 1.2206
Epoch [194/200], Step [500/500] Loss: 1.2503
Accuracy of the model on the test images: 77.38 %
use 1.0min24.91651487350464s
Epoch [195/200], Step [100/500] Loss: 1.0729
Epoch [195/200], Step [200/500] Loss: 0.3404
Epoch [195/200], Step [300/500] Loss: 0.4444
Epoch [195/200], Step [400/500] Loss: 1.0129
Epoch [195/200], Step [500/500] Loss: 0.2994
Accuracy of the model on the test images: 77.25 %
use 1.0min24.947959661483765s
Epoch [196/200], Step [100/500] Loss: 1.1287
Epoch [196/200], Step [200/500] Loss: 0.7903
Epoch [196/200], Step [300/500] Loss: 0.1774
Epoch [196/200], Step [400/500] Loss: 0.1378
Epoch [196/200], Step [500/500] Loss: 1.2125
Accuracy of the model on the test images: 77.13 %
use 1.0min24.90247106552124s
Epoch [197/200], Step [100/500] Loss: 1.1665
Epoch [197/200], Step [200/500] Loss: 0.8025
Epoch [197/200], Step [300/500] Loss: 0.3841
Epoch [197/200], Step [400/500] Loss: 0.9299
Epoch [197/200], Step [500/500] Loss: 1.0946
Accuracy of the model on the test images: 77.28 %
use 1.0min24.9243426322937s
Epoch [198/200], Step [100/500] Loss: 1.3677
Epoch [198/200], Step [200/500] Loss: 1.0165
Epoch [198/200], Step [300/500] Loss: 0.9743
Epoch [198/200], Step [400/500] Loss: 0.9030
Epoch [198/200], Step [500/500] Loss: 1.0868
Accuracy of the model on the test images: 77.24 %
use 1.0min24.896133422851562s
Epoch [199/200], Step [100/500] Loss: 0.7663
Epoch [199/200], Step [200/500] Loss: 0.9853
Epoch [199/200], Step [300/500] Loss: 1.2147
Epoch [199/200], Step [400/500] Loss: 0.7696
Epoch [199/200], Step [500/500] Loss: 0.3347
Accuracy of the model on the test images: 77.26 %
use 1.0min24.8830144405365s
Epoch [200/200], Step [100/500] Loss: 0.9849
Epoch [200/200], Step [200/500] Loss: 0.6911
Epoch [200/200], Step [300/500] Loss: 0.9499
Epoch [200/200], Step [400/500] Loss: 0.9432
Epoch [200/200], Step [500/500] Loss: 0.6021
Accuracy of the model on the test images: 77.11 %
use 1.0min24.958666801452637s
Accuracy of the model on the test images: 99.978 %
99.978
