Training se-resnet18 with SGD / Ranger21 [ cifar-10 ]

Over the past couple of days I came across an optimizer called Ranger21 (github / arxiv). It claims to integrate the latest deep-learning components into a single optimizer, built around AdamW (or optionally MadGrad) as its core, plus adaptive gradient clipping, gradient centralization, positive-negative momentum, stable weight decay, linear learning-rate warm-up, Lookahead, a Softplus transformation, gradient normalization, and more. Some of these techniques I haven't worked with before, but it certainly sounds impressive.

So I first tried it on Imagenette (github), a subset of Imagenet consisting of 10 easily classified classes with roughly 900+ training images and about 400 validation images per class, using Xception. The result is shown in the figure below:
[Figure 1: Xception on Imagenette, SGD vs. Ranger21]
In terms of accuracy, Ranger21 gets above 90% while SGD only reaches 81% (without careful tuning), so it seems a bit easier to use than SGD: not only faster, but also better at generalizing. (Note: both used the same learning rate; Ranger21 comes with its own warmup – stable – warmdown schedule, while SGD used cosine annealing.) However, over several runs I noticed that with Ranger21 the validation loss always rises toward the end of training. Some searching suggests this as the likely cause: the model becomes too extreme, so the loss on a few misclassified samples gets very large and drags up the overall loss, while accuracy is barely affected.
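For reference, the optimizer setup looked roughly like the sketch below. This is only an illustration: the build_optimizer helper and the momentum/weight-decay values are my own placeholders rather than settings from the experiments, and the Ranger21 constructor arguments (num_epochs and num_batches_per_epoch, which it uses for its built-in warmup - stable - warmdown schedule) follow the GitHub README but may differ slightly between versions.

import torch
import ranger21  # installed from the Ranger21 GitHub repo


def build_optimizer(model, name, lr, num_epochs, batches_per_epoch):
    if name == 'sgd':
        # plain SGD with momentum; cosine annealing handles the learning-rate decay
        optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                    momentum=0.9, weight_decay=5e-4)  # placeholder values
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
    else:
        # Ranger21 schedules the learning rate internally (warmup - stable - warmdown),
        # so no external scheduler is attached
        optimizer = ranger21.Ranger21(model.parameters(), lr=lr,
                                      num_epochs=num_epochs,
                                      num_batches_per_epoch=batches_per_epoch)
        scheduler = None
    return optimizer, scheduler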

After that I ran another experiment on cifar10. The model is a pre-activation resnet18 (although I recall the paper saying pre-activation doesn't help much for shallow networks) with squeeze-excitation modules added, i.e. se-preact-resnet18. The code is as follows:

import torch
from torch import nn


class SEBlock(nn.Module):
    def __init__(self, in_planes, planes, stride=1):
        
        super(SEBlock, self).__init__()
        
        # pre-activation order: BN -> ReLU -> Conv, applied twice
        self.residual = nn.Sequential(
            nn.BatchNorm2d(in_planes),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(planes),
            nn.ReLU(inplace=True),
            nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        )
        
        # 1x1 conv projection when the spatial size or channel count changes
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride, bias=False)
        
        # SE layer: squeeze (global average pooling) + excitation (bottleneck with reduction 16, sigmoid gate)
        self.se_layer = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Conv2d(planes, planes//16, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(planes//16, planes, kernel_size=1),
            nn.Sigmoid()
        )
    
    def forward(self, x):
        residual = self.residual(x)
        # channel-wise re-weighting by the SE attention; use an out-of-place multiply so
        # autograd still has the original residual when computing se_attention's gradient
        se_attention = self.se_layer(residual)
        residual = residual * se_attention
        
        shortcut = self.shortcut(x) if hasattr(self, 'shortcut') else x
        
        out = residual + shortcut
        
        return out


class SENet(nn.Module):
    '''
    SE-preact-resnet
    Note:
    *** modified for cifar10 32*32: the first conv uses stride 1 and the initial max pooling is removed ***
    '''
    def __init__(self, block, num_blocks, num_classes=10):
        super(SENet, self).__init__()
        
        self.in_planes = 64
        
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu1 = nn.ReLU(inplace=True)
        
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        
        self.gap = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)
        

    def _make_layer(self, block, planes, num_blocks, stride):
        # only the first block of each stage downsamples; the rest use stride 1
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes 
        return nn.Sequential(*layers)
    
    def forward(self, x):
        out = self.relu1(self.bn1(self.conv1(x)))
        
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        
        out = self.gap(out)
        out = torch.flatten(out, start_dim=1)
        out = self.fc(out)
        
        return out


def se_preact_resnet18():
    return SENet(SEBlock, [2, 2, 2, 2])

def se_preact_resnet34():
    return SENet(SEBlock, [3, 4, 6, 3])


if __name__ == '__main__':
    from torchstat import stat
    net = se_preact_resnet18()
    stat(net, (3, 32, 32))
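The training script itself isn't included in the post, so here is a minimal sketch of what a cifar10 training loop for this model could look like. The augmentation choices, the batch size of 128 (taken from the speed comparison further down), and the externally built optimizer/scheduler are assumptions, not the exact setup behind the curves below.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def train_cifar10(net, optimizer, scheduler=None, num_epochs=100, device='cuda'):
    # standard cifar10 preprocessing; the exact augmentation used for the experiments is not stated
    normalize = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616))
    train_tf = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ])
    test_tf = transforms.Compose([transforms.ToTensor(), normalize])

    train_set = datasets.CIFAR10('./data', train=True, download=True, transform=train_tf)
    test_set = datasets.CIFAR10('./data', train=False, download=True, transform=test_tf)
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
    test_loader = DataLoader(test_set, batch_size=128, shuffle=False, num_workers=4)

    net = net.to(device)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(num_epochs):
        # one pass over the training set
        net.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(net(images), labels)
            loss.backward()
            optimizer.step()
        if scheduler is not None:
            scheduler.step()

        # evaluate loss and accuracy on the held-out split
        net.eval()
        correct, total, val_loss = 0, 0, 0.0
        with torch.no_grad():
            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)
                logits = net(images)
                val_loss += criterion(logits, labels).item() * labels.size(0)
                correct += (logits.argmax(dim=1) == labels).sum().item()
                total += labels.size(0)
        print(f'epoch {epoch}: val_loss={val_loss/total:.4f}, val_acc={correct/total:.4f}')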

[Figures 2 and 3: cifar-10 training results, SGD vs. Ranger21]

Once again Ranger21's validation loss increases late in training; surely that's not a general problem? This time SGD performed a bit better. It really is alchemy…
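To make the loss-versus-accuracy point from the Imagenette section concrete: a handful of confidently wrong predictions can dominate the mean cross-entropy while barely touching accuracy. The toy example below uses made-up logits (not real model outputs) and shows a single such sample inflating the batch loss by orders of magnitude:

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()

# 4 samples, 3 classes; the first three are predicted correctly and confidently,
# the last one is confidently wrong
logits = torch.tensor([[ 6.0, 0.0, 0.0],
                       [ 0.0, 6.0, 0.0],
                       [ 0.0, 0.0, 6.0],
                       [-8.0, 8.0, 0.0]])
labels = torch.tensor([0, 1, 2, 0])

print(criterion(logits, labels))                         # ~4.0: mean loss dominated by the one bad sample
print(criterion(logits[:3], labels[:3]))                 # ~0.005: loss over the correct samples only
print((logits.argmax(dim=1) == labels).float().mean())   # 0.75: accuracy barely reflects the outlier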

Tweaking the hyperparameters might well give different results, but that remains to be seen!

Finally, Ranger21 has one more problem: it slows things down. With a batch size of 128, training runs at about 8 iters/s with Ranger21 versus about 25 iters/s with SGD, and eval slows down as well. A 2~3x slowdown?! (at least that was my experience)
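Those throughput numbers were just read off the progress bar. For a more controlled comparison, a rough timing sketch like the one below could be used (the function name and the synthetic batch are my own; it only measures optimizer-step throughput and nothing else). Note that with Ranger21 the measured steps should stay within its scheduled total number of iterations, since its learning-rate schedule depends on the step count.

import time
import torch
from torch import nn


def measure_iters_per_sec(net, optimizer, device='cuda', batch_size=128, iters=100):
    # one fixed synthetic cifar-sized batch is enough for a rough throughput check
    images = torch.randn(batch_size, 3, 32, 32, device=device)
    labels = torch.randint(0, 10, (batch_size,), device=device)
    criterion = nn.CrossEntropyLoss()
    net = net.to(device).train()

    # a few warmup steps so cudnn autotuning and lazy allocations don't skew the timing
    for _ in range(10):
        optimizer.zero_grad()
        criterion(net(images), labels).backward()
        optimizer.step()

    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        optimizer.zero_grad()
        criterion(net(images), labels).backward()
        optimizer.step()
    torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)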
