Deep Learning: Dynamic Learning-Rate Scheduling

When training a model, we often need to adjust the learning rate dynamically, either based on whether the loss is still improving or according to the current training epoch. This post walks through several common ways to schedule the learning rate in PyTorch, each with a runnable example, so you can see clearly how the learning rate changes as training epochs progress.

Example 1: StepLR
Without further ado, let's look at the code.
StepLR starts from the lr set in the optimizer (here lr=0.2) and multiplies the learning rate by 0.3 every 10 epochs.

import torch
import torchvision
import numpy as np
net = torchvision.models.resnet34(pretrained=False)
train_params = filter(lambda p: p.requires_grad, net.parameters())

# Set up the optimizer
optimizer = torch.optim.SGD(train_params, momentum=0.9, weight_decay=0.0001, lr=0.2)
# Starting from the lr set in the optimizer, multiply the learning rate by 0.3 every 10 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.3)

Now print the lr at each epoch:

for epoch in range(30):
    #train
    print('lr: %.4f, epoch: %d'%(scheduler.get_lr()[0], epoch))
    scheduler.step()
lr: 0.2000, epoch: 0
lr: 0.2000, epoch: 1
lr: 0.2000, epoch: 2
lr: 0.2000, epoch: 3
lr: 0.2000, epoch: 4
lr: 0.2000, epoch: 5
lr: 0.2000, epoch: 6
lr: 0.2000, epoch: 7
lr: 0.2000, epoch: 8
lr: 0.2000, epoch: 9
lr: 0.0180, epoch: 10
lr: 0.0600, epoch: 11
lr: 0.0600, epoch: 12
lr: 0.0600, epoch: 13
lr: 0.0600, epoch: 14
lr: 0.0600, epoch: 15
lr: 0.0600, epoch: 16
lr: 0.0600, epoch: 17
lr: 0.0600, epoch: 18
lr: 0.0600, epoch: 19
lr: 0.0054, epoch: 20
lr: 0.0180, epoch: 21
lr: 0.0180, epoch: 22
lr: 0.0180, epoch: 23
lr: 0.0180, epoch: 24
lr: 0.0180, epoch: 25
lr: 0.0180, epoch: 26
lr: 0.0180, epoch: 27
lr: 0.0180, epoch: 28
lr: 0.0180, epoch: 29

We can see that the learning rate is 0.2 for epochs 0-9, 0.06 for epochs 10-19, and 0.018 for epochs 20-29.
(The stray values printed at epochs 10 and 20, 0.0180 and 0.0054, are an artifact of calling get_lr() outside of scheduler.step(): at a decay boundary it reports the lr with one extra factor of gamma. The lr the optimizer actually uses at those epochs is 0.06 and 0.018; reading optimizer.param_groups[0]['lr'], or scheduler.get_last_lr() on recent PyTorch versions, gives the true value.)
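
If you want to see the learning rate that is actually applied at every epoch (including the boundary epochs), read it back from the optimizer. Below is a minimal, self-contained sketch of the same StepLR schedule; the single-layer net is just a stand-in so the snippet runs without torchvision, and the training code is elided.

import torch

# A tiny stand-in model so the snippet runs on its own (the post above uses resnet34)
net = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(net.parameters(), momentum=0.9, weight_decay=0.0001, lr=0.2)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.3)

for epoch in range(30):
    # ... forward pass, loss.backward() ...
    optimizer.step()   # weight update with the current lr
    # The lr actually used this epoch: 0.2 for epochs 0-9, 0.06 for 10-19, 0.018 for 20-29
    print('lr: %.4f, epoch: %d' % (optimizer.param_groups[0]['lr'], epoch))
    scheduler.step()   # decay the lr for the next epoch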

Example 2: ExponentialLR
ExponentialLR multiplies the learning rate by a constant factor gamma after every epoch, i.e. lr_t = lr_0 * gamma^t.

import torch
import torchvision
import numpy as np
net = torchvision.models.resnet34(pretrained=False)
train_params = filter(lambda p: p.requires_grad, net.parameters())

# Set up the optimizer
optimizer = torch.optim.SGD(train_params, momentum=0.9, weight_decay=0.0001, lr=0.2)
# Starting from lr=0.2, multiply the lr by 0.99 after every epoch
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

Print the learning rate:

for epoch in range(30):
    #train
    print('lr: %.4f, epoch: %d'%(scheduler.get_lr()[0], epoch))
    scheduler.step()
lr: 0.2000, epoch: 0
lr: 0.1960, epoch: 1
lr: 0.1941, epoch: 2
lr: 0.1921, epoch: 3
lr: 0.1902, epoch: 4
lr: 0.1883, epoch: 5
lr: 0.1864, epoch: 6
lr: 0.1845, epoch: 7
lr: 0.1827, epoch: 8
lr: 0.1809, epoch: 9
lr: 0.1791, epoch: 10
lr: 0.1773, epoch: 11
lr: 0.1755, epoch: 12
lr: 0.1737, epoch: 13
lr: 0.1720, epoch: 14
lr: 0.1703, epoch: 15
lr: 0.1686, epoch: 16
lr: 0.1669, epoch: 17
lr: 0.1652, epoch: 18
lr: 0.1636, epoch: 19
lr: 0.1619, epoch: 20
lr: 0.1603, epoch: 21
lr: 0.1587, epoch: 22
lr: 0.1571, epoch: 23
lr: 0.1556, epoch: 24
lr: 0.1540, epoch: 25
lr: 0.1525, epoch: 26
lr: 0.1509, epoch: 27
lr: 0.1494, epoch: 28
lr: 0.1479, epoch: 29
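
Note that, as in Example 1, get_lr() called outside of step() is one factor of gamma ahead of the lr the optimizer actually uses (for example, 0.1960 ≈ 0.2 * 0.99^2 is printed at epoch 1, while the lr applied there is 0.2 * 0.99 = 0.198). The actual schedule follows the closed form lr_t = 0.2 * 0.99^t. The following self-contained sketch checks this by reading the lr from the optimizer; the single dummy parameter is just a stand-in so the snippet runs without torchvision.

import torch

# A single dummy parameter as a stand-in model
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.2)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

for epoch in range(5):
    optimizer.step()   # weight update would happen here in real training
    actual = optimizer.param_groups[0]['lr']     # lr actually used this epoch
    expected = 0.2 * 0.99 ** epoch               # closed form lr_t = lr_0 * gamma^t
    print('actual: %.4f, closed form: %.4f, epoch: %d' % (actual, expected, epoch))
    scheduler.step()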

Example 3: MultiStepLR
Again, straight to the code.
Pay attention to the milestones argument: list the epochs at which you want the learning rate to drop. For example, to multiply the learning rate by 0.1 at epoch 5 and again at epoch 18, pass milestones=[5, 18].

import torch
import torchvision
import numpy as np

net = torchvision.models.resnet34(pretrained=False)
train_params = filter(lambda p: p.requires_grad, net.parameters())

optimizer = torch.optim.SGD(train_params, momentum=0.9, weight_decay=0.0001, lr=0.2)
# From epoch 5 the lr is 0.2*0.1, and from epoch 18 it is 0.2*0.1*0.1
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[5, 18], gamma=0.1)

Print the learning rate. From epoch 5 the lr drops to 0.2*0.1 = 0.02, and from epoch 18 to 0.2*0.1*0.1 = 0.002 (as in Example 1, the values printed at the milestone epochs themselves are off by one factor of gamma because get_lr() is called outside step()):

for epoch in range(30):
    #train
    print('lr: %.4f, epoch: %d'%(scheduler.get_lr()[0], epoch))
    scheduler.step()
lr: 0.2000, epoch: 0
lr: 0.2000, epoch: 1
lr: 0.2000, epoch: 2
lr: 0.2000, epoch: 3
lr: 0.2000, epoch: 4
lr: 0.0020, epoch: 5
lr: 0.0200, epoch: 6
lr: 0.0200, epoch: 7
lr: 0.0200, epoch: 8
lr: 0.0200, epoch: 9
lr: 0.0200, epoch: 10
lr: 0.0200, epoch: 11
lr: 0.0200, epoch: 12
lr: 0.0200, epoch: 13
lr: 0.0200, epoch: 14
lr: 0.0200, epoch: 15
lr: 0.0200, epoch: 16
lr: 0.0200, epoch: 17
lr: 0.0002, epoch: 18
lr: 0.0020, epoch: 19
lr: 0.0020, epoch: 20
lr: 0.0020, epoch: 21
lr: 0.0020, epoch: 22
lr: 0.0020, epoch: 23
lr: 0.0020, epoch: 24
lr: 0.0020, epoch: 25
lr: 0.0020, epoch: 26
lr: 0.0020, epoch: 27
lr: 0.0020, epoch: 28
lr: 0.0020, epoch: 29
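
Finally, the intro mentioned adjusting the learning rate based on whether the loss is still improving. The three schedulers above are purely epoch-based; for loss-based adjustment PyTorch provides ReduceLROnPlateau, which monitors a metric and multiplies the lr by factor once the metric has stopped improving for patience epochs. A minimal sketch follows; the validation loss here is faked just to trigger the decay, and the dummy parameter is a stand-in model.

import torch

# A single dummy parameter as a stand-in model
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.2)
# Multiply the lr by 0.1 when the monitored loss has not improved for 3 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=3)

for epoch in range(15):
    # ... train, then evaluate; here the fake loss stops improving after epoch 5 ...
    val_loss = max(1.0 - 0.1 * epoch, 0.5)
    scheduler.step(val_loss)   # unlike the schedulers above, step() takes the metric
    print('lr: %.4f, val_loss: %.2f, epoch: %d'
          % (optimizer.param_groups[0]['lr'], val_loss, epoch))

The lr stays at 0.2 while the loss keeps dropping and is cut by a factor of 10 once the loss has been flat for more than patience epochs.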
