Optimization Algorithms 3 & 4: Adagrad and RMSProp

Adagrad

Idea: if a parameter's gradient keeps being very large, shrink its learning rate to prevent oscillation; conversely, if its gradient stays small, enlarge its learning rate so the parameter can be updated faster.
Method: the learning rate is scaled according to the update rule below.

$$s_t = s_{t-1} + g_t^2, \qquad \theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{s_t} + \epsilon}\, g_t$$

The $\epsilon$ added to the denominator prevents division by zero and is usually set to $10^{-10}$.
For each parameter, initialize an accumulator s = 0; at every update step, add the squared gradient to s.
So the larger the gradients, the larger the accumulated s and the smaller the learning rate.
Drawback: late in training, the denominator keeps growing, the learning rate becomes very small, and the model can no longer converge well.
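
A minimal hand-written sketch of this update step, assuming one accumulator tensor per parameter (the function name sgd_adagrad and its arguments are illustrative, not from the original post):

import torch

def sgd_adagrad(parameters, sqrs, lr, eps=1e-10):
    # Accumulate the squared gradient into each parameter's state s,
    # then divide the learning rate by sqrt(s) + eps before updating.
    for param, sqr in zip(parameters, sqrs):
        sqr[:] = sqr + param.grad.data ** 2
        param.data = param.data - lr / (torch.sqrt(sqr) + eps) * param.grad.data

The accumulators would be created once before training, e.g. sqrs = [torch.zeros_like(p) for p in net.parameters()], and the function called after each loss.backward(). The training code below uses PyTorch's built-in torch.optim.Adagrad instead.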

import time

import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST

def data_tf(x):
    # Scale pixels to [0, 1], normalize to [-1, 1], then flatten to a 784-dim vector
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5
    x = x.reshape((-1,))
    x = torch.from_numpy(x)
    return x

train_set = MNIST('./data', train=True, transform=data_tf, download=True)
test_set = MNIST('./data', train=False, transform=data_tf, download=True)

criterion = nn.CrossEntropyLoss()
train_data = DataLoader(train_set, batch_size=64, shuffle=True)

net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10)
)

optimizer = torch.optim.Adagrad(net.parameters(), lr=1e-2)

losses = []
start = time.time()
idx = 0
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        # Forward pass and loss
        out = net(im)
        loss = criterion(out, label)
        # Backward pass and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        # Record the loss every 30 iterations for plotting later
        if idx % 30 == 0:
            losses.append(loss.item())
        idx += 1

    print('epoch: {}, Train Loss: {:.6f}'.format(e, train_loss / len(train_data)))
end = time.time()
print('Time: {:.5f} s'.format(end - start))
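
The losses recorded every 30 iterations can be plotted with the matplotlib import above; a minimal sketch (the log-scale plot and axis labels are my own choices, not from the original post):

# Plot the recorded training losses against an approximate epoch axis
x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='adagrad')
plt.xlabel('epoch')
plt.ylabel('train loss')
plt.legend(loc='best')
plt.show()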

RMSProp

To address Adagrad's drawback, RMSProp makes the following improvement:

$$s_t = \alpha\, s_{t-1} + (1 - \alpha)\, g_t^2, \qquad \theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{s_t} + \epsilon}\, g_t$$

It gives the historically accumulated s a weight $\alpha$, so that s does not become too large later in training.
The implementation is also simple, as sketched below.
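A minimal hand-written sketch of the RMSProp step, mirroring the Adagrad sketch above (the function name rmsprop, its arguments, and the default eps are illustrative, not from the original post):

import torch

def rmsprop(parameters, sqrs, lr, alpha, eps=1e-10):
    # Exponentially weighted moving average of the squared gradients,
    # then scale the step by 1 / (sqrt(s) + eps), matching the formula above.
    for param, sqr in zip(parameters, sqrs):
        sqr[:] = alpha * sqr + (1 - alpha) * param.grad.data ** 2
        param.data = param.data - lr / (torch.sqrt(sqr) + eps) * param.grad.data

With the built-in optimizer, the training code above only needs its optimizer line swapped, for example optimizer = torch.optim.RMSprop(net.parameters(), lr=1e-3, alpha=0.9), where lr=1e-3 and alpha=0.9 are illustrative values; RMSprop's alpha corresponds to the decay factor in the formula.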
