NNDL Experiment 5: Feedforward Neural Networks (2), Automatic Gradient Computation & Optimization Problems

 

Contents

4.3 Automatic Gradient Computation

4.3.1 Reimplementing the Feedforward Network with Predefined Operators

4.3.2 Improving the Runner Class

4.3.3 Model Training

4.3.4 Performance Evaluation

4.4 Optimization Problems

4.4.1 Parameter Initialization

4.4.2 Vanishing Gradient Problem

4.4.2.1 Model Construction

4.4.2.2 Training with the Sigmoid Activation

4.4.2.3 Training with the ReLU Activation

4.4.3 Dying ReLU Problem

4.4.3.1 Training with ReLU

4.4.3.2 Training with Leaky ReLU


4.3 Automatic Gradient Computation


Although modular composition lets us assemble a neural network fairly cleanly, deriving the gradient of every module by hand remains tedious and error-prone. Deep learning frameworks already ship automatic gradient computation, so we can focus on the model architecture instead of computing gradients ourselves.

Paddle provides the paddle.nn.Layer class for quickly implementing custom layers and models. Both models and layers extend paddle.nn.Layer; a model is just a special kind of layer. An operator that inherits from paddle.nn.Layer can directly call other operators that also inherit from paddle.nn.Layer; the framework automatically recognizes the nested paddle.nn.Layer operators, computes their gradients, and updates their parameters during optimization.

nn.Module is PyTorch's counterpart: it provides the machinery for defining layers, the forward computation, and backpropagation. To implement a model, subclass nn.Module, define the model structure and parameters in the constructor __init__(), and write the forward pass in forward(). nn.Module then uses the Autograd mechanism to perform backpropagation automatically; no manual implementation is needed.
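As a minimal sketch of this pattern (TinyNet and its layer sizes are invented for illustration), subclassing nn.Module and defining forward() is all that is needed; calling backward() then fills in .grad for every registered parameter:

```python
import torch
import torch.nn as nn

# Minimal nn.Module: layers are declared in __init__, the forward pass in forward().
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc(x))

net = TinyNet()
x = torch.randn(4, 3)        # a batch of 4 samples with 3 features each
loss = net(x).sum()
loss.backward()              # Autograd computes all parameter gradients

# No manual backward pass was written, yet every parameter has a gradient.
grads_ok = all(p.grad is not None for p in net.parameters())
```

The same mechanism is what lets the models below skip any hand-written gradient code.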

4.3.1 Reimplementing the Feedforward Network with Predefined Operators

1. Reimplement the binary classification task using PyTorch's predefined operators.

import torch
import torch.nn as nn
from torch.nn.init import normal_, constant_

class Model_MLP_L2_V2(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Model_MLP_L2_V2, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        normal_(self.fc1.weight, mean=0., std=1.)
        constant_(self.fc1.bias, val=0.0)
        self.fc2 = nn.Linear(hidden_size, output_size)
        normal_(self.fc2.weight, mean=0., std=1.)
        constant_(self.fc2.bias, val=0.0)
        self.act_fn = torch.sigmoid

    # Forward pass
    def forward(self, inputs):
        z1 = self.fc1(inputs)
        a1 = self.act_fn(z1)
        z2 = self.fc2(a1)
        a2 = self.act_fn(z2)
        return a2

2. Add another hidden layer with 3 neurons, run the binary classification again, and compare the result with 1.


class Model_MLP_L2_V4(torch.nn.Module):
    def __init__(self, input_size, hidden_size, hidden_size2, output_size):
        super(Model_MLP_L2_V4, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        w1 = torch.normal(0, 0.1, size=(hidden_size, input_size))
        self.fc1.weight = nn.Parameter(w1)

        self.fc2 = nn.Linear(hidden_size, hidden_size2)
        w2 = torch.normal(0, 0.1, size=(hidden_size2, hidden_size))
        self.fc2.weight = nn.Parameter(w2)

        self.fc3 = nn.Linear(hidden_size2, output_size)
        w3 = torch.normal(0, 0.1, size=(output_size, hidden_size2))
        self.fc3.weight = nn.Parameter(w3)

        # Use torch.sigmoid as the Logistic activation function
        self.act_fn = torch.sigmoid

    # Forward pass
    def forward(self, inputs):
        z1 = self.fc1(inputs.to(torch.float32))
        a1 = self.act_fn(z1)
        z2 = self.fc2(a1)
        a2 = self.act_fn(z2)
        z3 = self.fc3(a2)
        a3 = self.act_fn(z3)
        return a3

# Model setup
input_size = 2
hidden_size = 5
hidden_size2 = 3
output_size = 1
model = Model_MLP_L2_V4(input_size=input_size, hidden_size=hidden_size,
                        hidden_size2=hidden_size2, output_size=output_size)

Output:

[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.50625
[Train] epoch: 0/2000, loss: 0.7695772051811218
[Evaluate] best accuracy performence has been updated: 0.50625 --> 0.53750
[Evaluate] best accuracy performence has been updated: 0.53750 --> 0.55625
[Evaluate] best accuracy performence has been updated: 0.55625 --> 0.58125
[Evaluate] best accuracy performence has been updated: 0.58125 --> 0.60625
[Evaluate] best accuracy performence has been updated: 0.60625 --> 0.62500
[Evaluate] best accuracy performence has been updated: 0.62500 --> 0.63750
[Evaluate] best accuracy performence has been updated: 0.63750 --> 0.65000
[Evaluate] best accuracy performence has been updated: 0.65000 --> 0.66875
[Evaluate] best accuracy performence has been updated: 0.66875 --> 0.68125
[Evaluate] best accuracy performence has been updated: 0.68125 --> 0.69375
[Evaluate] best accuracy performence has been updated: 0.69375 --> 0.71250
[Evaluate] best accuracy performence has been updated: 0.71250 --> 0.73125
[Evaluate] best accuracy performence has been updated: 0.73125 --> 0.75000
[Evaluate] best accuracy performence has been updated: 0.75000 --> 0.78125
[Evaluate] best accuracy performence has been updated: 0.78125 --> 0.79375
[Train] epoch: 50/2000, loss: 0.6851401925086975
[Evaluate] best accuracy performence has been updated: 0.79375 --> 0.80000
[Evaluate] best accuracy performence has been updated: 0.80000 --> 0.80625
[Evaluate] best accuracy performence has been updated: 0.80625 --> 0.81250
[Evaluate] best accuracy performence has been updated: 0.81250 --> 0.82500
[Evaluate] best accuracy performence has been updated: 0.82500 --> 0.83125
[Evaluate] best accuracy performence has been updated: 0.83125 --> 0.83750
[Evaluate] best accuracy performence has been updated: 0.83750 --> 0.84375
[Train] epoch: 100/2000, loss: 0.6518415212631226
[Evaluate] best accuracy performence has been updated: 0.84375 --> 0.85000
[Evaluate] best accuracy performence has been updated: 0.85000 --> 0.85625
[Evaluate] best accuracy performence has been updated: 0.85625 --> 0.86250
[Evaluate] best accuracy performence has been updated: 0.86250 --> 0.86875
[Evaluate] best accuracy performence has been updated: 0.86875 --> 0.87500
[Evaluate] best accuracy performence has been updated: 0.87500 --> 0.88125
[Evaluate] best accuracy performence has been updated: 0.88125 --> 0.88750
[Train] epoch: 150/2000, loss: 0.6065284609794617
[Train] epoch: 200/2000, loss: 0.5372008085250854
[Train] epoch: 250/2000, loss: 0.4569132328033447
[Train] epoch: 300/2000, loss: 0.39426156878471375
[Evaluate] best accuracy performence has been updated: 0.88750 --> 0.89375
[Train] epoch: 350/2000, loss: 0.35365113615989685
[Train] epoch: 400/2000, loss: 0.3273332417011261
[Evaluate] best accuracy performence has been updated: 0.89375 --> 0.90000
[Train] epoch: 450/2000, loss: 0.309579461812973
[Evaluate] best accuracy performence has been updated: 0.90000 --> 0.90625
[Train] epoch: 500/2000, loss: 0.29726406931877136
[Train] epoch: 550/2000, loss: 0.2885670065879822
[Train] epoch: 600/2000, loss: 0.28232163190841675
[Train] epoch: 650/2000, loss: 0.2777450382709503
[Train] epoch: 700/2000, loss: 0.2743062973022461
[Train] epoch: 750/2000, loss: 0.27164602279663086
[Train] epoch: 800/2000, loss: 0.2695213258266449
[Train] epoch: 850/2000, loss: 0.2677682042121887
[Train] epoch: 900/2000, loss: 0.266275018453598
[Train] epoch: 950/2000, loss: 0.2649657428264618
[Train] epoch: 1000/2000, loss: 0.26378774642944336
[Train] epoch: 1050/2000, loss: 0.2627045810222626
[Train] epoch: 1100/2000, loss: 0.26169055700302124
[Train] epoch: 1150/2000, loss: 0.26072728633880615
[Train] epoch: 1200/2000, loss: 0.25980144739151
[Train] epoch: 1250/2000, loss: 0.25890326499938965
[Train] epoch: 1300/2000, loss: 0.25802522897720337
[Train] epoch: 1350/2000, loss: 0.2571617066860199
[Train] epoch: 1400/2000, loss: 0.256307989358902
[Train] epoch: 1450/2000, loss: 0.25546035170555115
[Train] epoch: 1500/2000, loss: 0.2546156048774719
[Train] epoch: 1550/2000, loss: 0.25377076864242554
[Train] epoch: 1600/2000, loss: 0.2529231011867523
[Train] epoch: 1650/2000, loss: 0.252069890499115
[Train] epoch: 1700/2000, loss: 0.25120818614959717
[Train] epoch: 1750/2000, loss: 0.25033479928970337
[Train] epoch: 1800/2000, loss: 0.24944618344306946
[Train] epoch: 1850/2000, loss: 0.24853801727294922
[Train] epoch: 1900/2000, loss: 0.2476053237915039
[Train] epoch: 1950/2000, loss: 0.2466421127319336
[Evaluate] best accuracy performence has been updated: 0.90625 --> 0.91250

[Figure 1]

After adding the extra 3-neuron hidden layer, the network needs more training epochs to reach its best accuracy.

4.3.2 Improving the Runner Class

Building on the RunnerV2_1 class from the previous section, RunnerV2_2 uses automatic gradient computation during training; it saves model parameters with state_dict and loads them with load_state_dict.
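As a minimal sketch of this save/load round trip (the throwaway nn.Linear and file name here are placeholders, not the experiment's model):

```python
import torch
import torch.nn as nn

# Save: state_dict() returns an ordered dict of parameter tensors.
model = nn.Linear(2, 1)
torch.save(model.state_dict(), "tmp_model.pdparams")

# Load: a fresh module with the same architecture restores those tensors.
restored = nn.Linear(2, 1)
restored.load_state_dict(torch.load("tmp_model.pdparams"))

# The restored copy reproduces the original parameters exactly.
same = (torch.equal(model.weight, restored.weight)
        and torch.equal(model.bias, restored.bias))
```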

class RunnerV2_2(object):
    def __init__(self, model, optimizer, metric, loss_fn, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric
 
        # Track the evaluation metric during training
        self.train_scores = []
        self.dev_scores = []
 
        # Track the loss during training
        self.train_loss = []
        self.dev_loss = []
 
    def train(self, train_set, dev_set, **kwargs):
        # Switch the model to training mode
        self.model.train()
        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_epochs = kwargs.get("log_epochs", 100)
        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")
        # Custom logging function; defaults to None if not given
        custom_print_log = kwargs.get("custom_print_log", None)
        # Best score seen so far
        best_score = 0
        # Run num_epochs training epochs
        for epoch in range(num_epochs):
            X, y = train_set
            # Model predictions
            logits = self.model(X)
            # Cross-entropy loss
            trn_loss = self.loss_fn(logits, y)
            self.train_loss.append(trn_loss.item())
            # Evaluation metric
            trn_score = self.metric(logits, y).item()
            self.train_scores.append(trn_score)
 
            # Compute parameter gradients automatically
            trn_loss.backward()
            if custom_print_log is not None:
                # Print per-layer information (e.g. gradients)
                custom_print_log(self)
 
            # Update parameters
            self.optimizer.step()
            # Clear gradients
            self.optimizer.zero_grad()
 
            dev_score, dev_loss = self.evaluate(dev_set)
            # Save the model whenever the dev score improves
            if dev_score > best_score:
                self.save_model(save_path)
                print(f"[Evaluate] best accuracy performance has been updated: {best_score:.5f} --> {dev_score:.5f}")
                best_score = dev_score
 
            if log_epochs and epoch % log_epochs == 0:
                print(f"[Train] epoch: {epoch}/{num_epochs}, loss: {trn_loss.item()}")

    # Evaluation: use torch.no_grad() so no gradients are computed or stored
    @torch.no_grad()
    def evaluate(self, data_set):
        # Switch the model to evaluation mode
        self.model.eval()
        X, y = data_set
        # Model outputs
        logits = self.model(X)
        # Loss
        loss = self.loss_fn(logits, y).item()
        self.dev_loss.append(loss)
        # Evaluation metric
        score = self.metric(logits, y).item()
        self.dev_scores.append(score)
        return score, loss
 
    # Prediction: use torch.no_grad() so no gradients are computed or stored
    @torch.no_grad()
    def predict(self, X):
        # Switch the model to evaluation mode
        self.model.eval()
        return self.model(X)
 
    # Save the model parameters obtained via model.state_dict()
    def save_model(self, saved_path):
        torch.save(self.model.state_dict(), saved_path)
 
    # Load the model parameters via model.load_state_dict()
    def load_model(self, model_path):
        state_dict = torch.load(model_path)
        self.model.load_state_dict(state_dict)

4.3.3 Model Training

Instantiate the RunnerV2_2 class with the training configuration, then visualize how the training- and dev-set accuracy evolves during training.

# Model setup
input_size = 2
hidden_size = 5
output_size = 1
model = Model_MLP_L2_V2(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
 
# Loss function
loss_fn = F.binary_cross_entropy
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.2)
# Evaluation metric
metric = accuracy
# Other settings
epoch_num = 1000
saved_path = 'best_model.pdparams'
# Instantiate RunnerV2_2 with the training configuration
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=epoch_num, log_epochs=50, save_path="best_model.pdparams")

# Visualize the metric curves on the training and dev sets
def plot(runner, fig_name):
    plt.figure(figsize=(10, 5))
    epochs = [i for i in range(len(runner.train_scores))]
 
    plt.subplot(1, 2, 1)
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axis labels and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
 
    plt.subplot(1, 2, 2)
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axis labels and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.savefig(fig_name)
    plt.show()
 
plot(runner, 'fw-acc.pdf')

Output:

[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.21875
[Train] epoch: 0/1000, loss: 0.7022157311439514
[Evaluate] best accuracy performence has been updated: 0.21875 --> 0.26250
[Evaluate] best accuracy performence has been updated: 0.26250 --> 0.31875
[Evaluate] best accuracy performence has been updated: 0.31875 --> 0.43750
[Evaluate] best accuracy performence has been updated: 0.43750 --> 0.48750
[Evaluate] best accuracy performence has been updated: 0.48750 --> 0.52500
[Evaluate] best accuracy performence has been updated: 0.52500 --> 0.53125
[Evaluate] best accuracy performence has been updated: 0.53125 --> 0.54375
[Evaluate] best accuracy performence has been updated: 0.54375 --> 0.55625
[Evaluate] best accuracy performence has been updated: 0.55625 --> 0.57500
[Evaluate] best accuracy performence has been updated: 0.57500 --> 0.59375
[Evaluate] best accuracy performence has been updated: 0.59375 --> 0.60625
[Evaluate] best accuracy performence has been updated: 0.60625 --> 0.63125
[Evaluate] best accuracy performence has been updated: 0.63125 --> 0.66875
[Evaluate] best accuracy performence has been updated: 0.66875 --> 0.68125
[Evaluate] best accuracy performence has been updated: 0.68125 --> 0.71875
[Evaluate] best accuracy performence has been updated: 0.71875 --> 0.72500
[Evaluate] best accuracy performence has been updated: 0.72500 --> 0.75000
[Evaluate] best accuracy performence has been updated: 0.75000 --> 0.75625
[Evaluate] best accuracy performence has been updated: 0.75625 --> 0.76875
[Evaluate] best accuracy performence has been updated: 0.76875 --> 0.78125
[Evaluate] best accuracy performence has been updated: 0.78125 --> 0.80000
[Evaluate] best accuracy performence has been updated: 0.80000 --> 0.81250
[Evaluate] best accuracy performence has been updated: 0.81250 --> 0.81875
[Evaluate] best accuracy performence has been updated: 0.81875 --> 0.82500
[Train] epoch: 50/1000, loss: 0.6558495759963989
[Train] epoch: 100/1000, loss: 0.5948771238327026
[Train] epoch: 150/1000, loss: 0.5388158559799194
[Train] epoch: 200/1000, loss: 0.5058477520942688
[Train] epoch: 250/1000, loss: 0.4894803464412689
[Train] epoch: 300/1000, loss: 0.4813789427280426
[Train] epoch: 350/1000, loss: 0.47720932960510254
[Train] epoch: 400/1000, loss: 0.47499004006385803
[Train] epoch: 450/1000, loss: 0.4737810492515564
[Train] epoch: 500/1000, loss: 0.4731082022190094
[Train] epoch: 550/1000, loss: 0.47272247076034546
[Train] epoch: 600/1000, loss: 0.4724907875061035
[Train] epoch: 650/1000, loss: 0.4723418354988098
[Train] epoch: 700/1000, loss: 0.4722374379634857
[Train] epoch: 750/1000, loss: 0.47215747833251953
[Train] epoch: 800/1000, loss: 0.4720911979675293
[Train] epoch: 850/1000, loss: 0.47203296422958374
[Train] epoch: 900/1000, loss: 0.4719797670841217
[Train] epoch: 950/1000, loss: 0.4719299376010895

[Figure 2]

4.3.4 Performance Evaluation

Evaluate the best model on the test set, and observe its accuracy and loss.

# Model evaluation
runner.load_model("best_model.pdparams")
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))

Output:

[Test] score/loss: 0.7600/0.4853


[Question]

Compare hand-written gradient computation with automatic gradient computation in terms of performance, correctness, and other aspects, and give your own view.

With hand-written gradients, deriving each derivative and translating it into a computer program is complex and error-prone, which makes implementing neural networks inefficient. With automatic gradient computation, the framework's functions compute the gradients on their own with no manual intervention, so both correctness and efficiency improve substantially.
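This can be checked on a one-parameter toy example (the values 2.0 and 0.5 are arbitrary): for y = sigmoid(w * x), the hand-derived gradient is dy/dw = x * s * (1 - s) with s = sigmoid(w * x), and Autograd reproduces it exactly:

```python
import torch

x = torch.tensor(2.0)
w = torch.tensor(0.5, requires_grad=True)

# Automatic gradient
y = torch.sigmoid(w * x)
y.backward()
auto = w.grad

# Hand-derived gradient: dy/dw = x * s * (1 - s)
s = torch.sigmoid(w * x).detach()
manual = x * s * (1 - s)
```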

4.4 Optimization Problems


4.4.1 Parameter Initialization


Before training a neural network, the model parameters must be initialized.

If every layer's weights and biases are initialized to 0, then after the first forward pass all hidden units have identical activations; during backpropagation all weights also receive identical updates, so the hidden units never differentiate from one another. This is the symmetric-weights problem. Next, initialize all parameters to 0 and observe the result. Here the model is redefined (reusing the name Model_MLP_L2_V4) with both linear layers initialized entirely to 0.
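The symmetry can be demonstrated on a throwaway two-layer net (not the experiment's model): with all parameters zero, every hidden unit produces the same activation, so the gradients it induces are indistinguishable.

```python
import torch
import torch.nn as nn

fc1 = nn.Linear(2, 5)
fc2 = nn.Linear(5, 1)
for layer in (fc1, fc2):
    nn.init.constant_(layer.weight, 0.0)
    nn.init.constant_(layer.bias, 0.0)

x = torch.randn(8, 2)
out = torch.sigmoid(fc2(torch.sigmoid(fc1(x))))
out.sum().backward()

# Every hidden unit had the same activation (sigmoid(0) = 0.5), so each column
# of fc2's weight gradient is identical; fc1 gets no gradient through zero fc2.
g2 = fc2.weight.grad
cols_identical = torch.allclose(g2, g2[0, 0].expand_as(g2))
fc1_zero = bool(torch.all(fc1.weight.grad == 0))
```

An SGD step on such gradients keeps all hidden units equal, so the symmetry never breaks.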

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import torch.nn.functional as F


# Define the multi-layer feedforward network, with all parameters initialized to zero
class Model_MLP_L2_V4(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Model_MLP_L2_V4, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        nn.init.constant_(self.fc1.weight, val=0.0)
        nn.init.constant_(self.fc1.bias, val=0.0)
        self.fc2 = nn.Linear(hidden_size, output_size)
        nn.init.constant_(self.fc2.weight, val=0.0)
        nn.init.constant_(self.fc2.bias, val=0.0)
        # Use torch.sigmoid as the Logistic activation function
        self.act_fn = torch.sigmoid

    # Forward pass
    def forward(self, inputs):
        z1 = self.fc1(inputs.to(torch.float32))
        a1 = self.act_fn(z1)
        z2 = self.fc2(a1)
        a2 = self.act_fn(z2)
        return a2

def print_weights(runner):
    print('The weights of the Layers:')
    for item in runner.model.named_parameters():
        print(item)

Train the model with the Runner class:

# Model setup
input_size = 2
hidden_size = 5
output_size = 1
model = Model_MLP_L2_V4(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
 
# Loss function
loss_fn = F.binary_cross_entropy
 
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.2)
 
# Evaluation metric
metric = accuracy
 
# Other settings
saved_path = 'best_model.pdparams'
# Instantiate RunnerV2_2 with the training configuration
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
 
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=5, log_epochs=50, save_path="best_model.pdparams", custom_print_log=print_weights)

Output:

The weights of the Layers:
('fc1.weight', Parameter containing:
tensor([[ 0.4618, -0.2339],
        [-0.5633,  0.3300],
        [-0.6991, -0.2421],
        [ 0.1939, -0.0767],
        [-0.0565,  0.4028]], requires_grad=True))
('fc1.bias', Parameter containing:
tensor([0.2812, 0.5646, 0.1304, 0.3827, 0.0918], requires_grad=True))
('fc2.weight', Parameter containing:
tensor([[ 0.0198,  0.0295, -0.1418,  0.4028, -0.2293]], requires_grad=True))
('fc2.bias', Parameter containing:
tensor([-0.3413], requires_grad=True))
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.41250
[Train] epoch: 0/5, loss: 0.6785968542098999
The weights of the Layers:
('fc1.weight', Parameter containing:
tensor([[ 0.4620, -0.2341],
        [-0.5630,  0.3297],
        [-0.7005, -0.2408],
        [ 0.1988, -0.0802],
        [-0.0596,  0.4048]], requires_grad=True))
('fc1.bias', Parameter containing:
tensor([0.2812, 0.5647, 0.1303, 0.3831, 0.0914], requires_grad=True))
('fc2.weight', Parameter containing:
tensor([[ 0.0312,  0.0240, -0.1446,  0.4098, -0.2303]], requires_grad=True))
('fc2.bias', Parameter containing:
tensor([-0.3347], requires_grad=True))
The weights of the Layers:
('fc1.weight', Parameter containing:
tensor([[ 0.4623, -0.2343],
        [-0.5628,  0.3296],
        [-0.7020, -0.2395],
        [ 0.2036, -0.0839],
        [-0.0626,  0.4068]], requires_grad=True))
('fc1.bias', Parameter containing:
tensor([0.2812, 0.5647, 0.1302, 0.3834, 0.0910], requires_grad=True))
('fc2.weight', Parameter containing:
tensor([[ 0.0421,  0.0181, -0.1477,  0.4166, -0.2316]], requires_grad=True))
('fc2.bias', Parameter containing:
tensor([-0.3288], requires_grad=True))
The weights of the Layers:
('fc1.weight', Parameter containing:
tensor([[ 0.4627, -0.2347],
        [-0.5626,  0.3294],
        [-0.7034, -0.2383],
        [ 0.2085, -0.0876],
        [-0.0656,  0.4088]], requires_grad=True))
('fc1.bias', Parameter containing:
tensor([0.2812, 0.5648, 0.1301, 0.3836, 0.0906], requires_grad=True))
('fc2.weight', Parameter containing:
tensor([[ 0.0527,  0.0120, -0.1511,  0.4231, -0.2333]], requires_grad=True))
('fc2.bias', Parameter containing:
tensor([-0.3234], requires_grad=True))
The weights of the Layers:
('fc1.weight', Parameter containing:
tensor([[ 0.4633, -0.2352],
        [-0.5624,  0.3293],
        [-0.7048, -0.2369],
        [ 0.2135, -0.0914],
        [-0.0686,  0.4108]], requires_grad=True))
('fc1.bias', Parameter containing:
tensor([0.2812, 0.5648, 0.1301, 0.3838, 0.0902], requires_grad=True))
('fc2.weight', Parameter containing:
tensor([[ 0.0630,  0.0056, -0.1547,  0.4293, -0.2353]], requires_grad=True))
('fc2.bias', Parameter containing:
tensor([-0.3185], requires_grad=True))

Visualize the accuracy and loss on the training and dev sets:

plot(runner, "fw-zero.pdf")

Output: [Figure 3]

From the output, the binary classification accuracy stays around 50%, meaning the model has learned nothing, and the training and dev losses barely decrease. To avoid the symmetric-weights problem, initialize the network parameters from a Gaussian or uniform distribution instead.

4.4.2 Vanishing Gradient Problem

As a network gets deeper, its capacity to fit data should in theory keep improving. In practice, however, deeper networks are harder to train, because the vanishing gradient problem tends to appear.

Because the Sigmoid function saturates, its derivative is close to 0 in the saturated regions, so the error signal decays as it passes through each layer. When the network is very deep, the gradient keeps shrinking and may effectively vanish, making the whole network hard to train; this is the vanishing gradient problem.
Among the many ways to mitigate it in deep networks, a simple and effective one is to use an activation function with a larger derivative, such as ReLU.
4.4.2.1 Model Construction

# Define the multi-layer feedforward network
class Model_MLP_L5(nn.Module):
    def __init__(self, input_size, output_size, act='sigmoid',
                 w_init=(0.0, 0.01), b_init=1.0):
        super(Model_MLP_L5, self).__init__()
        self.fc1 = torch.nn.Linear(input_size, 3)
        self.fc2 = torch.nn.Linear(3, 3)
        self.fc3 = torch.nn.Linear(3, 3)
        self.fc4 = torch.nn.Linear(3, 3)
        self.fc5 = torch.nn.Linear(3, output_size)
        # Choose the activation function used by the network
        if act == 'sigmoid':
            self.act = torch.sigmoid
        elif act == 'relu':
            self.act = F.relu
        elif act == 'lrelu':
            self.act = F.leaky_relu
        else:
            raise ValueError("Please enter sigmoid, relu or lrelu!")
        # Initialize the weights and biases of the linear layers
        self.init_weights(w_init, b_init)
 
    # Initialize the linear layers: weights from N(mean, std) given by w_init,
    # biases to the constant b_init
    def init_weights(self, w_init, b_init):
        # Iterate over all sub-modules and initialize the linear layers
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, mean=w_init[0], std=w_init[1])
                nn.init.constant_(m.bias, b_init)
 
    def forward(self, inputs):
        outputs = self.fc1(inputs)
        outputs = self.act(outputs)
        outputs = self.fc2(outputs)
        outputs = self.act(outputs)
        outputs = self.fc3(outputs)
        outputs = self.act(outputs)
        outputs = self.fc4(outputs)
        outputs = self.act(outputs)
        outputs = self.fc5(outputs)
        outputs = torch.sigmoid(outputs)
        return outputs

4.4.2.2 Training with the Sigmoid Activation

Define the gradient-printing function:

def print_grads(runner):
    # Print the L2 norm of each layer's weight gradient
    print('The gradient of the Layers:')
    for name, item in runner.model.named_parameters():
        if len(item.size()) == 2 and item.grad is not None:
            print(name, torch.norm(item.grad, p=2))

torch.manual_seed(102)
# Learning rate
lr = 0.01
 
# Define the network with the sigmoid activation
model = Model_MLP_L5(input_size=2, output_size=1, act='sigmoid')
 
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr)
 
# Loss function: cross-entropy
loss_fn = F.binary_cross_entropy
 
# Evaluation metric
metric = accuracy
 
# Gradient-printing function
custom_print_log = print_grads

Instantiate the Runner class with the training configuration:

runner = RunnerV2_2(model, optimizer, metric, loss_fn)

Train the model and print the ℓ2 norm of each layer's gradient:

runner.train([X_train, y_train], [X_dev, y_dev], 
            num_epochs=1, log_epochs=None, 
            save_path="best_model.pdparams", 
            custom_print_log=custom_print_log)

Output:

The gradient of the Layers:
fc1.weight tensor(1.0447)
fc2.weight tensor(1.2803)
fc3.weight tensor(0.8694)
fc4.weight tensor(1.0071)
fc5.weight tensor(0.5389)
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.53125

From the results, the gradient decays as it passes through each layer; by the time it reaches the first layer, it has almost completely vanished.

4.4.2.3 Training with the ReLU Activation

torch.manual_seed(102)
lr = 0.01  # learning rate
 
# Define the network with the relu activation
model = Model_MLP_L5(input_size=2, output_size=1, act='relu')
 
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr)
 
# Loss function: cross-entropy
loss_fn = F.binary_cross_entropy
 
# Evaluation metric
metric = accuracy
 
# Instantiate the Runner
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
 
# Start training
runner.train([X_train, y_train], [X_dev, y_dev],
            num_epochs=1, log_epochs=None,
            save_path="best_model.pdparams",
            custom_print_log=custom_print_log)

Output:

The gradient of the Layers:
fc1.weight tensor(0.8176)
fc2.weight tensor(0.9802)
fc3.weight tensor(0.9874)
fc4.weight tensor(1.0451)
fc5.weight tensor(0.4850)
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.53125

The figure below shows the ℓ2 norm of each layer's gradient under the two activation functions. With Sigmoid as the activation of the 5-layer fully connected feedforward network, the gradient decays at every layer and has almost completely vanished by the first layer. After switching to ReLU, the vanishing is alleviated and every layer retains a usable gradient.

[Figure 4]

4.4.3 Dying ReLU Problem


ReLU can alleviate the vanishing gradient problem to some extent, but in certain situations it suffers from the dying ReLU problem, which makes the network hard to train.

The cause is that ReLU outputs a constant 0 when x < 0. If one ill-chosen parameter update during training leaves a ReLU neuron inactive (output 0) on every training example, then the gradient of that neuron's own parameters stays 0 forever and the neuron can never be activated again.

A simple and effective remedy is to switch the activation to a ReLU variant such as Leaky ReLU or ELU.
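A minimal sketch on toy pre-activation values (unrelated to the experiment's model): when every pre-activation is negative, ReLU blocks the gradient entirely, while Leaky ReLU still passes a small one.

```python
import torch
import torch.nn.functional as F

# A "dead" ReLU: all pre-activations negative -> output 0, gradient 0.
z = torch.tensor([-2.0, -1.0, -0.5], requires_grad=True)
F.relu(z).sum().backward()
relu_grad = z.grad.clone()              # all zeros

# Leaky ReLU keeps a small slope for x < 0, so the gradient survives.
z2 = torch.tensor([-2.0, -1.0, -0.5], requires_grad=True)
F.leaky_relu(z2, negative_slope=0.01).sum().backward()
lrelu_grad = z2.grad.clone()            # 0.01 everywhere
```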


4.4.3.1 Training with ReLU

# Define the network, initializing the biases to a fairly large negative value
model = Model_MLP_L5(input_size=2, output_size=1, act='relu', b_init=-0.8)

Instantiate RunnerV2_2, start training, and print the ℓ2 norm of each layer's gradient.

# Instantiate the Runner class
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
 
# Start training
runner.train([X_train, y_train], [X_dev, y_dev],
            num_epochs=1, log_epochs=0,
            save_path="best_model.pdparams",
            custom_print_log=custom_print_log)

Output:

The gradient of the Layers:
linear_14 0.0
linear_15 0.0
linear_16 0.0
linear_17 0.0
linear_18 0.0
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.53750

The output shows that with ReLU as the activation, once the triggering condition is met the dying ReLU problem occurs: the ReLU neurons' gradients stay at 0 throughout training, and the parameters cannot be updated. A simple and effective fix is to replace the activation with a ReLU variant such as Leaky ReLU or ELU. Next, we switch the activation to Leaky ReLU and observe the gradients.


4.4.3.2 Training with Leaky ReLU

Switch the activation to Leaky ReLU, retrain the model, and observe the gradients.

# Redefine the network with the Leaky ReLU activation
model = Model_MLP_L5(input_size=2, output_size=1, act='lrelu', b_init=-0.8)
 
# Instantiate the Runner class
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
 
# Start training
runner.train([X_train, y_train], [X_dev, y_dev],
            num_epochs=1, log_epochs=None,
            save_path="best_model.pdparams",
            custom_print_log=custom_print_log)

Output:

The gradient of the Layers:
fc1.weight tensor(0.7548)
fc2.weight tensor(1.1612)
fc3.weight tensor(1.0495)
fc4.weight tensor(1.0805)
fc5.weight tensor(0.5799)
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.4965
[Train] epoch: 0/1, loss: 0.7061845328474692

The output shows that after switching to Leaky ReLU, the dying ReLU problem is alleviated: the gradients are restored and the parameters update normally. However, because Leaky ReLU's slope for x < 0 defaults to only 0.01, the gradient still shrinks layer by layer as the network deepens during backpropagation. To counter this, simply increase Leaky ReLU's slope for x < 0.
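A quick sketch with toy values of how negative_slope sets the backward signal for x < 0:

```python
import torch
import torch.nn.functional as F

# Default slope 0.01: only 1% of the gradient passes for negative inputs.
x_small = torch.tensor([-1.0], requires_grad=True)
F.leaky_relu(x_small, negative_slope=0.01).sum().backward()

# A larger slope of 0.1 passes ten times as much gradient.
x_big = torch.tensor([-1.0], requires_grad=True)
F.leaky_relu(x_big, negative_slope=0.1).sum().backward()

small = x_small.grad.item()   # 0.01
big = x_big.grad.item()       # 0.1
```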
Reflections: in this experiment I learned to implement a feedforward network with automatic gradient computation and studied the optimization problems above; it also reinforced how some functions map between Paddle and PyTorch. The experiment was fairly challenging, and I still need to keep studying the material carefully.

References:

NNDL Experiment 4 (part 1) - HBU_DAVID - 博客园 (cnblogs.com)

NNDL Experiment 5: Feedforward Neural Networks (2), Automatic Gradient Computation & Optimization Problems

PyTorch custom layers and model classes
