Experiments on the Memory Capacity of LSTM

        

        

Contents

1 Model Construction

        1.1 The LSTM Layer

2 Model Training

        2.1 Training a digit-prediction model on sequences of a given length

        2.2 Training multiple models

        2.3 Plotting the loss curves

3 Model Evaluation

        3.1 Evaluating the model on the test set

        3.2 Accuracy on datasets of different sequence lengths

        3.3 Changes in the LSTM gate states and cell state

        Summary


        The Long Short-Term Memory (LSTM) network is a recurrent neural network that effectively mitigates the long-range dependency problem. LSTM introduces a new internal state $c \in \mathbb{R}^D$ together with a gating mechanism. The internal state is passed across time steps in an approximately linear way, which alleviates vanishing and exploding gradients, while the gating mechanism filters information and thereby effectively increases memory capacity. For example, the input gate lets the network ignore irrelevant inputs, and the forget gate lets it retain useful history. In the digit-sum task of the previous section, if the model can remember the first two non-zero digits while ignoring unimportant distractors, it can predict correctly even when the sequence is long.

        The internal structure of the LSTM recurrent unit at time step $t$ is shown in the figure below.

[Figure: structure of the LSTM recurrent unit]

        Assume an input sequence $\boldsymbol{X}\in \mathbb{R}^{B\times L\times M}$, where $B$ is the batch size, $L$ the sequence length, and $M$ the input feature dimension. The LSTM scans the sequence from left to right and, through the recurrent unit, updates the internal state $\boldsymbol{C}_{t} \in \mathbb{R}^{B \times D}$ and the output state $\boldsymbol{H}_{t} \in \mathbb{R}^{B \times D}$ at every time step.
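        As a quick illustration (a minimal sketch; the shapes below are arbitrary examples, not taken from the original), the batched input simply maps $(B, L, M)$ onto the tensor dimensions:

import torch

B, L, M = 8, 20, 32        # batch size, sequence length, input feature dimension
X = torch.randn(B, L, M)   # one batch of input sequences
X_t = X[:, 0, :]           # slice fed to the recurrent unit at step t=0, shape (B, M)
print(X.shape, X_t.shape)  # torch.Size([8, 20, 32]) torch.Size([8, 32])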

The computation proceeds in three steps:

(1) Compute the three gates

        At time step $t$, the LSTM recurrent unit combines the current input $\boldsymbol{X}_t \in \mathbb{R}^{B \times M}$ with the previous output state $\boldsymbol{H}_{t-1} \in \mathbb{R}^{B \times D}$ to compute the input gate $\boldsymbol{I}_t$, the forget gate $\boldsymbol{F}_t$, and the output gate $\boldsymbol{O}_t$:

\boldsymbol{I}_{t}=\sigma(\boldsymbol{X}_t\boldsymbol{W}_i+\boldsymbol{H}_{t-1}\boldsymbol{U}_i+\boldsymbol{b}_i) \in \mathbb{R}^{B \times D}\\ \boldsymbol{F}_{t}=\sigma(\boldsymbol{X}_t\boldsymbol{W}_f+\boldsymbol{H}_{t-1}\boldsymbol{U}_f+\boldsymbol{b}_f) \in \mathbb{R}^{B \times D}\\ \boldsymbol{O}_{t}=\sigma(\boldsymbol{X}_t\boldsymbol{W}_o+\boldsymbol{H}_{t-1}\boldsymbol{U}_o+\boldsymbol{b}_o) \in \mathbb{R}^{B \times D}

        where $\boldsymbol{W}_* \in \mathbb{R}^{M \times D},\boldsymbol{U}_* \in \mathbb{R}^{D \times D},\boldsymbol{b}_* \in \mathbb{R}^{D}$ are learnable parameters and $\sigma$ is the logistic function, which keeps the gate values in the interval $(0,1)$. Each "gate" here is a matrix of $B$ rows, one gate vector per sample.
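        A minimal sketch of this step in PyTorch (the variable names and random parameter values here are illustrative assumptions that mirror the equations, not part of the model code below):

import torch

B, M, D = 8, 32, 32
X_t = torch.randn(B, M)      # current input
H_prev = torch.zeros(B, D)   # previous output state
W_i, W_f, W_o = (torch.randn(M, D) for _ in range(3))
U_i, U_f, U_o = (torch.randn(D, D) for _ in range(3))
b_i, b_f, b_o = (torch.zeros(D) for _ in range(3))

# Each gate is a B x D matrix with entries in (0, 1)
I_t = torch.sigmoid(X_t @ W_i + H_prev @ U_i + b_i)
F_t = torch.sigmoid(X_t @ W_f + H_prev @ U_f + b_f)
O_t = torch.sigmoid(X_t @ W_o + H_prev @ U_o + b_o)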

(2) Compute the internal state

First, compute the candidate internal state:

\tilde{\boldsymbol{C}}_{t}=\tanh(\boldsymbol{X}_t\boldsymbol{W}_c+\boldsymbol{H}_{t-1}\boldsymbol{U}_c+\boldsymbol{b}_c) \in \mathbb{R}^{B \times D}

where $\boldsymbol{W}_c \in \mathbb{R}^{M \times D}, \boldsymbol{U}_c \in \mathbb{R}^{D \times D},\boldsymbol{b}_c \in \mathbb{R}^{D}$ are learnable parameters.

Then combine the forget gate and the input gate to compute the internal state at time $t$:

\boldsymbol{C}_{t} = \boldsymbol{F}_t \odot \boldsymbol{C}_{t-1} + \boldsymbol{I}_{t} \odot \boldsymbol{\tilde{C}}_{t}

where $\odot$ denotes the element-wise product.

(3) Compute the output state

The output state $\boldsymbol{H}_t$ is obtained by passing the cell state through a tanh nonlinearity and gating it with the output gate:

\boldsymbol{H}_{t} = \boldsymbol{O}_{t} \odot \tanh(\boldsymbol{C}_{t})

At each step, the LSTM recurrent unit takes the previous internal state $\boldsymbol{C}_{t-1} \in \mathbb{R}^{B \times D}$ and hidden state $\boldsymbol{H}_{t-1} \in \mathbb{R}^{B \times D}$ as input and produces the current internal state $\boldsymbol{C}_{t} \in \mathbb{R}^{B \times D}$ and hidden state $\boldsymbol{H}_{t} \in \mathbb{R}^{B \times D}$. Through this recurrent unit, the network can establish relatively long-range temporal dependencies.
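Continuing the sketch above (again with illustrative names; $\boldsymbol{W}_c$, $\boldsymbol{U}_c$, $\boldsymbol{b}_c$ and the previous cell state are assumed to be defined analogously), steps (2) and (3) complete one recurrent update:

W_c, U_c, b_c = torch.randn(M, D), torch.randn(D, D), torch.zeros(D)
C_prev = torch.zeros(B, D)                            # previous internal state

C_tilde = torch.tanh(X_t @ W_c + H_prev @ U_c + b_c)  # candidate internal state
C_t = F_t * C_prev + I_t * C_tilde                    # internal state (⊙ is elementwise `*`)
H_t = O_t * torch.tanh(C_t)                           # output state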

By learning the settings of these gates, the LSTM can selectively ignore or reinforce the current memory or input, helping the network better capture the semantics of long sequences.

In this section, we repeat the digit-sum experiment with an LSTM model to verify its ability to handle long-range dependencies.

1 Model Construction

        1.1 The LSTM Layer

        The LSTM layer's code is structurally similar to the SRN layer; on top of the SRN layer it adds the definitions and computations of the internal state and the input, forget, and output gates. As before, the LSTM layer's output is the hidden state vector at the last position of the sequence. The implementation is as follows:

import torch
import torch.nn.functional as F
from torch import nn


class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
                 Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
                 bo_attr=None, bc_attr=None):
        super(LSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size

        # Initialize model parameters
        if Wi_attr is None:
            Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wi = torch.tensor(Wi_attr, dtype=torch.float32)
        self.W_i = torch.nn.Parameter(Wi)

        if Wf_attr is None:
            Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wf = torch.tensor(Wf_attr, dtype=torch.float32)
        self.W_f = torch.nn.Parameter(Wf)

        if Wo_attr is None:
            Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wo = torch.tensor(Wo_attr, dtype=torch.float32)
        self.W_o = torch.nn.Parameter(Wo)

        if Wc_attr is None:
            Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wc = torch.tensor(Wc_attr, dtype=torch.float32)
        self.W_c = torch.nn.Parameter(Wc)

        if Ui_attr is None:
            Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Ui = torch.tensor(Ui_attr, dtype=torch.float32)
        self.U_i = torch.nn.Parameter(Ui)

        if Uf_attr is None:
            Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uf = torch.tensor(Uf_attr, dtype=torch.float32)
        self.U_f = torch.nn.Parameter(Uf)

        if Uo_attr is None:
            Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uo = torch.tensor(Uo_attr, dtype=torch.float32)
        self.U_o = torch.nn.Parameter(Uo)

        if Uc_attr is None:
            Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uc = torch.tensor(Uc_attr, dtype=torch.float32)
        self.U_c = torch.nn.Parameter(Uc)

        if bi_attr is None:
            bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bi = torch.tensor(bi_attr, dtype=torch.float32)
        self.b_i = torch.nn.Parameter(bi)

        if bf_attr is None:
            bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bf = torch.tensor(bf_attr, dtype=torch.float32)
        self.b_f = torch.nn.Parameter(bf)

        if bo_attr is None:
            bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bo = torch.tensor(bo_attr, dtype=torch.float32)
        self.b_o = torch.nn.Parameter(bo)

        if bc_attr is None:
            bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bc = torch.tensor(bc_attr, dtype=torch.float32)
        self.b_c = torch.nn.Parameter(bc)

    # Initialize the cell state and hidden state
    def init_state(self, batch_size):
        hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        return hidden_state, cell_state

    # Define the forward computation
    def forward(self, inputs, states=None):
        # inputs: input data of shape batch_size x seq_len x input_size
        batch_size, seq_len, input_size = inputs.shape

        # Initialize the starting cell state and hidden state, each of shape batch_size x hidden_size
        if states is None:
            states = self.init_state(batch_size)
        hidden_state, cell_state = states

        # Run the LSTM computation: input/forget/output gates, candidate state, cell state, and hidden state
        for step in range(seq_len):
            # Input at the current step, step_input, shape batch_size x input_size
            step_input = inputs[:, step, :]
            # Compute the input, forget, and output gates, each of shape batch_size x hidden_size
            I_gate = F.sigmoid(torch.matmul(step_input, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
            F_gate = F.sigmoid(torch.matmul(step_input, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
            O_gate = F.sigmoid(torch.matmul(step_input, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
            # Compute the candidate state vector, shape batch_size x hidden_size
            C_tilde = F.tanh(torch.matmul(step_input, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
            # Compute the cell state vector, shape batch_size x hidden_size
            cell_state = F_gate * cell_state + I_gate * C_tilde
            # Compute the hidden state vector, shape batch_size x hidden_size
            hidden_state = O_gate * F.tanh(cell_state)

        return hidden_state
Wi_attr = [[0.1, 0.2], [0.1, 0.2]]
Wf_attr = [[0.1, 0.2], [0.1, 0.2]]
Wo_attr = [[0.1, 0.2], [0.1, 0.2]]
Wc_attr = [[0.1, 0.2], [0.1, 0.2]]
Ui_attr = [[0.0, 0.1], [0.1, 0.0]]
Uf_attr = [[0.0, 0.1], [0.1, 0.0]]
Uo_attr = [[0.0, 0.1], [0.1, 0.0]]
Uc_attr = [[0.0, 0.1], [0.1, 0.0]]
bi_attr = [[0.1, 0.1]]
bf_attr = [[0.1, 0.1]]
bo_attr = [[0.1, 0.1]]
bc_attr = [[0.1, 0.1]]

lstm = LSTM(2, 2, Wi_attr=Wi_attr, Wf_attr=Wf_attr, Wo_attr=Wo_attr, Wc_attr=Wc_attr,
            Ui_attr=Ui_attr, Uf_attr=Uf_attr, Uo_attr=Uo_attr, Uc_attr=Uc_attr,
            bi_attr=bi_attr, bf_attr=bf_attr, bo_attr=bo_attr, bc_attr=bc_attr)

inputs = torch.as_tensor([[[1, 0]]], dtype=torch.float32)
hidden_state = lstm(inputs)
print(hidden_state)

        Output:
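        As a sanity check worked out by hand (not copied from the original output): with zero initial states and input $[1, 0]$, each gate pre-activation is $[0.2, 0.3]$, so all three gates equal $\sigma([0.2, 0.3]) \approx [0.550, 0.574]$ and the candidate state is $\tanh([0.2, 0.3]) \approx [0.197, 0.291]$. The cell state is then approximately $[0.109, 0.167]$, and the printed hidden state should be approximately $[[0.0594, 0.0952]]$.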

  • nn.LSTM
# Create a random tensor as test data, shape batch_size x seq_len x input_size
batch_size, seq_len, input_size = 8, 20, 32
inputs = torch.randn(size=[batch_size, seq_len, input_size])

# Set the model's hidden_size; batch_first=True makes nn.LSTM accept batch-first inputs like ours
hidden_size = 32
torch_lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
self_lstm = LSTM(input_size, hidden_size)

self_hidden_state = self_lstm(inputs)
torch_outputs, (torch_hidden_state, torch_cell_state) = torch_lstm(inputs)

print("self_lstm hidden_state: ", self_hidden_state.shape)
print("torch_lstm outpus:", torch_outputs.shape)
print("torch_lstm hidden_state:", torch_hidden_state.shape)
print("torch_lstm cell_state:", torch_cell_state.shape)

        Output:
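        With batch_first=True as above, the expected shapes are: self_lstm hidden_state [8, 32]; torch_lstm outputs [8, 20, 32]; and torch_lstm hidden and cell states [1, 8, 32] (nn.LSTM prepends a num_layers dimension).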

        Compare the output of the handwritten LSTM with nn.LSTM. The code is as follows:

import torch
torch.seed()

# Create a random tensor as test data, shape batch_size x seq_len x input_size
batch_size, seq_len, input_size, hidden_size = 2, 5, 10, 10
inputs = torch.randn([batch_size, seq_len, input_size])

# Set the model's hidden_size
torch_lstm = nn.LSTM(input_size, hidden_size, bias=True, batch_first=True)
# Extract the parameters of torch_lstm and build the corresponding attrs to initialize our LSTM
print(torch_lstm.weight_ih_l0.T.shape)
chunked_W = torch.split(torch_lstm.weight_ih_l0.T, split_size_or_sections=hidden_size, dim=-1)
# In the handwritten LSTM operator, each input-related weight matrix has shape input_size x hidden_size.
# As noted earlier, nn.LSTM stores them as a single [4 * hidden_size, input_size] matrix, concatenated
# in gate order (input, forget, cell/candidate, output). To extract the parameters, first transpose to
# [input_size, 4 * hidden_size] and then split the columns into chunks of size hidden_size.
chunked_U = torch.split(torch_lstm.weight_hh_l0.T, split_size_or_sections=hidden_size, dim=-1)
# nn.LSTM keeps two bias vectors (bias_ih_l0 and bias_hh_l0), each 1-D of size [4 * hidden_size], so no
# transpose is needed; our implementation has a single bias per gate, so sum them before splitting
chunked_b = torch.split(torch_lstm.bias_ih_l0 + torch_lstm.bias_hh_l0, split_size_or_sections=hidden_size)
Wi_attr = chunked_W[0].detach().numpy()
Wf_attr = chunked_W[1].detach().numpy()
Wc_attr = chunked_W[2].detach().numpy()
Wo_attr = chunked_W[3].detach().numpy()
Ui_attr = chunked_U[0].detach().numpy()
Uf_attr = chunked_U[1].detach().numpy()
Uc_attr = chunked_U[2].detach().numpy()
Uo_attr = chunked_U[3].detach().numpy()
bi_attr = chunked_b[0].detach().numpy()
bf_attr = chunked_b[1].detach().numpy()
bc_attr = chunked_b[2].detach().numpy()
bo_attr = chunked_b[3].detach().numpy()
self_lstm = LSTM(input_size, hidden_size, Wi_attr=Wi_attr, Wf_attr=Wf_attr, Wo_attr=Wo_attr, Wc_attr=Wc_attr,
                 Ui_attr=Ui_attr, Uf_attr=Uf_attr, Uo_attr=Uo_attr, Uc_attr=Uc_attr,
                 bi_attr=bi_attr, bf_attr=bf_attr, bo_attr=bo_attr, bc_attr=bc_attr)

# Run the forward pass, get the hidden states, and print them
self_hidden_state = self_lstm(inputs)
torch_outputs, (torch_hidden_state, _) = torch_lstm(inputs)
print("torch LSTM:\n", torch_hidden_state.detach().numpy().squeeze(0))
print("self LSTM:\n", self_hidden_state.detach().numpy())

        Output:

[Figure: printed hidden states of the two implementations]

        As shown, the two outputs are essentially identical. We can also compare the running speed of the two implementations. The code is as follows:

import time

# Create a random tensor as test data, shape batch_size x seq_len x input_size
batch_size, seq_len, input_size = 8, 20, 32
inputs = torch.randn([batch_size, seq_len, input_size])

# Set the model's hidden_size
hidden_size = 32
self_lstm = LSTM(input_size, hidden_size)
torch_lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

# Measure the speed of our own LSTM implementation
model_time = 0
for i in range(100):
    start_time = time.time()
    hidden_state = self_lstm(inputs)
    # Treat the first 10 runs as warm-up and exclude them from the statistics
    if i < 10:
        continue
    end_time = time.time()
    model_time += (end_time - start_time)
avg_model_time = model_time / 90
print('self_lstm speed:', avg_model_time, 's')

# Measure the speed of the built-in torch LSTM
model_time = 0
for i in range(100):
    start_time = time.time()
    outputs, (hidden_state, cell_state) = torch_lstm(inputs)
    # Treat the first 10 runs as warm-up and exclude them from the statistics
    if i < 10:
        continue
    end_time = time.time()
    model_time += (end_time - start_time)
avg_model_time = model_time / 90
print('torch_lstm speed:', avg_model_time, 's')

        Output:

         The LSTM built into PyTorch runs far faster than our handwritten implementation; this is expected, since nn.LSTM dispatches the whole sequence to optimized native kernels while our version loops over time steps in Python.

2 Model Training

        2.1 Training a digit-prediction model on sequences of a given length

        This section trains with the RunnerV3 class. First we define the training hyperparameters, keeping them identical to those used for the simple recurrent network. We then define a train function that trains on a dataset of a specified length: it loads the data of length `length`, instantiates the components, creates the corresponding Runner, and trains it. The code is as follows:

# Number of training epochs
num_epochs = 500
# Learning rate
lr = 0.001
# Number of input digit classes
num_digits = 10
# Dimension of the digit embedding vectors
input_size = 32
# Dimension of the hidden state vector
hidden_size = 32
# Number of prediction classes
num_classes = 19
# Batch size
batch_size = 8
# Directory for saving models
save_dir = "./checkpoints"

# Set different values of length to experiment with data of different lengths
def train(length):
    print(f"\n====> Training LSTM with data of length {length}.")
    np.random.seed(0)
    random.seed(0)

    # Load the data of length `length`
    data_path = f"./datasets/{length}"
    train_examples, dev_examples, test_examples = load_data(data_path)
    train_set, dev_set, test_set = DigitSumDataset(train_examples), DigitSumDataset(dev_examples), DigitSumDataset(test_examples)
    train_loader = DataLoader(train_set, batch_size=batch_size)
    dev_loader = DataLoader(dev_set, batch_size=batch_size)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    # Instantiate the model
    base_model = LSTM(input_size, hidden_size)
    model = Model_RNN4SeqClass(base_model, num_digits, input_size, hidden_size, num_classes)
    # Specify the optimizer
    optimizer = torch.optim.Adam(lr=lr, params=model.parameters())
    # Define the evaluation metric
    metric = Accuracy()
    # Define the loss function
    loss_fn = torch.nn.CrossEntropyLoss()
    # Instantiate the Runner from the components above
    runner = RunnerV3(model, optimizer, loss_fn, metric)

    # Train the model
    model_save_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
    runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=100, log_steps=100, save_path=model_save_path)

    return runner

        2.2 Training multiple models

        Next, we train digit-prediction models on data of lengths 10, 15, 20, 25, 30, and 35, saving each trained runner into the lstm_runners dictionary.

lstm_runners = {}

lengths = [10, 15, 20, 25, 30, 35]
for length in lengths:
    runner = train(length)
    lstm_runners[length] = runner

        2.3 Plotting the loss curves

        For each sequence length, plot the training-set and validation-set loss curves of the LSTM digit-prediction model during training. The code is as follows:

# Plot the training losses
for length in lengths:
    runner = lstm_runners[length]
    fig_name = f"./images/6.11_{length}.pdf"
    plot_training_loss(runner, fig_name, sample_step=100)

        Output:

[Figures: training and validation loss curves for sequence lengths 10, 15, 20, 25, 30, and 35]

3 Model Evaluation

        3.1 Evaluating the model on the test set

        Evaluate the best model saved during training on the test data to observe its test-set accuracy, and also retrieve the best validation-set accuracy achieved during training. The code is as follows:

lstm_dev_scores = []
lstm_test_scores = []
for length in lengths:
    print(f"Evaluate LSTM with data length {length}.")
    runner = lstm_runners[length]
    # Load the best model saved during training
    model_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
    runner.load_model(model_path)

    # Load the data of length `length`
    data_path = f"./datasets/{length}"
    train_examples, dev_examples, test_examples = load_data(data_path)
    test_set = DigitSumDataset(test_examples)
    test_loader = DataLoader(test_set, batch_size=batch_size)

    # Evaluate the model on the test set and obtain its accuracy
    score, _ = runner.evaluate(test_loader)
    lstm_test_scores.append(score)
    lstm_dev_scores.append(max(runner.dev_scores))

for length, dev_score, test_score in zip(lengths, lstm_dev_scores, lstm_test_scores):
    print(f"[LSTM] length:{length}, dev_score: {dev_score}, test_score: {test_score: .5f}")

        Output: [Figure: printed dev and test scores for each sequence length]

        3.2 Accuracy on datasets of different sequence lengths

plt.plot(lengths, lstm_dev_scores, '-o', color='#e8609b',  label="LSTM Dev Accuracy")
plt.plot(lengths, lstm_test_scores,'-o', color='#000000', label="LSTM Test Accuracy")

# Draw the axis labels and legend
plt.ylabel("accuracy", fontsize='large')
plt.xlabel("sequence length", fontsize='large')
plt.legend(loc='lower left', fontsize='x-large')

fig_name = "./images/6.12.pdf"
plt.savefig(fig_name)
plt.show()

        Output:

[Figure: LSTM dev and test accuracy vs. sequence length]

        3.3 Changes in the LSTM gate states and cell state

        The LSTM controls how the cell state is updated through its gating mechanism. Here we observe how the gates and the cell state change while the LSTM processes a digit sequence. To do so, we first modify the LSTM implementation above by defining lists that store the gate and cell-state vectors at every time step.

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
                 Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
                 bo_attr=None, bc_attr=None):
        super(LSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size

        # Initialize model parameters
        if Wi_attr is None:
            Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wi = torch.tensor(Wi_attr, dtype=torch.float32)
        self.W_i = torch.nn.Parameter(Wi)

        if Wf_attr is None:
            Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wf = torch.tensor(Wf_attr, dtype=torch.float32)
        self.W_f = torch.nn.Parameter(Wf)

        if Wo_attr is None:
            Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wo = torch.tensor(Wo_attr, dtype=torch.float32)
        self.W_o = torch.nn.Parameter(Wo)

        if Wc_attr is None:
            Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wc = torch.tensor(Wc_attr, dtype=torch.float32)
        self.W_c = torch.nn.Parameter(Wc)

        if Ui_attr is None:
            Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Ui = torch.tensor(Ui_attr, dtype=torch.float32)
        self.U_i = torch.nn.Parameter(Ui)

        if Uf_attr is None:
            Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uf = torch.tensor(Uf_attr, dtype=torch.float32)
        self.U_f = torch.nn.Parameter(Uf)

        if Uo_attr is None:
            Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uo = torch.tensor(Uo_attr, dtype=torch.float32)
        self.U_o = torch.nn.Parameter(Uo)

        if Uc_attr is None:
            Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uc = torch.tensor(Uc_attr, dtype=torch.float32)
        self.U_c = torch.nn.Parameter(Uc)

        if bi_attr is None:
            bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bi = torch.tensor(bi_attr, dtype=torch.float32)
        self.b_i = torch.nn.Parameter(bi)

        if bf_attr is None:
            bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bf = torch.tensor(bf_attr, dtype=torch.float32)
        self.b_f = torch.nn.Parameter(bf)

        if bo_attr is None:
            bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bo = torch.tensor(bo_attr, dtype=torch.float32)
        self.b_o = torch.nn.Parameter(bo)

        if bc_attr is None:
            bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bc = torch.tensor(bc_attr, dtype=torch.float32)
        self.b_c = torch.nn.Parameter(bc)

    # Initialize the cell state and hidden state
    def init_state(self, batch_size):
        hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        return hidden_state, cell_state

    # Define the forward computation
    def forward(self, inputs, states=None):
        # inputs: input data of shape batch_size x seq_len x input_size
        batch_size, seq_len, input_size = inputs.shape

        # Initialize the starting cell state and hidden state, each of shape batch_size x hidden_size
        if states is None:
            states = self.init_state(batch_size)
        hidden_state, cell_state = states

        # Lists that record the gate and cell-state vectors at every time step
        self.Is = []
        self.Fs = []
        self.Os = []
        self.Cs = []

        # Run the LSTM computation: input/forget/output gates, candidate state, cell state, and hidden state
        for step in range(seq_len):
            input_step = inputs[:, step, :]
            I_gate = F.sigmoid(torch.matmul(input_step, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
            F_gate = F.sigmoid(torch.matmul(input_step, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
            O_gate = F.sigmoid(torch.matmul(input_step, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
            C_tilde = F.tanh(torch.matmul(input_step, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
            cell_state = F_gate * cell_state + I_gate * C_tilde
            hidden_state = O_gate * F.tanh(cell_state)
            # Store the gate and cell-state vectors
            self.Is.append(I_gate.detach().numpy().copy())
            self.Fs.append(F_gate.detach().numpy().copy())
            self.Os.append(O_gate.detach().numpy().copy())
            self.Cs.append(cell_state.detach().numpy().copy())
        return hidden_state

        Next, we instantiate a new runner with this new LSTM model. This experiment uses sequences of length 10, so we load the model trained on length-10 data.

# Instantiate the model
base_model = LSTM(input_size, hidden_size)
model = Model_RNN4SeqClass(base_model, num_digits, input_size, hidden_size, num_classes)
# Specify the optimizer
optimizer = torch.optim.Adam(lr=lr, params=model.parameters())
# Define the evaluation metric
metric = Accuracy()
# Define the loss function
loss_fn = torch.nn.CrossEntropyLoss()
# Re-instantiate the Runner from the components above
runner = RunnerV3(model, optimizer, loss_fn, metric)

length = 10
# Load the best model saved during training
model_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
runner.load_model(model_path)

        Now feed a digit sequence to the prediction model; this stores the gate and cell-state vectors at each step inside the model. We then retrieve these vectors and plot them. The code is as follows:

import seaborn as sns
import matplotlib.pyplot as plt
def plot_tensor(inputs, tensor,  save_path, vmin=0, vmax=1):
    tensor = np.stack(tensor, axis=0)
    tensor = np.squeeze(tensor, 1).T

    plt.figure(figsize=(16,6))
    # vmin and vmax set the lower and upper bounds of the colormap
    ax = sns.heatmap(tensor, vmin=vmin, vmax=vmax)
    ax.set_xticklabels(inputs)
    ax.figure.savefig(save_path)


# Define the model input
inputs = [6, 7, 0, 0, 1, 0, 0, 0, 0, 0]
X = torch.as_tensor(inputs.copy())
X = X.unsqueeze(0)
# Run prediction and obtain the result
logits = runner.predict(X)
predict_label = torch.argmax(logits, dim=-1)
print(f"predict result: {predict_label.numpy()[0]}")

# Input gate
Is = runner.model.rnn_model.Is
plot_tensor(inputs, Is, save_path="./images/6.13_I.pdf")
# Forget gate
Fs = runner.model.rnn_model.Fs
plot_tensor(inputs, Fs, save_path="./images/6.13_F.pdf")
# Output gate
Os = runner.model.rnn_model.Os
plot_tensor(inputs, Os, save_path="./images/6.13_O.pdf")
# Cell state
Cs = runner.model.rnn_model.Cs
plot_tensor(inputs, Cs, save_path="./images/6.13_C.pdf", vmin=-5, vmax=5)

The resulting plots:

[Figures: heatmaps of the input gate, forget gate, output gate, and cell state over the 10 time steps]

Finally, here is the complete training code.

LSTM.py

import os
import random
import torch
import numpy as np
from torch import nn
from torch.utils.data import DataLoader
from nndl import DigitSumDataset, load_data, Model_RNN4SeqClass, RunnerV3, Accuracy, plot_training_loss, LSTM
import matplotlib.pyplot as plt
import torch.nn.functional as F


def generate_data(length, k, save_path):
    if length < 3:
        raise ValueError("The length of data should be greater than 2.")
    if k == 0:
        raise ValueError("k should be greater than 0.")
    # Generate 100 digit sequences of length `length`; positions after the first two are filled with 0 for now
    base_examples = []
    for n1 in range(0, 10):
        for n2 in range(0, 10):
            seq = [n1, n2] + [0] * (length - 2)
            label = n1 + n2
            base_examples.append((seq, label))

    examples = []
    # Data augmentation: for each entry in base_examples, generate k variants and add them to examples
    for base_example in base_examples:
        for _ in range(k):
            # Randomly choose a position and a replacement digit
            idx = np.random.randint(2, length)
            val = np.random.randint(0, 10)
            # Replace the corresponding zero element in the sequence
            seq = base_example[0].copy()
            label = base_example[1]
            seq[idx] = val
            examples.append((seq, label))

    # Save the augmented data
    with open(save_path, "w", encoding="utf-8") as f:
        for example in examples:
            # Convert the data to strings for saving
            seq = [str(e) for e in example[0]]
            label = str(example[1])
            line = " ".join(seq) + "\t" + label + "\n"
            f.write(line)

    print(f"generate data to: {save_path}.")


# Lengths of the digit sequences to generate
lengths = [10, 15, 20, 25, 30, 35]
for length in lengths:
    # Generate training data of length `length`
    save_path = f"./datasets/{length}/train.txt"
    k = 3
    generate_data(length, k, save_path)
    # Generate validation data of length `length`
    save_path = f"./datasets/{length}/dev.txt"
    k = 1
    generate_data(length, k, save_path)
    # Generate test data of length `length`
    save_path = f"./datasets/{length}/test.txt"
    k = 1
    generate_data(length, k, save_path)
# Number of training epochs
num_epochs = 500
# Learning rate
lr = 0.001
# Number of input digit classes
num_digits = 10
# Dimension of the digit embedding vectors
input_size = 32
# Dimension of the hidden state vector
hidden_size = 32
# Number of prediction classes
num_classes = 19
# Batch size
batch_size = 8
# Directory for saving models
save_dir = "./checkpoints"

# Set different values of length to experiment with data of different lengths
def train(length):
    print(f"\n====> Training LSTM with data of length {length}.")
    np.random.seed(0)
    random.seed(0)

    # Load the data of length `length`
    data_path = f"./datasets/{length}"
    train_examples, dev_examples, test_examples = load_data(data_path)
    train_set, dev_set, test_set = DigitSumDataset(train_examples), DigitSumDataset(dev_examples), DigitSumDataset(test_examples)
    train_loader = DataLoader(train_set, batch_size=batch_size)
    dev_loader = DataLoader(dev_set, batch_size=batch_size)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    # Instantiate the model
    base_model = LSTM(input_size, hidden_size)
    model = Model_RNN4SeqClass(base_model, num_digits, input_size, hidden_size, num_classes)
    # Specify the optimizer
    optimizer = torch.optim.Adam(lr=lr, params=model.parameters())
    # Define the evaluation metric
    metric = Accuracy()
    # Define the loss function
    loss_fn = torch.nn.CrossEntropyLoss()
    # Instantiate the Runner from the components above
    runner = RunnerV3(model, optimizer, loss_fn, metric)

    # Train the model
    model_save_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
    runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=100, log_steps=100, save_path=model_save_path)

    return runner


lstm_runners = {}

lengths = [10, 15, 20, 25, 30, 35]
for length in lengths:
    runner = train(length)
    lstm_runners[length] = runner


# Plot the training losses
for length in lengths:
    runner = lstm_runners[length]
    fig_name = f"./images/6.11_{length}.pdf"
    plot_training_loss(runner, fig_name, sample_step=100)

lstm_dev_scores = []
lstm_test_scores = []
for length in lengths:
    print(f"Evaluate LSTM with data length {length}.")
    runner = lstm_runners[length]
    # Load the best model saved during training
    model_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
    runner.load_model(model_path)

    # Load the data of length `length`
    data_path = f"./datasets/{length}"
    train_examples, dev_examples, test_examples = load_data(data_path)
    test_set = DigitSumDataset(test_examples)
    test_loader = DataLoader(test_set, batch_size=batch_size)

    # Evaluate the model on the test set and obtain its accuracy
    score, _ = runner.evaluate(test_loader)
    lstm_test_scores.append(score)
    lstm_dev_scores.append(max(runner.dev_scores))

for length, dev_score, test_score in zip(lengths, lstm_dev_scores, lstm_test_scores):
    print(f"[LSTM] length:{length}, dev_score: {dev_score}, test_score: {test_score: .5f}")


plt.plot(lengths, lstm_dev_scores, '-o', color='#e8609b',  label="LSTM Dev Accuracy")
plt.plot(lengths, lstm_test_scores,'-o', color='#000000', label="LSTM Test Accuracy")

# Draw the axis labels and legend
plt.ylabel("accuracy", fontsize='large')
plt.xlabel("sequence length", fontsize='large')
plt.legend(loc='lower left', fontsize='x-large')

fig_name = "./images/6.12.pdf"
plt.savefig(fig_name)
plt.show()


# Define the LSTM class (with gate-state recording) and its parameters
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
                 Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
                 bo_attr=None, bc_attr=None):
        super(LSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size

        # Initialize model parameters
        if Wi_attr is None:
            Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wi = torch.tensor(Wi_attr, dtype=torch.float32)
        self.W_i = torch.nn.Parameter(Wi)

        if Wf_attr is None:
            Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wf = torch.tensor(Wf_attr, dtype=torch.float32)
        self.W_f = torch.nn.Parameter(Wf)

        if Wo_attr is None:
            Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wo = torch.tensor(Wo_attr, dtype=torch.float32)
        self.W_o = torch.nn.Parameter(Wo)

        if Wc_attr is None:
            Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wc = torch.tensor(Wc_attr, dtype=torch.float32)
        self.W_c = torch.nn.Parameter(Wc)

        if Ui_attr is None:
            Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Ui = torch.tensor(Ui_attr, dtype=torch.float32)
        self.U_i = torch.nn.Parameter(Ui)
        if Uf_attr is None:
            Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uf = torch.tensor(Uf_attr, dtype=torch.float32)
        self.U_f = torch.nn.Parameter(Uf)

        if Uo_attr is None:
            Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uo = torch.tensor(Uo_attr, dtype=torch.float32)
        self.U_o = torch.nn.Parameter(Uo)

        if Uc_attr is None:
            Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uc = torch.tensor(Uc_attr, dtype=torch.float32)
        self.U_c = torch.nn.Parameter(Uc)

        if bi_attr is None:
            bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bi = torch.tensor(bi_attr, dtype=torch.float32)
        self.b_i = torch.nn.Parameter(bi)
        if bf_attr is None:
            bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bf = torch.tensor(bf_attr, dtype=torch.float32)
        self.b_f = torch.nn.Parameter(bf)
        if bo_attr is None:
            bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bo = torch.tensor(bo_attr, dtype=torch.float32)
        self.b_o = torch.nn.Parameter(bo)
        if bc_attr is None:
            bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bc = torch.tensor(bc_attr, dtype=torch.float32)
        self.b_c = torch.nn.Parameter(bc)

    # Initialize the cell state and hidden state
    def init_state(self, batch_size):
        hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        return hidden_state, cell_state

    # Define the forward computation
    def forward(self, inputs, states=None):
        # inputs: input data of shape batch_size x seq_len x input_size
        batch_size, seq_len, input_size = inputs.shape

        # Initialize the starting cell state and hidden state, each of shape batch_size x hidden_size
        if states is None:
            states = self.init_state(batch_size)
        hidden_state, cell_state = states

        # Lists that record the gate and cell-state vectors at every time step
        self.Is = []
        self.Fs = []
        self.Os = []
        self.Cs = []

        # Run the LSTM computation: input/forget/output gates, candidate state, cell state, and hidden state
        for step in range(seq_len):
            input_step = inputs[:, step, :]
            I_gate = F.sigmoid(torch.matmul(input_step, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
            F_gate = F.sigmoid(torch.matmul(input_step, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
            O_gate = F.sigmoid(torch.matmul(input_step, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
            C_tilde = F.tanh(torch.matmul(input_step, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
            cell_state = F_gate * cell_state + I_gate * C_tilde
            hidden_state = O_gate * F.tanh(cell_state)
            # Store the gate and cell-state vectors
            self.Is.append(I_gate.detach().numpy().copy())
            self.Fs.append(F_gate.detach().numpy().copy())
            self.Os.append(O_gate.detach().numpy().copy())
            self.Cs.append(cell_state.detach().numpy().copy())
        return hidden_state


# Instantiate the model
base_model = LSTM(input_size, hidden_size)
model = Model_RNN4SeqClass(base_model, num_digits, input_size, hidden_size, num_classes)
# Specify the optimizer
optimizer = torch.optim.Adam(lr=lr, params=model.parameters())
# Define the evaluation metric
metric = Accuracy()
# Define the loss function
loss_fn = torch.nn.CrossEntropyLoss()
# Re-instantiate the Runner from the components above
runner = RunnerV3(model, optimizer, loss_fn, metric)

length = 10
# Load the best model saved during training
model_path = os.path.join(save_dir, f"best_lstm_model_{length}.pdparams")
runner.load_model(model_path)



import seaborn as sns
import matplotlib.pyplot as plt
def plot_tensor(inputs, tensor,  save_path, vmin=0, vmax=1):
    tensor = np.stack(tensor, axis=0)
    tensor = np.squeeze(tensor, 1).T

    plt.figure(figsize=(16,6))
    # vmin and vmax set the lower and upper bounds of the colormap
    ax = sns.heatmap(tensor, vmin=vmin, vmax=vmax)
    ax.set_xticklabels(inputs)
    ax.figure.savefig(save_path)


# Define the model input
inputs = [6, 7, 0, 0, 1, 0, 0, 0, 0, 0]
X = torch.as_tensor(inputs.copy())
X = X.unsqueeze(0)
# Run prediction and obtain the result
logits = runner.predict(X)
predict_label = torch.argmax(logits, dim=-1)
print(f"predict result: {predict_label.numpy()[0]}")

# Input gate
Is = runner.model.rnn_model.Is
plot_tensor(inputs, Is, save_path="./images/6.13_I.pdf")
# Forget gate
Fs = runner.model.rnn_model.Fs
plot_tensor(inputs, Fs, save_path="./images/6.13_F.pdf")
# Output gate
Os = runner.model.rnn_model.Os
plot_tensor(inputs, Os, save_path="./images/6.13_O.pdf")
# Cell state
Cs = runner.model.rnn_model.Cs
plot_tensor(inputs, Cs, save_path="./images/6.13_C.pdf", vmin=-5, vmax=5)

 nndl.py

from torch.utils.data import Dataset
import torch
import torch.nn as nn
import os
import matplotlib.pyplot as plt
import torch.nn.functional as F


class DigitSumDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, idx):
        example = self.data[idx]
        seq = torch.tensor(example[0], dtype=torch.int64)
        label = torch.tensor(example[1], dtype=torch.int64)
        return seq, label

    def __len__(self):
        return len(self.data)


# Load the data
def load_data(data_path):
    # Load the training set
    train_examples = []
    train_path = os.path.join(data_path, "train.txt")
    with open(train_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            # Parse one line into a digit sequence seq and a label
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            train_examples.append((seq, label))

    # Load the validation set
    dev_examples = []
    dev_path = os.path.join(data_path, "dev.txt")
    with open(dev_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            # Parse one line into a digit sequence seq and a label
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            dev_examples.append((seq, label))

    # Load the test set
    test_examples = []
    test_path = os.path.join(data_path, "test.txt")
    with open(test_path, "r", encoding="utf-8") as f:
        for line in f.readlines():
            # Parse one line into a digit sequence seq and a label
            items = line.strip().split("\t")
            seq = [int(i) for i in items[0].split(" ")]
            label = int(items[1])
            test_examples.append((seq, label))

    return train_examples, dev_examples, test_examples


class Embedding(nn.Module):
    def __init__(self, num_embeddings, embedding_dim):
        super(Embedding, self).__init__()
        # Register the embedding matrix as a learnable parameter, initialized with Xavier uniform
        self.W = nn.Parameter(nn.init.xavier_uniform_(torch.empty(num_embeddings, embedding_dim), gain=1.0))

    def forward(self, inputs):
        # Look up the embedding vectors by index
        embs = self.W[inputs]
        return embs



# A digit-prediction model built on top of an RNN
class Model_RNN4SeqClass(nn.Module):
    def __init__(self, model, num_digits, input_size, hidden_size, num_classes):
        super(Model_RNN4SeqClass, self).__init__()
        # The instantiated RNN layer passed in, e.g., an SRN or LSTM
        self.rnn_model = model
        # Vocabulary size
        self.num_digits = num_digits
        # Dimension of the embedding vectors
        self.input_size = input_size
        # Define the embedding layer
        self.embedding = Embedding(num_digits, input_size)
        # Define the linear layer
        self.linear = nn.Linear(hidden_size, num_classes)

    def forward(self, inputs):
        # Map the digit sequence to embedding vectors
        inputs_emb = self.embedding(inputs)
        # Run the RNN model
        hidden_state = self.rnn_model(inputs_emb)
        # Predict the digit sum from the state at the last time step
        logits = self.linear(hidden_state)
        return logits


class RunnerV3(object):
    def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric  # used only for computing the evaluation metric

        # Track the evaluation metric during training
        self.dev_scores = []

        # Track the loss during training
        self.train_epoch_losses = []  # one loss per epoch
        self.train_step_losses = []  # one loss per step
        self.dev_losses = []

        # Track the best global metric
        self.best_score = 0

    def train(self, train_loader, dev_loader=None, **kwargs):
        # Switch the model to training mode
        self.model.train()

        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_steps = kwargs.get("log_steps", 100)
        # Evaluation frequency
        eval_steps = kwargs.get("eval_steps", 0)

        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")

        custom_print_log = kwargs.get("custom_print_log", None)

        # Total number of training steps
        num_training_steps = num_epochs * len(train_loader)

        if eval_steps:
            if self.metric is None:
                raise RuntimeError('Error: Metric can not be None!')
            if dev_loader is None:
                raise RuntimeError('Error: dev_loader can not be None!')

        # Number of steps run so far
        global_step = 0

        # Train for num_epochs epochs
        for epoch in range(num_epochs):
            # Accumulate the training-set loss
            total_loss = 0
            for step, data in enumerate(train_loader):
                X, y = data
                # Get model predictions
                logits = self.model(X)
                loss = self.loss_fn(logits, y.long())  # reduction defaults to mean
                total_loss += loss

                # Save the loss of every training step
                self.train_step_losses.append((global_step, loss.item()))

                if log_steps and global_step % log_steps == 0:
                    print(
                        f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")

                # Backpropagate to compute the gradient of every parameter
                loss.backward()

                if custom_print_log:
                    custom_print_log(self)

                # Update the parameters with mini-batch gradient descent
                self.optimizer.step()
                # Zero the gradients
                self.optimizer.zero_grad()

                # Check whether it is time to evaluate
                if eval_steps > 0 and global_step > 0 and \
                        (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):

                    dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
                    print(f"[Evaluate]  dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")

                    # Switch the model back to training mode
                    self.model.train()

                    # If the current metric is the best so far, save the model
                    if dev_score > self.best_score:
                        self.save_model(save_path)
                        print(
                            f"[Evaluate] best accuracy performance has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
                        self.best_score = dev_score

                global_step += 1

            # Accumulated training loss for the current epoch
            trn_loss = (total_loss / len(train_loader)).item()
            # Save the epoch-level training loss
            self.train_epoch_losses.append(trn_loss)

        print("[Train] Training done!")

    # Evaluation: use 'torch.no_grad()' so gradients are neither computed nor stored
    @torch.no_grad()
    def evaluate(self, dev_loader, **kwargs):
        assert self.metric is not None

        # Switch the model to evaluation mode
        self.model.eval()

        global_step = kwargs.get("global_step", -1)

        # Accumulate the validation loss
        total_loss = 0

        # Reset the metric
        self.metric.reset()

        # Iterate over the validation batches
        for batch_id, data in enumerate(dev_loader):
            X, y = data

            # Compute the model outputs
            logits = self.model(X)

            # Compute the loss
            loss = self.loss_fn(logits, y.long()).item()
            # Accumulate the loss
            total_loss += loss

            # Update the metric
            self.metric.update(logits, y)

        dev_loss = (total_loss / len(dev_loader))
        dev_score = self.metric.accumulate()

        # Record the validation loss
        if global_step != -1:
            self.dev_losses.append((global_step, dev_loss))
            self.dev_scores.append(dev_score)

        return dev_score, dev_loss

    # Prediction: use 'torch.no_grad()' so gradients are neither computed nor stored
    @torch.no_grad()
    def predict(self, x, **kwargs):
        # Switch the model to evaluation mode
        self.model.eval()
        # Run the forward pass to get the predictions
        logits = self.model(x)
        return logits

    def save_model(self, save_path):
        torch.save(self.model.state_dict(), save_path)

    def load_model(self, model_path):
        state_dict = torch.load(model_path)
        self.model.load_state_dict(state_dict)


class Accuracy():
    def __init__(self, is_logist=True):
        # Count of correctly predicted samples
        self.num_correct = 0
        # Count of all samples
        self.num_count = 0

        self.is_logist = is_logist

    def update(self, outputs, labels):

        # Determine binary vs. multi-class: shape[1] == 1 means binary, shape[1] > 1 means multi-class
        if outputs.shape[1] == 1:  # binary classification
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # With logits, predict the positive class when the logit is >= 0
                preds = (outputs >= 0).float()
            else:
                # Otherwise treat outputs as probabilities: predict class 1 when >= 0.5, else class 0
                preds = (outputs >= 0.5).float()
        else:
            # Multi-class: take the index of the largest element ('torch.argmax') as the predicted class
            preds = torch.argmax(outputs, dim=1)

        # Number of correctly predicted samples in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = torch.sum((preds == labels).float()).cpu().numpy()
        batch_count = len(labels)

        # Update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # Compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # Reset the correct count and the total count
        self.num_correct = 0
        self.num_count = 0

    def name(self):
        return "Accuracy"


def plot_training_loss(runner, fig_name, sample_step):
    plt.figure()
    train_items = runner.train_step_losses[::sample_step]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]
    plt.plot(train_steps, train_losses, color='#e4007f', label="Train loss")

    dev_steps = [x[0] for x in runner.dev_losses]
    dev_losses = [x[1] for x in runner.dev_losses]
    plt.plot(dev_steps, dev_losses, color='#f19ec2', linestyle='--', label="Dev loss")

    # Draw the axis labels and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("step", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')

    plt.savefig(fig_name)
    plt.show()


class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
                 Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
                 bo_attr=None, bc_attr=None):
        super(LSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size

        # Initialize model parameters
        if Wi_attr is None:
            Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wi = torch.tensor(Wi_attr, dtype=torch.float32)
        self.W_i = torch.nn.Parameter(Wi)

        if Wf_attr is None:
            Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wf = torch.tensor(Wf_attr, dtype=torch.float32)
        self.W_f = torch.nn.Parameter(Wf)

        if Wo_attr is None:
            Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wo = torch.tensor(Wo_attr, dtype=torch.float32)
        self.W_o = torch.nn.Parameter(Wo)

        if Wc_attr is None:
            Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
        else:
            Wc = torch.tensor(Wc_attr, dtype=torch.float32)
        self.W_c = torch.nn.Parameter(Wc)

        if Ui_attr is None:
            Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Ui = torch.tensor(Ui_attr, dtype=torch.float32)
        self.U_i = torch.nn.Parameter(Ui)

        if Uf_attr is None:
            Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uf = torch.tensor(Uf_attr, dtype=torch.float32)
        self.U_f = torch.nn.Parameter(Uf)

        if Uo_attr is None:
            Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uo = torch.tensor(Uo_attr, dtype=torch.float32)
        self.U_o = torch.nn.Parameter(Uo)

        if Uc_attr is None:
            Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
        else:
            Uc = torch.tensor(Uc_attr, dtype=torch.float32)
        self.U_c = torch.nn.Parameter(Uc)

        if bi_attr is None:
            bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bi = torch.tensor(bi_attr, dtype=torch.float32)
        self.b_i = torch.nn.Parameter(bi)

        if bf_attr is None:
            bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bf = torch.tensor(bf_attr, dtype=torch.float32)
        self.b_f = torch.nn.Parameter(bf)

        if bo_attr is None:
            bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bo = torch.tensor(bo_attr, dtype=torch.float32)
        self.b_o = torch.nn.Parameter(bo)

        if bc_attr is None:
            bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
        else:
            bc = torch.tensor(bc_attr, dtype=torch.float32)
        self.b_c = torch.nn.Parameter(bc)

    # Initialize the cell state and hidden state
    def init_state(self, batch_size):
        hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32)
        return hidden_state, cell_state

    # Define the forward computation
    def forward(self, inputs, states=None):
        # inputs: input data of shape batch_size x seq_len x input_size
        batch_size, seq_len, input_size = inputs.shape

        # Initialize the starting cell state and hidden state, each of shape batch_size x hidden_size
        if states is None:
            states = self.init_state(batch_size)
        hidden_state, cell_state = states

        # Run the LSTM computation: input/forget/output gates, candidate state, cell state, and hidden state
        for step in range(seq_len):
            # Input at the current step, step_input, shape batch_size x input_size
            step_input = inputs[:, step, :]
            # Compute the input, forget, and output gates, each of shape batch_size x hidden_size
            I_gate = F.sigmoid(torch.matmul(step_input, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
            F_gate = F.sigmoid(torch.matmul(step_input, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
            O_gate = F.sigmoid(torch.matmul(step_input, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
            # Compute the candidate state vector, shape batch_size x hidden_size
            C_tilde = F.tanh(torch.matmul(step_input, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
            # Compute the cell state vector, shape batch_size x hidden_size
            cell_state = F_gate * cell_state + I_gate * C_tilde
            # Compute the hidden state vector, shape batch_size x hidden_size
            hidden_state = O_gate * F.tanh(cell_state)

        return hidden_state

        Summary

        Our experiments clearly show the problem LSTM is designed to solve: traditional RNNs suffer from vanishing or exploding gradients when processing long sequences, which makes long-distance dependencies hard to capture. LSTM addresses this with a gating mechanism. It contains a forget gate, an input gate, and an output gate that control whether information enters or leaves the cell state, deciding the flow of information dynamically from the current input and the previous hidden state. The forget gate lets the model selectively discard earlier memories, the input gate lets it selectively update the memory, and the output gate lets it selectively expose the memory. This gating mechanism enables the LSTM to preserve and exploit key historical information over long sequences and thus better capture long-range dependencies.

        From further reading, I also found that LSTM additionally relies on the constant error carousel (CEC) to reduce vanishing and exploding gradients: during backpropagation, the error can circulate inside the LSTM cell over many steps, which keeps the gradient stable.
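        A one-line derivation makes this concrete (a sketch that ignores the gates' own dependence on $\boldsymbol{H}_{t-1}$): differentiating the cell-state update $\boldsymbol{C}_{t} = \boldsymbol{F}_t \odot \boldsymbol{C}_{t-1} + \boldsymbol{I}_{t} \odot \boldsymbol{\tilde{C}}_{t}$ gives

\frac{\partial \boldsymbol{C}_{t}}{\partial \boldsymbol{C}_{t-1}} \approx \mathrm{diag}(\boldsymbol{F}_{t})

so along the cell-state path the error signal is rescaled only by the forget gate rather than being repeatedly multiplied by a weight matrix; when $\boldsymbol{F}_{t}$ stays close to 1, the gradient neither vanishes nor explodes over many steps.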

        I finished this experiment quite a while ago, so I had forgotten many of the details. Overall it is less difficult than the next experiment, but building the LSTM operator by hand should give anyone a deeper understanding of LSTM.

(PS: the code takes about 30 minutes to run. With finals coming up I don't have much spare time; if you want to try the code, around January 10 I will revisit and update all of my recent posts, fixing typos, polishing, and so on.)
