Implementing a Language Model in PyTorch

Table of Contents

    • 0. Preface
    • 1. Implementing an RNN Language Model
      • 1.1 Data Preprocessing
      • 1.2 Building the Model
      • 1.3 Training and Evaluation
    • 2. Summary

0. Preface

When language models come up, the classic statistical n-gram model is probably the first thing that comes to mind. But what exactly is a language model? In one sentence: it is the probability that a given sentence occurs. For more background on this idea, see the reference linked here.
This post naturally uses a neural network language model (NNLM). At its core an NNLM is still an n-gram model: the output word depends only on a fixed window of preceding words, so learning from a longer history requires an RNN. Here we also implement the two improved variants of the RNN, LSTM and GRU.
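
As a quick refresher (standard definitions, not specific to the code below): a language model factorizes the probability of a word sequence with the chain rule, and an n-gram model truncates each conditional to the most recent words:

P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})
                   \approx \prod_{t=1}^{T} P(w_t \mid w_{t-n+1}, \dots, w_{t-1})

An RNN language model instead carries a hidden state that summarizes the entire prefix, which is why it can, in principle, use more history than any fixed n.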

1. Implementing an RNN Language Model

The model is simple: given the input word embeddings, we train an RNN to predict the next word. The steps are as follows:

1.1 Data Preprocessing

Use torchtext to build the vocabulary and the stoi (string-to-index) and itos (index-to-string) mappings.

import torchtext
import numpy as np
import torch
import torch.nn as nn

from torchtext.vocab import Vectors

USE_CUDA = torch.cuda.is_available()

BATCH_SIZE = 32
EMBEDDING_SIZE = 650
MAX_VOCAB_SIZE = 50000
# build the vocabulary
TEXT = torchtext.data.Field(lower=True)
train, val, test = torchtext.datasets.LanguageModelingDataset.splits(path=".",
    train="text8.train.txt", validation="text8.dev.txt", test="text8.test.txt",
    text_field=TEXT)
TEXT.build_vocab(train, max_size=MAX_VOCAB_SIZE)
print("vocabulary size:{}".format(len(TEXT.vocab)))
# build iterators over train, val, and test
VOCAB_SIZE = len(TEXT.vocab)
train_iter, val_iter, test_iter = torchtext.data.BPTTIterator.splits(
    (train, val, test), batch_size=BATCH_SIZE, device=-1, bptt_len=32, 
    repeat=False, shuffle=True)
# print an example batch from the training set
it = iter(train_iter)
batch = next(it) #(bptt_len, batch_size)
print(" ".join([TEXT.vocab.itos[i] for i in batch.text[:,1].data]))
print(" ".join([TEXT.vocab.itos[i] for i in batch.target[:,1].data]))

At this point, data preprocessing is complete.

References:

  1. Official torchtext documentation

1.2 Building the Model

Use the predefined modules in torch.nn (Linear, RNN, LSTM, GRU) to build the language model.

class RNNModel(nn.Module):
    def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers, dropout=0.5):
        '''Embedding layer, recurrent layer (RNN, LSTM, or GRU), linear output layer, and dropout layer'''
        super(RNNModel, self).__init__()
        self.drop = nn.Dropout(dropout)
        self.encoder = nn.Embedding(ntoken, ninp)
        if rnn_type in ['LSTM', 'GRU']:
            self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, dropout=dropout)
        else:
            try:
                nonlinearity = {'RNN_TANH':'tanh', 'RNN_RELU':'relu'}[rnn_type]
            except KeyError:
                raise ValueError("""An invalid option for --model was supplied, options are ['LSTM', 'GRU', 'RNN_TANH' or 'RNN_RELU']""")
            self.rnn = nn.RNN(ninp, nhid, nlayers, nonlinearity=nonlinearity, dropout=dropout)
        self.decoder = nn.Linear(nhid, ntoken)
        
        self.init_weights()
        
        self.rnn_type = rnn_type
        self.nhid = nhid
        self.nlayers = nlayers
    # initialize the weights of the encoder (embedding layer) and the decoder (output layer)
    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)
    
    def forward(self, input, hidden):
        emb = self.drop(self.encoder(input))
        output, hidden = self.rnn(emb, hidden)
        output = self.drop(output)#(seq_len, batch, hidden_size)
        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2))) #(seq_len*batch, token_size)
        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden
    def init_hidden(self, bsz, requires_grad=True):
        weight = next(self.parameters())
        if self.rnn_type == 'LSTM':
            return (weight.new_zeros((self.nlayers, bsz, self.nhid), requires_grad=requires_grad),
                   weight.new_zeros((self.nlayers, bsz, self.nhid), requires_grad=requires_grad))
        else:
            return weight.new_zeros((self.nlayers, bsz, self.nhid), requires_grad=requires_grad)
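
Before training, a quick shape check can save debugging time. A minimal sketch, where tmp_model and the random batch of indices are purely illustrative (not taken from the text8 data):

tmp_model = RNNModel("LSTM", ntoken=VOCAB_SIZE, ninp=EMBEDDING_SIZE, nhid=EMBEDDING_SIZE, nlayers=2)
dummy_input = torch.randint(0, VOCAB_SIZE, (32, BATCH_SIZE))  # (seq_len, batch) of word indices
hidden = tmp_model.init_hidden(BATCH_SIZE)
output, hidden = tmp_model(dummy_input, hidden)
print(output.shape)  # (seq_len, batch, ntoken) = (32, 32, VOCAB_SIZE)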

1.3 Training and Evaluation

Instantiate a model, build an optimizer, compute the loss, backpropagate to get the gradients, and then let the optimizer update the model.

# detach the hidden state tensors from their computation-graph history
def repackage_hidden(h):
    if isinstance(h, torch.Tensor):
        return h.detach()
    else:
        return tuple(repackage_hidden(v) for v in h)
# count the number of trainable parameters (about 71,823,002 for this configuration;
# called after the model is instantiated below)
def count_para(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
# return the average per-token loss on a dataset
def evaluate(model, data):
    # steps: get batches of (text, target) from data, run the model to get output and hidden,
    # and compute the loss; no backward pass is needed during evaluation
    model.eval()
    total_loss = 0.
    it = iter(data)
    total_count = 0.
    with torch.no_grad():
        hidden = model.init_hidden(BATCH_SIZE, requires_grad=False)  # no gradients needed during evaluation
        for i, batch in enumerate(it):
            data, target = batch.text, batch.target
            if USE_CUDA:
                data, target = data.cuda(), target.cuda()
            hidden = repackage_hidden(hidden)
            output, hidden = model(data, hidden)  # already inside torch.no_grad()
            loss = loss_fn(output.view(-1, VOCAB_SIZE), target.view(-1))
            total_count += np.multiply(*data.size())
            total_loss += loss.item()*np.multiply(*data.size())
        loss = total_loss / total_count
        model.train()
        return loss
# model training
model = RNNModel("LSTM", VOCAB_SIZE, EMBEDDING_SIZE, EMBEDDING_SIZE, 2, dropout=0.5)
if USE_CUDA:
    model = model.cuda()
print("number of trainable parameters:", count_para(model))  # ~71,823,002
loss_fn = nn.CrossEntropyLoss()
learning_rate = 0.001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, 0.5)

GRAD_CLIP = 1.
NUM_EPOCHS = 2
val_losses = []
for epoch in range(NUM_EPOCHS):
    model.train()
    it = iter(train_iter)  # each batch is (bptt_len=32, batch_size=32)
    hidden = model.init_hidden(BATCH_SIZE)
    for i, batch in enumerate(it):
        data, target = batch.text, batch.target
        if USE_CUDA:
            data, target = data.cuda(), target.cuda()
        hidden = repackage_hidden(hidden)  # detach hidden from the previous computation graph
        model.zero_grad()
        output, hidden = model(data, hidden)
        loss = loss_fn(output.view(-1, VOCAB_SIZE), target.view(-1))
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), GRAD_CLIP)  # clip gradients to prevent explosion
        optimizer.step()
        if i % 1000 == 0:
            print("epoch", epoch, "iter", i, "loss", loss.item())
        if i % 10000 == 0 and i != 0:
            val_loss = evaluate(model, val_iter)
            if len(val_losses) == 0 or val_loss < min(val_losses):
                print("best model, val loss:", val_loss)
                torch.save(model.state_dict(), "lm-best.th")
            else:
                scheduler.step()  # decay the learning rate when validation loss does not improve
            val_losses.append(val_loss)
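
After training, the standard number to report for a language model is perplexity, i.e. the exponential of the average cross-entropy loss returned by evaluate. A minimal sketch that reloads the best checkpoint saved above and scores it on the held-out sets (best_model is an illustrative name; everything else reuses objects defined earlier in this post):

# reload the best checkpoint and report perplexity
best_model = RNNModel("LSTM", VOCAB_SIZE, EMBEDDING_SIZE, EMBEDDING_SIZE, 2, dropout=0.5)
if USE_CUDA:
    best_model = best_model.cuda()
best_model.load_state_dict(torch.load("lm-best.th"))

val_loss = evaluate(best_model, val_iter)
test_loss = evaluate(best_model, test_iter)
print("val  perplexity:", np.exp(val_loss))
print("test perplexity:", np.exp(test_loss))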

2. Summary

It is the same routine as always: data preprocessing, model construction, then training and evaluation. This time we used torchtext, a library that wraps NLP text preprocessing, so we did not have to build a DataLoader ourselves. Everything else is much the same; for a deeper understanding it is worth reading more of the official tutorials and the source code.
