Rich polysemy and complex semantics in NLP → the same word should be assigned different representations depending on its context.
Popular context-sensitive representations: TagLM (language-model-augmented sequence tagger), CoVe (Context Vectors), ELMo (Embeddings from Language Models).
Brief intro to ELMo: it combines the intermediate-layer representations of a pretrained bidirectional LSTM and feeds them into an existing task-specific architecture as additional features (the pretrained weights stay frozen).
Advantages of ELMo: context-sensitive representations that improved the state of the art on a range of NLP tasks.
ELMo (bidirectional encoding, but task-specific) + GPT (task-agnostic, but autoregressive, i.e., left-to-right only) → BERT combines the strengths of both.
BERT uses a Transformer encoder.
BERT is similar to GPT in two respects: downstream tasks need only minimal architecture changes (an added output layer), and all pretrained parameters are fine-tuned while that added layer is trained from scratch.
BERT advanced the state of the art on 11 NLP tasks, spanning 4 broad categories: (i) single-text classification (e.g., sentiment analysis), (ii) text-pair classification (e.g., natural language inference), (iii) question answering, (iv) text tagging (e.g., named entity recognition).
2018: ELMo → GPT → BERT.
Conceptually simple yet empirically powerful deep representations of natural language now dominate solutions across a wide range of NLP tasks.
This chapter: a closer look at BERT pretraining.
Next chapter: fine-tuning BERT for downstream applications.
import torch
from torch import nn
from d2l import torch as d2l
The BERT input sequence is the concatenation of the special classification token '<cls>', the tokens of a text sequence, and the special separation token '<sep>'.
For a single text: '<cls>', the tokens of the text, '<sep>'.
For a text pair: '<cls>', the tokens of the first text, '<sep>', the tokens of the second text, '<sep>'.
# Takes one sentence or a sentence pair and returns the tokens of the BERT input sequence and their segment IDs
def get_tokens_and_segments(tokens_a, tokens_b=None):
"""Get the tokens of the BERT input sequence and their segment IDs"""
tokens = ['<cls>'] + tokens_a + ['<sep>']
# 0 and 1 mark segment A and segment B, respectively
segments = [0] * (len(tokens_a) + 2)
if tokens_b is not None:
tokens += tokens_b + ['<sep>']
segments += [1] * (len(tokens_b) + 1)
return tokens, segments
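A quick sanity check (hypothetical toy tokens, not from the original notes):
toy_tokens, toy_segments = get_tokens_and_segments(
    ['this', 'movie', 'is', 'great'], ['i', 'like', 'it'])
print(toy_tokens)    # ['<cls>', 'this', 'movie', 'is', 'great', '<sep>', 'i', 'like', 'it', '<sep>']
print(toy_segments)  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]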
# Unlike the original Transformer encoder, BERT uses segment embeddings and learnable positional embeddings.
class BERTEncoder(nn.Module):
"""Bert编码器"""
def __init__(self, vocab_size, num_hiddens, norm_shape, ffn_num_input,
ffn_num_hiddens, num_heads, num_layers, dropout,
max_len=1000, key_size=768, query_size=768, value_size=768,
**kwargs):
super(BERTEncoder, self).__init__(**kwargs)
self.token_embedding = nn.Embedding(vocab_size, num_hiddens)
self.segment_embedding = nn.Embedding(2, num_hiddens)
self.blks = nn.Sequential()
# Stack num_layers Transformer encoder blocks
for i in range(num_layers):
self.blks.add_module(f'{i}', d2l.EncoderBlock(
key_size, query_size, value_size, num_hiddens, norm_shape,
ffn_num_input, ffn_num_hiddens, num_heads, dropout, True))
# In BERT, positional embeddings are learnable, so we create a positional embedding parameter that is long enough
self.pos_embedding = nn.Parameter(torch.randn(1, max_len, num_hiddens))
def forward(self, tokens, segments, valid_lens):
# In the code below, the shape of x stays (batch size, max sequence length, num_hiddens)
x = self.token_embedding(tokens) + self.segment_embedding(segments)
x = x + self.pos_embedding.data[:, :x.shape[1], :]
for blk in self.blks:
x = blk(x, valid_lens)
return x
# Demonstrate forward inference of BERTEncoder: create an instance and initialize its parameters
vocab_size, num_hiddens, ffn_num_hiddens, num_heads = 1000, 768, 1024, 4
norm_shape, ffn_num_input, num_layers, dropout = [768], 768, 2, 0.2
encoder = BERTEncoder(vocab_size, num_hiddens, norm_shape, ffn_num_input,
ffn_num_hiddens, num_heads, num_layers, dropout)
# Define tokens: 2 input sequences of length 8, where each token is an index into the vocabulary
# Forward inference of BERTEncoder with the input tokens returns the encoded result:
# each token is represented by a vector whose length is the hyperparameter num_hiddens
# (the hidden size of the Transformer encoder, i.e., the number of hidden units)
tokens = torch.randint(0, vocab_size, (2, 8))
segments = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1],[0, 0, 0, 1, 1, 1, 1, 1]])
encoded_x = encoder(tokens, segments, None)
encoded_x.shape
# torch.Size([2, 8, 768])
# Interpretation: 2 sentences, 8 tokens each, each token represented by a 768-dimensional vector
torch.Size([2, 8, 768])
Masked language modeling (MLM). The artificial '<mask>' token used during pretraining never appears during fine-tuning. To avoid this mismatch between pretraining and fine-tuning, when a token is masked for prediction it is replaced in the input with: the special '<mask>' token 80% of the time, a random token 10% of the time, and the unchanged label token 10% of the time.
class MaskLM(nn.Module):
"""Bert的maskedLM"""
def __init__(self, vocab_size, num_hiddens, num_inputs=768, **kwargs):
super(MaskLM, self).__init__(**kwargs)
self.mlp = nn.Sequential(nn.Linear(num_inputs, num_hiddens),
nn.ReLU(),
nn.LayerNorm(num_hiddens),
nn.Linear(num_hiddens, vocab_size))
def forward(self, x, pred_positions):
num_pred_positions = pred_positions.shape[1]
pred_positions = pred_positions.reshape(-1)
batch_size = x.shape[0]
batch_idx = torch.arange(0, batch_size)
# Suppose batch_size = 2 and num_pred_positions = 3;
# then batch_idx is torch.tensor([0, 0, 0, 1, 1, 1])
batch_idx = torch.repeat_interleave(batch_idx, num_pred_positions)
masked_x = x[batch_idx, pred_positions]
masked_x = masked_x.reshape((batch_size, num_pred_positions, -1))
mlm_y_hat = self.mlp(masked_x)
return mlm_y_hat
mlm = MaskLM(vocab_size, num_hiddens) # instantiate the class
mlm_positions = torch.tensor([[1, 5, 2], [6, 1, 5]]) # three positions to predict in each sequence
mlm_y_hat = mlm(encoded_x, mlm_positions) # two BERT input sequences with their prediction positions
mlm_y_hat.shape
torch.Size([2, 3, 1000])
# Compute the cross-entropy loss
mlm_y = torch.tensor([[7,8,9],[10,20,30]]) # ground-truth labels of the predicted tokens
loss = nn.CrossEntropyLoss(reduction='none')
mlm_l= loss(mlm_y_hat.reshape((-1, vocab_size)), mlm_y.reshape(-1))
mlm_l.shape
torch.Size([6])
mlm_l
tensor([7.3066, 6.4002, 6.6777, 7.2173, 7.4744, 6.8937],
grad_fn=<NllLossBackward0>)
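In the pretraining loop further below, the prediction positions of each sequence are padded to a fixed count; here is a minimal sketch (with hypothetical 0/1 weights, anticipating mlm_weights_x) of how losses at padded positions are zeroed out before averaging:
# Hypothetical weights: pretend the last prediction of the second sequence is padding
mlm_weights = torch.tensor([[1.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
weighted_mlm_l = mlm_l * mlm_weights.reshape(-1)  # zero out losses at padded positions
avg_mlm_l = weighted_mlm_l.sum() / (mlm_weights.sum() + 1e-8)  # average over real predictions only
avg_mlm_l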
Next sentence prediction (NSP). Thanks to self-attention in the Transformer encoder, the BERT representation of the '<cls>' token has already encoded both input sentences, so a single MLP on top of it can classify whether the second sentence follows the first. (Encoded '<cls>' token → MLP hidden layer → x → output layer → prediction.)
class NextSentencePred(nn.Module):
"""Bert的下一句预测任务(NSP)"""
def __init__(self, num_inputs, **kwargs):
super(NextSentencePred, self).__init__(**kwargs)
self.output = nn.Linear(num_inputs, 2)
def forward(self, x):
# x: (batch_size, num_hiddens)
return self.output(x)
# Test
encoded_x = torch.flatten(encoded_x, start_dim=1)
# print(encoded_x, encoded_x.shape)
# The input shape of NSP: (batch_size, num_inputs); here the flattened encoding of shape (2, 8 * 768)
nsp = NextSentencePred(encoded_x.shape[-1])
nsp_y_hat = nsp(encoded_x)
nsp_y_hat, nsp_y_hat.shape
(tensor([[-0.3838, -0.2278],
[ 0.2239, 0.2473]], grad_fn=<AddmmBackward0>),
torch.Size([2, 2]))
# Compute the cross-entropy loss of the two binary classifications
nsp_y = torch.tensor([0, 1])
nsp_l = loss(nsp_y_hat, nsp_y)
nsp_l, nsp_l.shape
(tensor([0.7742, 0.6815], grad_fn=<NllLossBackward0>), torch.Size([2]))
class BERTModel(nn.Module):
"""BERT模型"""
def __init__(self, vocab_size, num_hiddens, norm_shape, ffn_num_input,
ffn_num_hiddens, num_heads, num_layers, dropout,
max_len=1000, key_size=768, query_size=768, value_size=768,
hid_in_features=768, mlm_in_features=768,
nsp_in_features=768):
super(BERTModel, self).__init__()
self.encoder = BERTEncoder(vocab_size, num_hiddens, norm_shape,
ffn_num_input, ffn_num_hiddens, num_heads, num_layers,
dropout, max_len=max_len, key_size=key_size,
query_size=query_size, value_size=value_size)
self.hidden = nn.Sequential(nn.Linear(hid_in_features, num_hiddens),
nn.Tanh())
self.mlm = MaskLM(vocab_size, num_hiddens, mlm_in_features) # Task 1: masked language modeling
self.nsp = NextSentencePred(nsp_in_features) # Task 2: next sentence prediction
def forward(self, tokens, segments, valid_lens=None, pred_positions=None):
encoded_x = self.encoder(tokens, segments, valid_lens)
if pred_positions is not None:
mlm_y_hat = self.mlm(encoded_x, pred_positions) # Task 1
else:
mlm_y_hat = None
# Hidden layer of the MLP classifier for NSP; 0 is the index of the '<cls>' token
nsp_y_hat = self.nsp(self.hidden(encoded_x[:, 0, :])) # Task 2
return encoded_x, mlm_y_hat, nsp_y_hat
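A quick forward-pass check of the assembled model, reusing the small configuration and the tokens, segments, and mlm_positions defined above (a sketch, not part of the original notes):
bert = BERTModel(vocab_size, num_hiddens, norm_shape, ffn_num_input,
                 ffn_num_hiddens, num_heads, num_layers, dropout)
enc, mlm_preds, nsp_preds = bert(tokens, segments, None, mlm_positions)
enc.shape, mlm_preds.shape, nsp_preds.shape
# Expected: (torch.Size([2, 8, 768]), torch.Size([2, 3, 1000]), torch.Size([2, 2]))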
import os, random, torch
from d2l import torch as d2l
# Download the dataset
d2l.DATA_HUB['wikitext-2'] = (
'https://s3.amazonaws.com/research.metamind.io/wikitext/'
'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')
def _read_wiki(data_dir):
file_name = os.path.join(data_dir, 'wiki.train.tokens')
with open(file_name, 'r', encoding='utf-8') as f:
lines = f.readlines()
# Lowercase the text; keep only lines that split into at least two sentences on ' . '
paragraphs = [line.strip().lower().split(' . ')
for line in lines if len(line.split(' . ')) >= 2]
random.shuffle(paragraphs)
return paragraphs
def _get_next_sentence(sentence, next_sentence, paragraphs):
# With 50% probability, keep the true next sentence
if random.random() < 0.5:
is_next = True
else:
# paragraphs is a triply nested list (corpus → paragraph → sentence)
next_sentence = random.choice(random.choice(paragraphs)) # randomly sample a sentence as the "next sentence"
is_next = False
return sentence, next_sentence, is_next
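A sketch of its behavior on hypothetical toy paragraphs (each paragraph is a list of tokenized sentences):
toy_paragraphs = [[['the', 'cat', 'sat'], ['it', 'purred']],
                  [['dogs', 'bark'], ['loudly', 'at', 'night']]]
s_a, s_b, is_next = _get_next_sentence(
    toy_paragraphs[0][0], toy_paragraphs[0][1], toy_paragraphs)
# With probability 0.5, s_b is the true next sentence (is_next=True);
# otherwise s_b is a random sentence from a random paragraph (is_next=False)
print(s_a, s_b, is_next)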
# paragraph: a list of sentences, each sentence a list of tokens; max_len: maximum length of a BERT input sequence
def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):
nsp_data_from_paragraph = []
for i in range(len(paragraph) - 1):
tokens_a, tokens_b, is_next = _get_next_sentence( # decide whether this pair is a positive example
paragraph[i], paragraph[i + 1], paragraphs)
# Account for one '<cls>' token and two '<sep>' tokens
if len(tokens_a) + len(tokens_b) + 3 > max_len: # skip examples that exceed the maximum length
continue
tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
nsp_data_from_paragraph.append((tokens, segments, is_next)) # add to the final dataset
return nsp_data_from_paragraph
def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds, vocab):
# Make a new copy of the tokens as the masked-LM input, in which some tokens may be replaced by '<mask>' or random tokens
mlm_input_tokens = [token for token in tokens]
pred_positions_and_labels = []
# Shuffle so that 15% of the tokens are selected at random for prediction in the masked-LM task
random.shuffle(candidate_pred_positions)
for mlm_pred_position in candidate_pred_positions:
if len(pred_positions_and_labels) >= num_mlm_preds:
break
masked_token = None
# 80% of the time: replace the token with the '<mask>' token
if random.random() < 0.8:
masked_token = '<mask>'
else:
# 10% of the time: keep the token unchanged
if random.random() < 0.5:
masked_token = tokens[mlm_pred_position]
# 10% of the time: replace the token with a random token
else:
masked_token = random.choice(vocab.idx_to_token)
mlm_input_tokens[mlm_pred_position] = masked_token
pred_positions_and_labels.append(
(mlm_pred_position, tokens[mlm_pred_position]))
return mlm_input_tokens, pred_positions_and_labels
# Returns: the masked input token IDs, (the positions where predictions are made, the label tokens to predict)
def _get_mlm_data_from_tokens(tokens, vocab):
candidate_pred_positions = []
# tokens is a list of strings
for i, token in enumerate(tokens):
# Special tokens are never predicted in the masked-LM task
if token in ['<cls>', '<sep>']:
continue
candidate_pred_positions.append(i)
# Predict 15% of the tokens at random in the masked-LM task
num_mlm_preds = max(1, round(len(tokens) * 0.15))
mlm_input_tokens, pred_position_and_labels = _replace_mlm_tokens(
tokens, candidate_pred_positions, num_mlm_preds, vocab)
pred_position_and_labels = sorted(pred_position_and_labels,
key=lambda x: x[0])
pred_positions = [v[0] for v in pred_position_and_labels]
mlm_pred_labels = [v[1] for v in pred_position_and_labels]
return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]
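A hedged usage sketch on a hypothetical toy vocabulary (built with d2l.Vocab; which position gets masked is random):
toy_vocab = d2l.Vocab([['the', 'cat', 'sat', 'on', 'the', 'mat']],
                      reserved_tokens=['<pad>', '<mask>', '<cls>', '<sep>'])
toy_mlm_tokens = ['<cls>', 'the', 'cat', 'sat', 'on', 'the', 'mat', '<sep>']
input_ids, positions, label_ids = _get_mlm_data_from_tokens(toy_mlm_tokens, toy_vocab)
# 15% of 8 tokens rounds to 1 prediction; '<cls>' and '<sep>' are never chosen
print(input_ids, positions, label_ids)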
The helper below appends the special '<pad>' token to the inputs so that every BERT input sequence has length max_len, and pads the masked-LM prediction positions and labels to a fixed count as well.
def _pad_bert_inputs(examples, max_len, vocab):
max_num_mlm_preds = round(max_len * 0.15) # maximum number of predicted tokens per sequence
all_token_ids, all_segments, valid_lens = [], [], []
all_pre_positions, all_mlm_weights, all_mlm_labels = [], [], []
nsp_labels = []
for (token_ids, pred_positions, mlm_pred_label_ids, segments, is_next) in examples:
all_token_ids.append(torch.tensor(token_ids + [vocab['<pad>']] * (
max_len - len(token_ids)), dtype=torch.long))
all_segments.append(torch.tensor(segments + [0] *(
max_len - len(segments)), dtype=torch.long))
# valid_lens does not count the '<pad>' tokens
valid_lens.append(torch.tensor(len(token_ids), dtype=torch.float32))
all_pre_positions.append(torch.tensor(pred_positions + [0] * (
max_num_mlm_preds - len(pred_positions)), dtype=torch.long))
# Predictions at padded positions will be filtered out of the loss by multiplying by a weight of 0
all_mlm_weights.append(
torch.tensor([1.0] * len(mlm_pred_label_ids) + [0.0] * (
max_num_mlm_preds - len(pred_positions)),
dtype=torch.float32))
all_mlm_labels.append(torch.tensor(mlm_pred_label_ids + [0] * (
max_num_mlm_preds - len(mlm_pred_label_ids)), dtype=torch.long))
nsp_labels.append(torch.tensor(is_next, dtype=torch.long))
return (all_token_ids, all_segments, valid_lens, all_pre_positions,
all_mlm_weights, all_mlm_labels, nsp_labels)
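A minimal sketch of the padding behavior on one short hypothetical example (toy vocabulary as above; with max_len=10, round(10 * 0.15) = 2 prediction slots are reserved):
toy_vocab = d2l.Vocab([['the', 'cat', 'sat', 'on', 'the', 'mat']],
                      reserved_tokens=['<pad>', '<mask>', '<cls>', '<sep>'])
# One example: (token_ids, pred_positions, mlm_pred_label_ids, segments, is_next)
toy_example = (toy_vocab[['<cls>', 'the', '<mask>', '<sep>']], [2],
               toy_vocab[['cat']], [0, 0, 0, 0], True)
padded = _pad_bert_inputs([toy_example], max_len=10, vocab=toy_vocab)
# Token IDs are padded with '<pad>' to length 10; prediction positions and labels to length 2
padded[0][0].shape, padded[3][0], padded[5][0]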
Via the __getitem__ function below, we can arbitrarily access a pretraining example (masked-LM and NSP) generated from a pair of sentences of the WikiText-2 corpus.
class _WikiTextDataset(torch.utils.data.Dataset):
def __init__(self, paragraphs, max_len):
# The input paragraphs[i] is a list of sentence strings representing a paragraph;
# the output paragraphs[i] is a list of sentences representing a paragraph, where each sentence is a list of tokens
paragraphs = [d2l.tokenize(
paragraph, token='word') for paragraph in paragraphs]
sentences = [sentence for paragraph in paragraphs
for sentence in paragraph]
self.vocab = d2l.Vocab(sentences, min_freq=5, reserved_tokens=[
'<pad>', '<mask>', '<cls>', '<sep>'
])
# Get data for the next sentence prediction task
examples = []
for paragraph in paragraphs:
examples.extend(_get_nsp_data_from_paragraph(
paragraph, paragraphs, self.vocab, max_len))
# Get data for the masked language model task
examples = [(_get_mlm_data_from_tokens(tokens, self.vocab)
+ (segments, is_next))
for tokens, segments, is_next in examples]
# Pad the inputs
(self.all_token_ids, self.all_segments, self.valid_lens,
self.all_pred_positions, self.all_mlm_weights,
self.all_mlm_labels, self.nsp_labels) = _pad_bert_inputs(
examples, max_len, self.vocab)
def __getitem__(self, idx):
return (self.all_token_ids[idx], self.all_segments[idx],
self.valid_lens[idx], self.all_pred_positions[idx],
self.all_mlm_weights[idx], self.all_mlm_labels[idx],
self.nsp_labels[idx])
def __len__(self):
return len(self.all_token_ids)
def load_data_wiki(batch_size, max_len):
"""加载WikiText-2数据集"""
num_workers = d2l.get_dataloader_workers() # 返回4
num_workers = 0
data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')
paragraphs = _read_wiki(data_dir)
train_set = _WikiTextDataset(paragraphs, max_len)
train_iter = torch.utils.data.DataLoader(train_set, batch_size,
shuffle=True, num_workers=num_workers)
return train_iter, train_set.vocab
# Print the shapes of a minibatch of BERT pretraining examples
# Batch size: 512; maximum length of a BERT input sequence: 64
batch_size, max_len = 512, 64
train_iter, vocab = load_data_wiki(batch_size, max_len)
for (tokens_x, segments_x, valid_lens_x, pred_positions_x, mlm_weights_x,
mlm_y, nsp_y) in train_iter:
print(tokens_x.shape, segments_x.shape, valid_lens_x.shape, pred_positions_x.shape,
mlm_weights_x.shape, mlm_y.shape, nsp_y.shape)
break
torch.Size([512, 64]) torch.Size([512, 64]) torch.Size([512]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512])
len(vocab), vocab.idx_to_token[10045]
(20256, 'hospitallers')
import torch, os, random
from torch import nn
from d2l import torch as d2l
batch_size, max_len = 512, 64
train_iter, vocab = load_data_wiki(batch_size, max_len)
net = d2l.BERTModel(len(vocab), num_hiddens=128, norm_shape=[128],
ffn_num_input=128, ffn_num_hiddens=256, num_heads=2,
num_layers=2, dropout=0.2, key_size=128, query_size=128,
value_size=128, hid_in_features=128, mlm_in_features=128,
nsp_in_features=128)
devices = d2l.try_all_gpus()
loss = nn.CrossEntropyLoss()
# Before training, define a helper function that computes the losses of the two pretraining tasks for a given training example.
# Note: the final loss of BERT pretraining is the sum of the two task losses
def _get_batch_loss_bert(net, loss, vocab_size, tokens_x, segments_x, valid_lens_x,
pred_positions_x, mlm_weights_x, mlm_y, nsp_y):
# Forward pass
_, mlm_y_hat, nsp_y_hat = net(tokens_x, segments_x, valid_lens_x.reshape(-1),
pred_positions_x)
# Compute the masked-LM loss
mlm_l = loss(mlm_y_hat.reshape(-1, vocab_size), mlm_y.reshape(-1)) * \
mlm_weights_x.reshape(-1, 1)
mlm_l = mlm_l.sum() / (mlm_weights_x.sum() + 1e-8)
# Compute the NSP loss
nsp_l = loss(nsp_y_hat, nsp_y)
# Final loss: sum of the two task losses
l = mlm_l + nsp_l
return mlm_l, nsp_l, l
# This function defines the procedure for pretraining BERT (net) on the WikiText-2 dataset
# The argument num_steps specifies the number of training iteration steps
def train_bert(train_iter, net, loss, vocab_size, devices, num_steps):
net = nn.DataParallel(net, device_ids=devices).to(devices[0])
trainer = torch.optim.Adam(net.parameters(), lr = 0.01)
step, timer = 0, d2l.Timer()
animator = d2l.Animator(xlabel='step', ylabel='loss',
xlim=[1, num_steps], legend=['mlm', 'nsp'])
# Sum of masked-LM losses, sum of NSP losses, number of sentence pairs, count
metric = d2l.Accumulator(4)
num_steps_reached = False
while step < num_steps and not num_steps_reached:
for tokens_x, segments_x, valid_lens_x, pred_positions_x, \
mlm_weights_x, mlm_y, nsp_y in train_iter:
tokens_x = tokens_x.to(devices[0])
segments_x = segments_x.to(devices[0])
valid_lens_x = valid_lens_x.to(devices[0])
pred_positions_x = pred_positions_x.to(devices[0])
mlm_weights_x = mlm_weights_x.to(devices[0])
mlm_y, nsp_y = mlm_y.to(devices[0]), nsp_y.to(devices[0])
trainer.zero_grad()
timer.start()
mlm_l, nsp_l, l = _get_batch_loss_bert(
net, loss, vocab_size, tokens_x, segments_x, valid_lens_x,
pred_positions_x, mlm_weights_x, mlm_y, nsp_y)
l.backward()
trainer.step()
metric.add(mlm_l, nsp_l, tokens_x.shape[0], 1)
timer.stop()
animator.add(step+1, (metric[0] / metric[3], metric[1] / metric[3]))
step += 1
if step == num_steps:
num_steps_reached = True
break
print(f'MLM loss {metric[0] / metric[3]:.3f},'
f'NSP loss {metric[1] / metric[3]:.3f}')
print(f'{metric[2] / timer.sum():.1f} sentence pairs/sec on '
f'{str(devices)}')
train_bert(train_iter, net, loss, len(vocab), devices, 80)
MLM loss 5.289,NSP loss 0.718
7336.8 sentence pairs/sec on [device(type='cuda', index=0)]
def get_bert_encoding(net, tokens_a, tokens_b=None):
tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
tokens_ids = torch.tensor(vocab[tokens], device=devices[0]).unsqueeze(0)
segments = torch.tensor(segments, device=devices[0]).unsqueeze(0)
valid_len = torch.tensor(len(tokens), device=devices[0]).unsqueeze(0)
encoded_x, _, _ = net(tokens_ids, segments, valid_len)
return encoded_x
# For the sentence "a crane is flying", print the first three elements of the BERT representation of the token 'crane'
tokens_a = ['a', 'crane', 'is', 'flying']
encoded_text = get_bert_encoding(net, tokens_a) # torch.Size([1, 6, 128])
# Tokens: '<cls>', 'a', 'crane', 'is', 'flying', '<sep>'
encoded_text_cls = encoded_text[:, 0, :]
encoded_text_crane = encoded_text[:, 2, :]
encoded_text.shape, encoded_text_cls.shape, encoded_text_crane[0][:3]
(torch.Size([1, 6, 128]),
torch.Size([1, 128]),
tensor([-0.6029, 1.5879, -2.2356], device='cuda:0', grad_fn=<SliceBackward0>))
# For the sentences "a crane driver came" and "he just left":
# encoded_pair[:, 0, :] is the encoded result of the entire sentence pair from the pretrained BERT
tokens_a, tokens_b = ['a', 'crane', 'driver', 'came'], ['he', 'just', 'left']
encoded_pair = get_bert_encoding(net, tokens_a, tokens_b)
# Tokens: '<cls>', 'a', 'crane', 'driver', 'came', '<sep>', 'he', 'just', 'left', '<sep>'
encoded_pair_cls = encoded_pair[:, 0, :]
encoded_pair_crane = encoded_pair[:, 2, :]
encoded_pair.shape, encoded_pair_cls.shape, encoded_pair_crane[0][:3]
(torch.Size([1, 10, 128]),
torch.Size([1, 128]),
tensor([-0.7168, 1.8172, -0.0902], device='cuda:0', grad_fn=<SliceBackward0>))
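Note that these three values differ from encoded_text_crane above: 'crane' receives a context-dependent representation, which is exactly the polysemy issue raised at the start of these notes. A rough way to see this (a sketch; with such a tiny model and only 80 training steps the actual value is not meaningful):
sim = torch.cosine_similarity(encoded_text_crane, encoded_pair_crane)
print(sim.item())  # the two 'crane' vectors differ because their contexts differ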