PyTorch的BERT微调

转译自:https://mccormickml.com/2019/07/22/BERT-fine-tuning/#21-download--extract
https://colab.research.google.com/drive/1pTuQhug6Dhl9XalKB0zUGf4FIdYFlpcX

这篇文章以两种形式发布:博客文章(此处)和 Colab Notebook(此处)。

两者的内容相同,但是:

  • 该博客文章包括一个用于讨论的评论部分。
  • Colab Notebook将允许您在阅读过程中运行代码并进行检查。

介绍

历史

2018年是NLP领域取得突破的一年。迁移学习(Transfer Learning),尤其是Allen AI的ELMo、OpenAI的Open-GPT和Google的BERT这样的模型,让研究人员只需极少的任务特定微调就能刷新多项基准,并为NLP社区的其他人提供了预训练模型;这些模型只需更少的数据和更少的计算时间即可完成微调并投入使用,产生最先进的结果。不幸的是,对于许多NLP入门者,甚至一些经验丰富的从业者来说,这些强大模型的理论和实际应用仍然没有被很好地理解。

什么是BERT?

BERT(Bidirectional Encoder Representations from Transformers,基于Transformer的双向编码器表示)发布于2018年末,是我们将在本教程中使用的模型,目的是让读者更好地理解并获得在NLP中使用迁移学习模型的实用指南。BERT是一种预训练语言表示的方法,用来创建NLP从业者可以免费下载和使用的模型。您可以用这些模型从文本数据中提取高质量的语言特征,也可以用自己的数据针对特定任务(分类、实体识别、问答等)对这些模型进行微调,以产生最先进(state-of-the-art)的预测结果。

这篇文章将解释如何修改和微调BERT,以创建功能强大的NLP模型,快速为您带来最先进的结果。

微调(Fine-Tuning)的优点

在本教程中,我们将使用BERT训练一个文本分类器。具体来说,我们将采用预训练的BERT模型,在其末端添加一层未经训练的神经元,然后针对我们的分类任务训练这个新模型。为什么要这样做,而不是训练一个非常适合您所需NLP任务的特定深度学习模型(CNN、BiLSTM等)?

  1. 更快的开发

    • 首先,预训练的BERT模型权重已经编码了大量关于我们语言的信息。因此,训练微调模型所需的时间要少得多;就好像我们已经对网络的底层进行了充分的训练,只需要在把它们的输出用作分类任务的特征时对其进行轻微的调整。事实上,作者建议在针对特定NLP任务微调BERT时只训练2-4个epoch(相比之下,从头训练原始BERT模型或LSTM需要数百个GPU小时!)。
  2. 需要更少的数据

    • 其次,也许同样重要的是,由于有了预训练权重,这种方法允许我们在比从头构建模型小得多的数据集上针对任务进行微调。从头构建的NLP模型的主要缺点是,为了把网络训练到合理的准确率,我们往往需要非常庞大的数据集,这意味着要在数据集构建上投入大量时间和精力。通过微调BERT,我们现在可以用少得多的训练数据就把模型训练到良好的性能。
  3. 更好的结果

    • 最后,已有结果表明,这个简单的微调过程(通常就是在BERT之上加一个全连接层,再训练几个epoch)只需极少的任务特定调整,就能在各种任务上取得最先进的结果:分类、语言推断、语义相似度、问答等。与其为了在特定任务上表现良好而实现定制的、有时晦涩难懂的架构,不如直接微调BERT,这是一个更好(或至少同样好)的选择。

NLP的转变

这种转移学习的转变与几年前在计算机视觉领域发生的相同转变相似。为计算机视觉任务创建良好的深度学习网络可能需要数百万个参数,并且训练成本非常高。研究人员发现,深层网络学习分层的特征表示(简单的特征,如最低层的边缘,逐渐变得更复杂的高层)。不必每次都从头开始训练新的网络,而是可以复制并转移具有通用图像特征的训练后网络的较低层,以供具有不同任务的另一个网络使用。很快,下载预训练的深度网络并快速对其进行重新训练以进行新任务或在其上添加其他层已成为一种惯例,这比从头开始训练网络的昂贵过程要好得多。

让我们开始吧!

1 设定

1.1 使用Colab GPU进行训练

Google Colab提供免费的GPU和TPU!由于我们要训练一个大型神经网络,最好利用这一点(本例中我们会挂载一个GPU),否则训练会花费很长时间。

可以通过以下菜单添加GPU:

Edit → Notebook Settings → Hardware accelerator → (GPU)

然后运行以下单元以确认检测到GPU。

import tensorflow as tf

# Get the GPU device name.
device_name = tf.test.gpu_device_name()

# The device name should look like the following:
if device_name == '/device:GPU:0':
    print('Found GPU at: {}'.format(device_name))
else:
    raise SystemError('GPU device not found')

Found GPU at: /device:GPU:0

为了使torch能够使用GPU,我们需要识别并指定GPU作为设备。稍后,在训练循环中,我们将数据加载到设备上。

import torch

# If there's a GPU available...
if torch.cuda.is_available():    

    # Tell PyTorch to use the GPU.    
    device = torch.device("cuda")

    print('There are %d GPU(s) available.' % torch.cuda.device_count())

    print('We will use the GPU:', torch.cuda.get_device_name(0))

# If not...
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

There are 1 GPU(s) available.
We will use the GPU: Tesla P100-PCIE-16GB

1.2 安装Hugging Face库

接下来,让我们安装Hugging Face的Transformers软件包,它会为我们提供一个使用BERT的pytorch接口。(该库还包含其他预训练语言模型的接口,例如OpenAI的GPT和GPT-2。)我们选择pytorch接口,是因为它在高级API(易于使用,但无法洞察其工作原理)与tensorflow代码(包含大量细节,但常常把我们拖进关于tensorflow的细节中,而我们此处的目的是BERT!)之间取得了很好的平衡。

目前,Hugging Face库似乎是使用BERT的最被广泛接受且功能最强大的pytorch接口。除了支持各种不同的预训练Transformer模型外,该库还包含针对特定任务对这些模型进行的预构建修改。例如,在本教程中,我们将使用BertForSequenceClassification。

该库还包括用于token分类、问答、下一句预测等任务的特定类。使用这些预构建的类可以简化为您的目的修改BERT的过程。

!pip install transformers

Collecting transformers Downloading [https://files.pythonhosted.org/packages/88/b1/41130a228dd656a1a31ba281598a968320283f48d42782845f6ba567f00b/transformers-4.2.2-py3-none-any.whl](https://files.pythonhosted.org/packages/88/b1/41130a228dd656a1a31ba281598a968320283f48d42782845f6ba567f00b/transformers-4.2.2-py3-none-any.whl) (1.8MB) |████████████████████████████████| 1.8MB 13.8MB/s Collecting sacremoses Downloading [https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz](https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz) (883kB) |████████████████████████████████| 890kB 39.5MB/s Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1) Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from transformers) (3.4.0) Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0) Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.19.5) Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.8) Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20) Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8) Collecting tokenizers==0.9.4 Downloading [https://files.pythonhosted.org/packages/0f/1c/e789a8b12e28be5bc1ce2156cf87cb522b379be9cadc7ad8091a4cc107c4/tokenizers-0.9.4-cp36-cp36m-manylinux2010_x86_64.whl](https://files.pythonhosted.org/packages/0f/1c/e789a8b12e28be5bc1ce2156cf87cb522b379be9cadc7ad8091a4cc107c4/tokenizers-0.9.4-cp36-cp36m-manylinux2010_x86_64.whl) (2.9MB) |████████████████████████████████| 2.9MB 45.1MB/s Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0) Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.0.0) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->transformers) (3.4.0) Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->transformers) (3.7.4.3) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.12.5) Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7) Building wheels for collected packages: sacremoses Building 
wheel for sacremoses (setup.py) ... done Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893261 sha256=b9075f9270d81107a4a6b8ad48c46bdeee2639ee5a3af965759180bb49e1a668 Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45 Successfully built sacremoses Installing collected packages: sacremoses, tokenizers, transformers Successfully installed sacremoses-0.0.43 tokenizers-0.9.4 transformers-4.2.2

实际上,此notebook中的代码是huggingface提供的run_glue.py示例脚本的简化版本。
run_glue.py是一个有用的实用程序,它使您可以选择要运行的GLUE基准测试任务以及要使用的预训练模型(您可以在此处查看可能模型的列表)。它还支持使用CPU,单个GPU或多个GPU。如果您想进一步提高速度,它甚至支持使用16位精度。

不幸的是,所有这些可配置性是以可读性为代价的。在notebook中,我们大大简化了代码,并添加了许多注释以使内容清晰明了。

2. 加载CoLA数据集

我们将使用语言可接受性语料库(The Corpus of Linguistic Acceptability,CoLA)数据集进行单句分类。这是一组被标注为语法可接受或不可接受的句子。它于2018年5月首次发布,是像BERT这样的模型相互竞争的"GLUE基准"中包含的测试之一。

2.1 下载并解压

我们将使用该wget软件包将数据集下载到Colab实例的文件系统中。

!pip install wget

Collecting wget
  Downloading https://files.pythonhosted.org/packages/47/6a/62e288da7bcda82b935ff0c6cfe542970f04e29c756b0e147251b2fb251f/wget-3.2.zip
Building wheels for collected packages: wget
  Building wheel for wget (setup.py) ... done
  Created wheel for wget: filename=wget-3.2-cp36-none-any.whl size=9681 sha256=988b5f3cabb3edeed6a46e989edefbdecc1d5a591f9d38754139f994fc00be8d
  Stored in directory: /root/.cache/pip/wheels/40/15/30/7d8f7cea2902b4db79e3fea550d7d7b85ecb27ef992b618f3f
Successfully built wget
Installing collected packages: wget
Successfully installed wget-3.2

该数据集托管在GitHub的这个仓库中: https://nyu-mll.github.io/CoLA/

import wget
import os

print('Downloading dataset...')

# The URL for the dataset zip file.
url = 'https://nyu-mll.github.io/CoLA/cola_public_1.1.zip'

# Download the file (if we haven't already)
if not os.path.exists('./cola_public_1.1.zip'):
    wget.download(url, './cola_public_1.1.zip')

Downloading dataset...

将数据集解压缩到文件系统。您可以在左侧的侧栏中浏览Colab实例的文件系统。

# Unzip the dataset (if we haven't already)
if not os.path.exists('./cola_public/'):
    !unzip cola_public_1.1.zip

Archive:  cola_public_1.1.zip
   creating: cola_public/
  inflating: cola_public/README      
   creating: cola_public/tokenized/
  inflating: cola_public/tokenized/in_domain_dev.tsv  
  inflating: cola_public/tokenized/in_domain_train.tsv  
  inflating: cola_public/tokenized/out_of_domain_dev.tsv  
   creating: cola_public/raw/
  inflating: cola_public/raw/in_domain_dev.tsv  
  inflating: cola_public/raw/in_domain_train.tsv  
  inflating: cola_public/raw/out_of_domain_dev.tsv  

2.2 解析

从文件名可以看到,数据同时提供了tokenized(已分词)和raw(原始)两个版本。

我们不能使用pre-tokenized版本,因为要应用预训练的BERT,就必须使用模型自带的分词器。这是因为:(1)模型有特定的、固定的词汇表;(2)BERT分词器有处理词汇表外(out-of-vocabulary)单词的特定方式。
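下面是一个简单的示意(假设网络可用,且沿用第3.1节将要加载的同一个 bert-base-uncased 分词器;具体拆分结果取决于词汇表,这里仅作演示),展示 WordPiece 分词器如何把词汇表之外的单词拆成已知的子词,这正是我们必须使用模型自带分词器的原因之一。

from transformers import BertTokenizer

# Load the same tokenizer that we will use later in section 3.1.
demo_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

# A word that is not in the vocabulary gets split into known sub-word pieces,
# so nothing is ever mapped to a single "unknown" token.
print(demo_tokenizer.tokenize("embeddings"))
# Expected output (depends on the vocabulary): ['em', '##bed', '##ding', '##s']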

我们将使用pandas来解析“in-domain”训练集,并查看其一些属性和数据点。

import pandas as pd

# Load the dataset into a pandas dataframe.
df = pd.read_csv("./cola_public/raw/in_domain_train.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])

# Report the number of sentences.
print('Number of training sentences: {:,}\n'.format(df.shape[0]))

# Display 10 random rows from the data.
df.sample(10)

Number of training sentences: 8,551

|      | 句子来源 | 标签 | label_notes | 句子 |
| ---- | -------- | ---- | ----------- | ---- |
| 8200 | ad03 | 1 | N | 他们踢自己 |
| 3862 | ks08 | 1 | N | 一只大绿色昆虫飞进汤里。 |
| 8298 | ad03 | 1 | N | 我经常感冒。 |
| 6542 | g_81 | 0 | * | 您买了哪本桌子配套的书? |
| 722  | bc01 | 0 | * | 约翰不在家了。 |
| 3693 | ks08 | 1 | N | 我认为上周遇到的那个人太疯狂了。 |
| 6283 | c_13 | 1 | N | 凯瑟琳真的很讨厌她的工作。 |
| 4118 | ks08 | 1 | N | 不要在开始时使用这些单词。 |
| 2592 | l-93 | 1 | N | 杰西卡在桌子下面喷漆。 |
| 8194 | ad03 | 0 | * | 我把她送走了。 |

我们真正关心的两个属性是句子(sentence)及其标签(label),后者被称为"可接受性判断"(0 = 不可接受,1 = 可接受)。

这是被标记为在语法上不可接受的五个句子。请注意,此任务比情感分析之类的任务难得多!

df.loc[df.label == 0].sample(5)[['sentence', 'label']]

|      | 句子 | 标签 |
| ---- | ---- | ---- |
| 4867 | 他们调查了。 | 0 |
| 200  | 他读的书越多,我想读的书就越多。 | 0 |
| 4593 | 任何斑马都不能飞。 | 0 |
| 3226 | 城市容易毁灭。 | 0 |
| 7337 | 时间一天过去了。 | 0 |

让我们将训练集的句子和标签提取为numpy ndarrays。

# Get the lists of sentences and their labels.
sentences = df.sentence.values
labels = df.label.values

3 标记化和输入格式

在本节中,我们将数据集转换为可以训练BERT的格式。

3.1 BERT分词器(Tokenizer)

要把文本输入BERT,必须先把它拆分成token,然后把这些token映射到分词器词汇表中的索引。

分词必须由BERT自带的分词器来完成,下面的单元格会为我们下载它。这里我们使用"uncased"(不区分大小写)版本。

from transformers import BertTokenizer

# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

Loading BERT tokenizer...


让我们将tokenizer应用于一个句子以查看输出。

# Print the original sentence.
print(' Original: ', sentences[0])

# Print the sentence split into tokens.
print('Tokenized: ', tokenizer.tokenize(sentences[0]))

# Print the sentence mapped to token ids.
print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0])))

 Original:  Our friends won't buy this analysis, let alone the next one we propose.
Tokenized:  ['our', 'friends', 'won', "'", 't', 'buy', 'this', 'analysis', ',', 'let', 'alone', 'the', 'next', 'one', 'we', 'propose', '.']
Token IDs:  [2256, 2814, 2180, 1005, 1056, 4965, 2023, 4106, 1010, 2292, 2894, 1996, 2279, 2028, 2057, 16599, 1012]

当我们实际转换所有句子时,我们将使用tokenizer.encode函数一次完成这两个步骤,而不是分别调用tokenize和convert_tokens_to_ids。
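作为示意,下面的小例子沿用上面已加载的 tokenizer 和 sentences,验证 tokenizer.encode 一步完成的结果与分两步调用是一致的(这里先关闭特殊标记以便对比,仅作演示)。

# `encode` = `tokenize` + `convert_tokens_to_ids` in a single call.
ids_two_steps = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0]))
ids_one_step  = tokenizer.encode(sentences[0], add_special_tokens=False)

# With add_special_tokens=False the two results should match exactly.
print(ids_two_steps == ids_one_step)   # Expect: True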

但是,在执行此操作之前,我们需要讨论一些BERT的格式要求。

3.2 所需格式

上面的代码省略了一些必需的格式化步骤,我们将在这里进行介绍。

旁注:在我看来,BERT的输入格式似乎有点"过度指定"……我们需要给它提供一些看起来多余的信息,或者说是一些即便不显式提供、也很容易从数据中推断出来的信息。但事实就是如此,我猜想等我对BERT内部机制有更深入的了解后,这些要求就会更说得通。

我们必须:

  1. 在每个句子的开头和结尾添加特殊标记。
  2. 将所有句子填充并截断为单个恒定长度。
  3. 使用“注意掩码”明确区分真实令牌和填充令牌。

特殊令牌

[SEP]

在每个句子的末尾,我们需要附加特殊[SEP]标记。

该令牌是两句任务的产物,其中给BERT两个单独的句子并要求确定某些内容(例如,能否在句子B中找到对句子A中问题的答案?)。

我还不确定为什么在只有单句输入的情况下仍然需要这个标记,但事实就是如此!

[CLS]

对于分类任务,我们必须在每个句子的开头加上特殊的[CLS]标记。

这个标记具有特殊的意义。BERT由12个Transformer层堆叠而成。每一层接收一个token嵌入的列表,并在输出端产生相同数量的嵌入(当然,特征值会发生变化!)。

(图:Special Tokens 示意图)

在最后(第12层)Transformer的输出上,分类器只使用第一个嵌入(对应于[CLS]标记)。

“每个序列的第一个标记始终是特殊的分类标记([CLS])。与此令牌对应的最终隐藏状态用作分类任务的合计序列表示。” (摘自BERT论文)

您可能会想对最终的这些嵌入尝试某种池化(pooling)策略,但这并不必要。因为BERT在训练时就只用这个[CLS]标记做分类,我们知道模型已经被激励把分类步骤所需的所有信息都编码进这个768维的嵌入向量里。池化已经为我们做好了!
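如果您好奇这个对应于 [CLS] 的 768 维向量长什么样,下面是一个最小的示意(假设使用 transformers 的 BertModel,仅作说明;微调分类时并不需要手动这样做,BertForSequenceClassification 会在内部完成)。

from transformers import BertModel

# Load the plain BERT encoder (no classification head), just for illustration.
bert = BertModel.from_pretrained('bert-base-uncased')
bert.eval()

# Encode one sentence and run a forward pass without gradients.
encoding = tokenizer(sentences[0], return_tensors='pt')
with torch.no_grad():
    outputs = bert(**encoding)

# `last_hidden_state` has shape [batch, seq_len, 768];
# position 0 is the final embedding of the [CLS] token.
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)   # torch.Size([1, 768])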

句子长度与注意掩码

我们数据集中的句子显然具有不同的长度,那么BERT如何处理呢?

BERT有两个制约因素:

  1. 所有句子都必须填充或截断为单个固定长度。
  2. 句子的最大长度为512个令牌。

填充使用特殊的[PAD]标记完成,它在BERT词汇表中的索引为0。下图演示了把句子填充到 MAX_LEN = 8 的情形。

(图:Sentence Length & Attention Mask 示意图)

"注意力掩码(attention mask)"只是一个由1和0组成的数组,标明哪些token是填充、哪些不是(看起来是不是有点多余?!)。这个掩码告诉BERT中的"自注意力"机制,不要把这些[PAD]标记纳入它对句子的解释中。
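下面用一个手工构造的小示意(仅作说明)展示把一个短句填充到长度 8 时,token 序列和对应的注意力掩码的样子:1 表示真实 token,0 表示 [PAD]。

# Manually illustrate padding + attention mask for a MAX_LEN of 8.
tokens         = ['[CLS]', 'i', 'like', 'cats', '[SEP]', '[PAD]', '[PAD]', '[PAD]']
attention_mask = [1,        1,   1,      1,      1,       0,       0,       0]

# Self-attention will simply ignore every position whose mask value is 0.
for token, m in zip(tokens, attention_mask):
    print('{:>6}  mask = {}'.format(token, m))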

但是,最大长度确实会影响训练和评估速度。例如,使用Tesla K80:

MAX_LEN = 128 --> Training epochs take ~5:28 each

MAX_LEN = 64 --> Training epochs take ~2:57 each

3.3 标记化数据集

transformers库提供了一个有用的encode函数,它会为我们处理大部分解析和数据准备工作。

但是,在准备对文本进行编码之前,我们需要确定用于填充/截断的最大句子长度

下面的单元格将对数据集执行一次标记化过程,以测量最大句子长度。

max_len = 0

# For every sentence...
for sent in sentences:

    # Tokenize the text and add `[CLS]` and `[SEP]` tokens.
    input_ids = tokenizer.encode(sent, add_special_tokens=True)

    # Update the maximum sentence length.
    max_len = max(max_len, len(input_ids))

print('Max sentence length: ', max_len)

Max sentence length:  47

以防万一有更长的测试句子,我将最大长度设置为64。

现在,我们准备执行真正的标记化。

tokenizer.encode_plus函数为我们组合了多个步骤:

  1. 将句子拆分为token。
  2. 添加特殊的[CLS]和[SEP]标记。
  3. 将token映射到对应的ID。
  4. 将所有句子填充或截断到相同长度。
  5. 创建注意力掩码,明确区分真实token和[PAD]标记。

前四个步骤用tokenizer.encode就能完成,但我使用tokenizer.encode_plus来获得第五项(注意力掩码)。文档在这里。

# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []

# For every sentence...
for sent in sentences:
    # `encode_plus` will:
    #   (1) Tokenize the sentence.
    #   (2) Prepend the `[CLS]` token to the start.
    #   (3) Append the `[SEP]` token to the end.
    #   (4) Map tokens to their IDs.
    #   (5) Pad or truncate the sentence to `max_length`
    #   (6) Create attention masks for [PAD] tokens.
    encoded_dict = tokenizer.encode_plus(
                        sent,                      # Sentence to encode.
                        add_special_tokens = True, # Add '[CLS]' and '[SEP]'
                        max_length = 64,           # Pad & truncate all sentences.
                        pad_to_max_length = True,
                        return_attention_mask = True,   # Construct attn. masks.
                        return_tensors = 'pt',     # Return pytorch tensors.
                   )

    # Add the encoded sentence to the list.    
    input_ids.append(encoded_dict['input_ids'])

    # And its attention mask (simply differentiates padding from non-padding).
    attention_masks.append(encoded_dict['attention_mask'])

# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)

# Print sentence 0, now as a list of IDs.
print('Original: ', sentences[0])
print('Token IDs:', input_ids[0])

Original:  Our friends won't buy this analysis, let alone the next one we propose.
Token IDs: tensor([  101,  2256,  2814,  2180,  1005,  1056,  4965,  2023,  4106,  1010,
         2292,  2894,  1996,  2279,  2028,  2057, 16599,  1012,   102,     0,
            0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
            0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
            0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
            0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
            0,     0,     0,     0])

3.4 训练和验证拆分

将我们的训练集划分为90%用于训练,而10%用于验证。

from torch.utils.data import TensorDataset, random_split

# Combine the training inputs into a TensorDataset.
dataset = TensorDataset(input_ids, attention_masks, labels)

# Create a 90-10 train-validation split.

# Calculate the number of samples to include in each set.
train_size = int(0.9 * len(dataset))
val_size = len(dataset) - train_size

# Divide the dataset by randomly selecting samples.
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])

print('{:>5,} training samples'.format(train_size))
print('{:>5,} validation samples'.format(val_size))

7,695 training samples
  856 validation samples

我们还将使用torch的DataLoader类为数据集创建一个迭代器。这有助于在训练期间节省内存,因为与for循环不同,迭代器不需要把整个数据集一次性加载进内存。

from torch.utils.data import DataLoader, RandomSampler, SequentialSampler

# The DataLoader needs to know our batch size for training, so we specify it 
# here. For fine-tuning BERT on a specific task, the authors recommend a batch 
# size of 16 or 32.
batch_size = 32

# Create the DataLoaders for our training and validation sets.
# We'll take training samples in random order. 
train_dataloader = DataLoader(
            train_dataset,  # The training samples.
            sampler = RandomSampler(train_dataset), # Select batches randomly
            batch_size = batch_size # Trains with this batch size.
        )

# For validation the order doesn't matter, so we'll just read them sequentially.
validation_dataloader = DataLoader(
            val_dataset, # The validation samples.
            sampler = SequentialSampler(val_dataset), # Pull out batches sequentially.
            batch_size = batch_size # Evaluate with this batch size.
        )

4 训练我们的分类模型

现在我们的输入数据已正确格式化,是时候对BERT模型进行微调了。

4.1 BertForSequenceClassification

对于这个任务,我们首先要修改预训练的BERT模型,使其输出适用于分类;然后在我们的数据集上继续训练,直到整个模型(端到端)都很好地适配我们的任务。

值得庆幸的是,huggingface的pytorch实现包含了一组为各种NLP任务设计的接口。尽管这些接口都构建在同一个预训练BERT模型之上,但每个接口都有不同的顶层结构和输出类型,以适配各自特定的NLP任务。

这是用于微调的当前类列表:

  • BertModel
  • BertForPreTraining
  • BertForMaskedLM
  • BertForNextSentencePrediction
  • BertForSequenceClassification - 我们将使用的那个。
  • BertForTokenClassification
  • BertForQuestionAnswering

这些文档可在此处找到。

我们将使用BertForSequenceClassification。这就是普通的BERT模型,只是在顶部加了一个用于分类的线性层,我们把它用作句子分类器。当我们喂入输入数据时,整个预训练的BERT模型和这个额外的未训练分类层都会针对我们的特定任务进行训练。
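为了直观说明"在BERT顶部加一个线性分类层"大致是什么意思,下面给出一个概念性的草图(这不是 BertForSequenceClassification 的实际源码,只是一个等价思路的粗略示意,省略了 dropout 等细节)。

import torch.nn as nn
from transformers import BertModel

class SimpleBertClassifier(nn.Module):
    """Rough conceptual sketch: BERT encoder + one linear layer on top."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        # A single linear layer maps the 768-dim pooled [CLS] output to the labels.
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # Use the pooled [CLS] representation as the sentence summary.
        return self.classifier(outputs.pooler_output)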

好的,让我们加载BERT!有几种不同的预训练BERT模型可用。"bert-base-uncased"表示只使用小写字母的版本("uncased"),并且是两种规模中较小的那个("base" 相对于 "large")。

from_pretrained的文档可以在这里找到,其附加参数的定义在这里。

from transformers import BertForSequenceClassification, AdamW, BertConfig

# Load BertForSequenceClassification, the pretrained BERT model with a single 
# linear classification layer on top. 
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
    num_labels = 2, # The number of output labels--2 for binary classification.
                    # You can increase this for multi-class tasks.   
    output_attentions = False, # Whether the model returns attentions weights.
    output_hidden_states = False, # Whether the model returns all hidden-states.
)

# Tell pytorch to run this model on the GPU.
model.cuda()

[I've removed this output cell for brevity].

出于好奇,我们可以在此处按名称浏览所有模型的参数。

在下面的单元格中,我打印出了以下权重的名称和维度:

  1. 嵌入层。
  2. 十二个Transformer层中的第一个。
  3. 输出层。
# Get all of the model's parameters as a list of tuples.
params = list(model.named_parameters())

print('The BERT model has {:} different named parameters.\n'.format(len(params)))

print('==== Embedding Layer ====\n')

for p in params[0:5]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))

print('\n==== First Transformer ====\n')

for p in params[5:21]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))

print('\n==== Output Layer ====\n')

for p in params[-4:]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))

The BERT model has 201 different named parameters.

==== Embedding Layer ====

bert.embeddings.word_embeddings.weight                  (30522, 768)
bert.embeddings.position_embeddings.weight                (512, 768)
bert.embeddings.token_type_embeddings.weight                (2, 768)
bert.embeddings.LayerNorm.weight                              (768,)
bert.embeddings.LayerNorm.bias                                (768,)

==== First Transformer ====

bert.encoder.layer.0.attention.self.query.weight          (768, 768)
bert.encoder.layer.0.attention.self.query.bias                (768,)
bert.encoder.layer.0.attention.self.key.weight            (768, 768)
bert.encoder.layer.0.attention.self.key.bias                  (768,)
bert.encoder.layer.0.attention.self.value.weight          (768, 768)
bert.encoder.layer.0.attention.self.value.bias                (768,)
bert.encoder.layer.0.attention.output.dense.weight        (768, 768)
bert.encoder.layer.0.attention.output.dense.bias              (768,)
bert.encoder.layer.0.attention.output.LayerNorm.weight        (768,)
bert.encoder.layer.0.attention.output.LayerNorm.bias          (768,)
bert.encoder.layer.0.intermediate.dense.weight           (3072, 768)
bert.encoder.layer.0.intermediate.dense.bias                 (3072,)
bert.encoder.layer.0.output.dense.weight                 (768, 3072)
bert.encoder.layer.0.output.dense.bias                        (768,)
bert.encoder.layer.0.output.LayerNorm.weight                  (768,)
bert.encoder.layer.0.output.LayerNorm.bias                    (768,)

==== Output Layer ====

bert.pooler.dense.weight                                  (768, 768)
bert.pooler.dense.bias                                        (768,)
classifier.weight                                           (2, 768)
classifier.bias                                                 (2,)

4.2 优化器和学习率调度器

现在已经加载了模型,我们需要从存储的模型中获取训练超参数。

为了进行微调,作者建议从以下值中进行选择(来自BERT论文的附录A.3 ):

  • 批次大小: 16、32
  • 学习率(Adam): 5e-5、3e-5、2e-5
  • epoch数: 2、3、4

我们选择了:

  • 批量大小:32(在创建我们的DataLoader时设置)
  • 学习率:2e-5
  • epoch数:4(我们稍后会看到这可能太多了……)

epsilon参数eps = 1e-8是"一个很小的数字,用来防止实现中出现除零"(出处见这里)。
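作为参考,下面是 Adam 参数更新公式的一个简化写法(忽略 AdamW 的权重衰减项,符号以原论文为准,仅作示意),可以看到 eps 出现在分母中,正是用来避免除以零:

$$\theta_t = \theta_{t-1} - \mathrm{lr} \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

其中 $\hat{m}_t$ 和 $\hat{v}_t$ 分别是梯度一阶矩和二阶矩的偏差修正估计。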

您可以在run_glue.py 此处找到AdamW优化器的创建。

# Note: AdamW is a class from the huggingface library (as opposed to pytorch) 
# I believe the 'W' stands for 'Weight Decay fix'.
optimizer = AdamW(model.parameters(),
                  lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5
                  eps = 1e-8 # args.adam_epsilon  - default is 1e-8.
                )

from transformers import get_linear_schedule_with_warmup

# Number of training epochs. The BERT authors recommend between 2 and 4.
# We chose to run for 4, but we'll see later that this may be over-fitting the
# training data.
epochs = 4

# Total number of training steps is [number of batches] x [number of epochs]. 
# (Note that this is not the same as the number of training samples).
total_steps = len(train_dataloader) * epochs

# Create the learning rate scheduler.
scheduler = get_linear_schedule_with_warmup(optimizer, 
                                            num_warmup_steps = 0, # Default value in run_glue.py
                                            num_training_steps = total_steps)

4.3 训练循环

以下是我们的训练循环。这里发生了很多事情,但归根结底,循环的每一轮(epoch)都包含一个训练(Training)阶段和一个验证(Validation)阶段。

感谢Stas Bekman为"利用验证损失检测过拟合"提供的见解和代码!

训练:

  • 解包我们的数据输入和标签
  • 将数据加载到GPU上进行加速
  • 清除上一遍计算出的梯度。
    • 在pytorch中,梯度默认会累积(这对RNN等很有用),除非您显式地清除它们。
  • 正向传递(通过网络馈送输入数据)
  • 向后传递(反向传播)
  • 告诉网络使用optimizer.step()更新参数
  • 跟踪变量以监视进度

评估:

  • 解包我们的数据输入和标签
  • 将数据加载到GPU上进行加速
  • 正向传递(通过网络馈送输入数据)
  • 计算我们的验证数据损失并跟踪变量以监控进度

Pytorch向我们隐藏了所有详细的计算,但是我们对代码进行了注释,以指出上面的哪些步骤在每一行上进行。

PyTorch还提供了一些初学者教程,您可能还会觉得有帮助。

定义用于计算准确性的辅助函数。

import numpy as np

# Function to calculate the accuracy of our predictions vs labels
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)

辅助函数,用于把经过的时间格式化为 hh:mm:ss。

import time
import datetime

def format_time(elapsed):
    '''
    Takes a time in seconds and returns a string hh:mm:ss
    '''
    # Round to the nearest second.
    elapsed_rounded = int(round((elapsed)))

    # Format as hh:mm:ss
    return str(datetime.timedelta(seconds=elapsed_rounded))

我们准备开始训练!

import random
import numpy as np

# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128

# Set the seed value all over the place to make this reproducible.
seed_val = 42

random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

# We'll store a number of quantities such as training and validation loss, 
# validation accuracy, and timings.
training_stats = []

# Measure the total training time for the whole run.
total_t0 = time.time()

# For each epoch...
for epoch_i in range(0, epochs):

    # ========================================
    #               Training
    # ========================================

    # Perform one full pass over the training set.

    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')

    # Measure how long the training epoch takes.
    t0 = time.time()

    # Reset the total loss for this epoch.
    total_train_loss = 0

    # Put the model into training mode. Don't be misled--the call to 
    # `train` just changes the *mode*, it doesn't *perform* the training.
    # `dropout` and `batchnorm` layers behave differently during training
    # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
    model.train()

    # For each batch of training data...
    for step, batch in enumerate(train_dataloader):

        # Progress update every 40 batches.
        if step % 40 == 0 and not step == 0:
            # Calculate elapsed time in minutes.
            elapsed = format_time(time.time() - t0)

            # Report progress.
            print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

        # Unpack this training batch from our dataloader. 
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using the 
        # `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids 
        #   [1]: attention masks
        #   [2]: labels 
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        # Always clear any previously calculated gradients before performing a
        # backward pass. PyTorch doesn't do this automatically because 
        # accumulating the gradients is "convenient while training RNNs". 
        # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
        model.zero_grad()        

        # Perform a forward pass (evaluate the model on this training batch).
        # The documentation for this `model` function is here: 
        # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
        # It returns different numbers of parameters depending on what arguments
        # are given and what flags are set. For our usage here, it returns
        # the loss (because we provided labels) and the "logits"--the model
        # outputs prior to activation.
        # Note: with recent versions of transformers (like the 4.x installed
        # above), pass return_dict=False so the model returns a plain tuple
        # that we can unpack into (loss, logits).
        loss, logits = model(b_input_ids, 
                             token_type_ids=None, 
                             attention_mask=b_input_mask, 
                             labels=b_labels,
                             return_dict=False)

        # Accumulate the training loss over all of the batches so that we can
        # calculate the average loss at the end. `loss` is a Tensor containing a
        # single value; the `.item()` function just returns the Python value 
        # from the tensor.
        total_train_loss += loss.item()

        # Perform a backward pass to calculate the gradients.
        loss.backward()

        # Clip the norm of the gradients to 1.0.
        # This is to help prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        # Update parameters and take a step using the computed gradient.
        # The optimizer dictates the "update rule"--how the parameters are
        # modified based on their gradients, the learning rate, etc.
        optimizer.step()

        # Update the learning rate.
        scheduler.step()

    # Calculate the average loss over all of the batches.
    avg_train_loss = total_train_loss / len(train_dataloader)            

    # Measure how long this epoch took.
    training_time = format_time(time.time() - t0)

    print("")
    print("  Average training loss: {0:.2f}".format(avg_train_loss))
    print("  Training epoch took: {:}".format(training_time))

    # ========================================
    #               Validation
    # ========================================
    # After the completion of each training epoch, measure our performance on
    # our validation set.

    print("")
    print("Running Validation...")

    t0 = time.time()

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Tracking variables 
    total_eval_accuracy = 0
    total_eval_loss = 0
    nb_eval_steps = 0

    # Evaluate data for one epoch
    for batch in validation_dataloader:

        # Unpack this training batch from our dataloader. 
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using 
        # the `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids 
        #   [1]: attention masks
        #   [2]: labels 
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        # Tell pytorch not to bother with constructing the compute graph during
        # the forward pass, since this is only needed for backprop (training).
        with torch.no_grad():        

            # Forward pass, calculate logit predictions.
            # token_type_ids is the same as the "segment ids", which 
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            # The documentation for this `model` function is here: 
            # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
            # Get the "logits" output by the model. The "logits" are the output
            # values prior to applying an activation function like the softmax.
            # (As in the training loop, return_dict=False makes the model
            # return a plain tuple that we can unpack.)
            (loss, logits) = model(b_input_ids, 
                                   token_type_ids=None, 
                                   attention_mask=b_input_mask,
                                   labels=b_labels,
                                   return_dict=False)

        # Accumulate the validation loss.
        total_eval_loss += loss.item()

        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()

        # Calculate the accuracy for this batch of test sentences, and
        # accumulate it over all batches.
        total_eval_accuracy += flat_accuracy(logits, label_ids)

    # Report the final accuracy for this validation run.
    avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
    print("  Accuracy: {0:.2f}".format(avg_val_accuracy))

    # Calculate the average loss over all of the batches.
    avg_val_loss = total_eval_loss / len(validation_dataloader)

    # Measure how long the validation run took.
    validation_time = format_time(time.time() - t0)

    print("  Validation Loss: {0:.2f}".format(avg_val_loss))
    print("  Validation took: {:}".format(validation_time))

    # Record all statistics from this epoch.
    training_stats.append(
        {
            'epoch': epoch_i + 1,
            'Training Loss': avg_train_loss,
            'Valid. Loss': avg_val_loss,
            'Valid. Accur.': avg_val_accuracy,
            'Training Time': training_time,
            'Validation Time': validation_time
        }
    )

print("")
print("Training complete!")

print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))

======== Epoch 1 / 4 ========
Training...
  Batch    40  of    241.    Elapsed: 0:00:08.
  Batch    80  of    241.    Elapsed: 0:00:17.
  Batch   120  of    241.    Elapsed: 0:00:25.
  Batch   160  of    241.    Elapsed: 0:00:34.
  Batch   200  of    241.    Elapsed: 0:00:42.
  Batch   240  of    241.    Elapsed: 0:00:51.

  Average training loss: 0.50
  Training epoch took: 0:00:51

Running Validation...
  Accuracy: 0.80
  Validation Loss: 0.45
  Validation took: 0:00:02

======== Epoch 2 / 4 ========
Training...
  Batch    40  of    241.    Elapsed: 0:00:08.
  Batch    80  of    241.    Elapsed: 0:00:17.
  Batch   120  of    241.    Elapsed: 0:00:25.
  Batch   160  of    241.    Elapsed: 0:00:34.
  Batch   200  of    241.    Elapsed: 0:00:42.
  Batch   240  of    241.    Elapsed: 0:00:51.

  Average training loss: 0.32
  Training epoch took: 0:00:51

Running Validation...
  Accuracy: 0.81
  Validation Loss: 0.46
  Validation took: 0:00:02

======== Epoch 3 / 4 ========
Training...
  Batch    40  of    241.    Elapsed: 0:00:08.
  Batch    80  of    241.    Elapsed: 0:00:17.
  Batch   120  of    241.    Elapsed: 0:00:25.
  Batch   160  of    241.    Elapsed: 0:00:34.
  Batch   200  of    241.    Elapsed: 0:00:42.
  Batch   240  of    241.    Elapsed: 0:00:51.

  Average training loss: 0.22
  Training epoch took: 0:00:51

Running Validation...
  Accuracy: 0.82
  Validation Loss: 0.49
  Validation took: 0:00:02

======== Epoch 4 / 4 ========
Training...
  Batch    40  of    241.    Elapsed: 0:00:08.
  Batch    80  of    241.    Elapsed: 0:00:17.
  Batch   120  of    241.    Elapsed: 0:00:25.
  Batch   160  of    241.    Elapsed: 0:00:34.
  Batch   200  of    241.    Elapsed: 0:00:42.
  Batch   240  of    241.    Elapsed: 0:00:51.

  Average training loss: 0.16
  Training epoch took: 0:00:51

Running Validation...
  Accuracy: 0.82
  Validation Loss: 0.55
  Validation took: 0:00:02

Training complete!
Total training took 0:03:30 (h:mm:ss)

让我们查看训练过程的摘要。

import pandas as pd

# Display floats with two decimal places.
pd.set_option('precision', 2)

# Create a DataFrame from our training statistics.
df_stats = pd.DataFrame(data=training_stats)

# Use the 'epoch' as the row index.
df_stats = df_stats.set_index('epoch')

# A hack to force the column headers to wrap.
#df = df.style.set_table_styles([dict(selector="th",props=[('max-width', '70px')])])

# Display the table.
df_stats

| epoch | 训练损失 | 验证损失 | 验证准确率 | 训练时间 | 验证时间 |
| ----- | -------- | -------- | ---------- | -------- | -------- |
| 1 | 0.50 | 0.45 | 0.80 | 0:00:51 | 0:00:02 |
| 2 | 0.32 | 0.46 | 0.81 | 0:00:51 | 0:00:02 |
| 3 | 0.22 | 0.49 | 0.82 | 0:00:51 | 0:00:02 |
| 4 | 0.16 | 0.55 | 0.82 | 0:00:51 | 0:00:02 |

请注意,虽然训练损失每个epoch都在下降,验证损失却在上升!这说明我们训练模型的时间太长,已经在训练数据上过拟合了。

(作为参考,我们使用了7,695个训练样本和856个验证样本)。

验证损失是比准确率更精确的度量,因为对准确率而言,我们并不关心确切的输出值,只关心它落在阈值的哪一侧。

如果我们预测的是正确答案但置信度不高,验证损失能捕捉到这一点,而准确率则反映不出来。
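下面用一个手工构造的数值小例子(仅作说明)来体现这一点:两次预测都判对了类别,准确率相同,但置信度低的那次交叉熵损失明显更高。

import torch
import torch.nn.functional as F

# Two sets of logits that both predict class 1 correctly for one example...
confident_logits = torch.tensor([[-2.0, 2.0]])   # high confidence
hesitant_logits  = torch.tensor([[-0.1, 0.1]])   # barely over the threshold
label = torch.tensor([1])

# ...so accuracy is 1.0 in both cases, but the loss is very different.
print('Confident loss: %.3f' % F.cross_entropy(confident_logits, label).item())
print('Hesitant  loss: %.3f' % F.cross_entropy(hesitant_logits, label).item())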

import matplotlib.pyplot as plt
%matplotlib inline

import seaborn as sns

# Use plot styling from seaborn.
sns.set(style='darkgrid')

# Increase the plot size and font size.
sns.set(font_scale=1.5)
plt.rcParams["figure.figsize"] = (12,6)

# Plot the learning curve.
plt.plot(df_stats['Training Loss'], 'b-o', label="Training")
plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation")

# Label the plot.
plt.title("Training & Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.xticks([1, 2, 3, 4])

plt.show()

5 在测试集上的性能

现在,我们将加载留出(holdout)数据集,并像处理训练集一样准备输入。然后我们将用马修斯相关系数(Matthews Correlation Coefficient,MCC)来评估预测结果,因为这是更广泛的NLP社区用来评估CoLA性能的指标。在这个指标下,+1是最好的分数,-1是最差的分数。这样我们就能看到,在这个特定任务上,我们与最先进模型相比表现如何。
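作为参考,在二分类情形下,MCC 可以直接由混淆矩阵计算(TP、TN、FP、FN 分别为真阳性、真阴性、假阳性、假阴性),公式如下,仅作补充说明:

$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$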

5.1 数据准备

我们需要采用与训练数据相同的所有步骤来准备测试数据集。

import pandas as pd

# Load the dataset into a pandas dataframe.
df = pd.read_csv("./cola_public/raw/out_of_domain_dev.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])

# Report the number of sentences.
print('Number of test sentences: {:,}\n'.format(df.shape[0]))

# Create sentence and label lists
sentences = df.sentence.values
labels = df.label.values

# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []

# For every sentence...
for sent in sentences:
    # `encode_plus` will:
    #   (1) Tokenize the sentence.
    #   (2) Prepend the `[CLS]` token to the start.
    #   (3) Append the `[SEP]` token to the end.
    #   (4) Map tokens to their IDs.
    #   (5) Pad or truncate the sentence to `max_length`
    #   (6) Create attention masks for [PAD] tokens.
    encoded_dict = tokenizer.encode_plus(
                        sent,                      # Sentence to encode.
                        add_special_tokens = True, # Add '[CLS]' and '[SEP]'
                        max_length = 64,           # Pad & truncate all sentences.
                        pad_to_max_length = True,
                        return_attention_mask = True,   # Construct attn. masks.
                        return_tensors = 'pt',     # Return pytorch tensors.
                   )

    # Add the encoded sentence to the list.    
    input_ids.append(encoded_dict['input_ids'])

    # And its attention mask (simply differentiates padding from non-padding).
    attention_masks.append(encoded_dict['attention_mask'])

# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)

# Set the batch size.  
batch_size = 32  

# Create the DataLoader.
prediction_data = TensorDataset(input_ids, attention_masks, labels)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)

Number of test sentences: 516

5.2 评估测试集

准备好测试集后,我们可以应用经过微调的模型对测试集生成预测。

# Prediction on test set

print('Predicting labels for {:,} test sentences...'.format(len(input_ids)))

# Put model in evaluation mode
model.eval()

# Tracking variables 
predictions , true_labels = [], []

# Predict 
for batch in prediction_dataloader:
  # Add batch to GPU
  batch = tuple(t.to(device) for t in batch)

  # Unpack the inputs from our dataloader
  b_input_ids, b_input_mask, b_labels = batch

  # Telling the model not to compute or store gradients, saving memory and 
  # speeding up prediction
  with torch.no_grad():
      # Forward pass, calculate logit predictions
      outputs = model(b_input_ids, token_type_ids=None, 
                      attention_mask=b_input_mask)

  logits = outputs[0]

  # Move logits and labels to CPU
  logits = logits.detach().cpu().numpy()
  label_ids = b_labels.to('cpu').numpy()

  # Store predictions and true labels
  predictions.append(logits)
  true_labels.append(label_ids)

print('    DONE.')

Predicting labels for 516 test sentences...
    DONE.

CoLA基准上的表现使用"马修斯相关系数"(Matthews Correlation Coefficient,MCC)来衡量。

我们在这里使用MCC,因为这些类是不平衡的:

print('Positive samples: %d of %d (%.2f%%)' % (df.label.sum(), len(df.label), (df.label.sum() / len(df.label) * 100.0)))

Positive samples: 354 of 516 (68.60%)
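为了直观说明为什么在类别不平衡时 MCC 比准确率更有参考价值,下面是一个手工构造的小例子(仅作说明):一个把所有样本都预测为正类的"分类器"能拿到不错的准确率,但 MCC 为 0,说明它其实没有任何判别能力。

from sklearn.metrics import accuracy_score, matthews_corrcoef

# A tiny, heavily imbalanced label set (illustration only).
y_true = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
# A lazy "classifier" that always predicts the majority class.
y_pred = [1] * len(y_true)

print('Accuracy: %.2f' % accuracy_score(y_true, y_pred))      # 0.80 -- looks good
print('MCC:      %.2f' % matthews_corrcoef(y_true, y_pred))   # 0.00 -- no real skill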

from sklearn.metrics import matthews_corrcoef

matthews_set = []

# Evaluate each test batch using Matthew's correlation coefficient
print('Calculating Matthews Corr. Coef. for each batch...')

# For each input batch...
for i in range(len(true_labels)):

  # The predictions for this batch are a 2-column ndarray (one column for "0" 
  # and one column for "1"). Pick the label with the highest value and turn this
  # in to a list of 0s and 1s.
  pred_labels_i = np.argmax(predictions[i], axis=1).flatten()

  # Calculate and store the coef for this batch.  
  matthews = matthews_corrcoef(true_labels[i], pred_labels_i)                
  matthews_set.append(matthews)

Calculating Matthews Corr. Coef. for each batch...

/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py:900: RuntimeWarning: invalid value encountered in double_scalars
  mcc = cov_ytyp / np.sqrt(cov_ytyt * cov_ypyp)

最终分数将基于整个测试集,但让我们看一下各个批次的分数,以了解批次之间指标的可变性。

每个批次有32个句子,只有最后一个批次例外,它只包含 (516 % 32) = 4 个测试句子。

# Create a barplot showing the MCC score for each batch of test samples.
ax = sns.barplot(x=list(range(len(matthews_set))), y=matthews_set, ci=None)

plt.title('MCC Score per Batch')
plt.ylabel('MCC Score (-1 to +1)')
plt.xlabel('Batch #')

plt.show()

(图:MCC Score per Batch 条形图)

现在,我们将合并所有批次的结果,并计算最终的MCC得分。

# Combine the results across all batches. 
flat_predictions = np.concatenate(predictions, axis=0)

# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()

# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)

# Calculate the MCC
mcc = matthews_corrcoef(flat_true_labels, flat_predictions)

print('Total MCC: %.3f' % mcc)

Total MCC: 0.498

很棒!在大约半小时内,没有做任何超参数调整(调整学习率、epoch数、批次大小、Adam参数等),我们就得到了一个不错的分数。

注意:为了使分数最大化,我们应该去掉"验证集"(之前用它来帮助确定训练多少个epoch),并在整个训练集上进行训练。

该库在这里将此基准测试的预期得分记录为49.23。

您也可以在此处查看官方排行榜。

请注意(由于数据集的大小?),每次运行之间的准确性可能会有很大差异。

结论

这篇文章演示了:无论您感兴趣的具体NLP任务是什么,利用预训练的BERT模型和pytorch接口,您都可以用最少的工作量和最少的训练时间,快速高效地创建出高质量的模型。

附录

A1. 保存和加载微调后的模型

第一个单元(取自run_glue.py的这里)把模型和分词器写入磁盘。

import os

# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()

output_dir = './model_save/'

# Create output directory if needed
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

print("Saving model to %s" % output_dir)

# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model  # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)

# Good practice: save your training arguments together with the trained model
# torch.save(args, os.path.join(output_dir, 'training_args.bin'))

Saving model to ./model_save/

('./model_save/vocab.txt',
 './model_save/special_tokens_map.json',
 './model_save/added_tokens.json')

出于好奇,让我们检查一下文件大小。

!ls -l --block-size=K ./model_save/

total 427960K
-rw-r--r-- 1 root root      2K Mar 18 15:53 config.json
-rw-r--r-- 1 root root 427719K Mar 18 15:53 pytorch_model.bin
-rw-r--r-- 1 root root      1K Mar 18 15:53 special_tokens_map.json
-rw-r--r-- 1 root root      1K Mar 18 15:53 tokenizer_config.json
-rw-r--r-- 1 root root    227K Mar 18 15:53 vocab.txt

最大的文件是模型权重,大约为418 MB。

!ls -l --block-size=M ./model_save/pytorch_model.bin

-rw-r--r-- 1 root root 418M Mar 18 15:53 ./model_save/pytorch_model.bin

要让模型在Colab Notebook会话结束后仍然保留,请把它下载到本地计算机,或者最好复制到您的Google云端硬盘。

# Mount Google Drive to this Notebook instance.
from google.colab import drive
drive.mount('/content/drive')

# Copy the model files to a directory in your Google Drive.
!cp -r ./model_save/ "./drive/Shared drives/ChrisMcCormick.AI/Blog Posts/BERT Fine-Tuning/"

以下代码会从磁盘重新加载模型。

# Load a trained model and vocabulary that you have fine-tuned.
# (Here `model_class` / `tokenizer_class` stand for whatever classes you used
#  above, e.g. BertForSequenceClassification and BertTokenizer.)
model = model_class.from_pretrained(output_dir)
tokenizer = tokenizer_class.from_pretrained(output_dir)

# Copy the model to the GPU.
model.to(device)

A2. 权重衰减

huggingface的示例中包含以下用于启用权重衰减的代码块,但默认衰减率是"0.0",所以我把它移到了附录。

这段代码实质上是告诉优化器不要对偏置项(例如方程 y = Wx + b 中的 b)应用权重衰减。权重衰减是一种正则化形式:在计算出梯度之后,我们把它们乘以一个系数,例如0.99。

# This code is taken from:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L102

# Don't apply weight decay to any parameters whose names include these tokens.
# (Here, the BERT doesn't have `gamma` or `beta` parameters, only `bias` terms)
no_decay = ['bias', 'LayerNorm.weight']

# Separate the `weight` parameters from the `bias` parameters. 
# - For the `weight` parameters, this specifies a 'weight_decay_rate' of 0.1. 
# - For the `bias` parameters, the 'weight_decay_rate' is 0.0. 
# `param_optimizer` is simply the list of (name, parameter) pairs from the model.
param_optimizer = list(model.named_parameters())

optimizer_grouped_parameters = [
    # Filter for all parameters which *don't* include 'bias', 'gamma', 'beta'.
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.1},

    # Filter for parameters which *do* include those.
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.0}
]

# Note - `optimizer_grouped_parameters` only includes the parameter values, not 
# the names.

修订记录

第3版 - 2020年3月18日 -(当前)

  • 利用tokenizer.encode_plus函数简化了分词和输入格式化(训练和测试都适用)。encode_plus会处理填充,并为我们创建注意力掩码。
  • 改进了对注意力掩码的解释。
  • 改为使用torch.utils.data.random_split来创建训练-验证拆分。
  • 添加了训练统计信息的汇总表(验证损失、每个epoch的耗时等)。
  • 在学习曲线图中加入了验证损失,以便观察是否过拟合。
    • 感谢Stas Bekman所做的贡献!
  • 将每个批次的MCC显示为条形图。

版本2 - 2019年12月20日 - 链接

  • Hugging Face将他们的库更名为transformers。
  • 更新了笔记本以使用transformers库。

第1版 - 2019年7月22日

  • 初始版本。

进一步的工作

  • 将MCC得分用于“验证准确性”可能更有意义,但我省略了它,以便不必在笔记本中更早地解释它。
  • 随机种子:我不确定在训练循环开始前设置种子值是否真的能得到可复现的结果……
  • MCC分数在不同的运行中似乎有很大差异。多次运行此示例并显示差异会很有趣。

引用

克里斯·麦考密克(Chris McCormick)和尼克·瑞安(Nick Ryan)。(2019年7月22日)。带有PyTorch的BERT微调教程。取自http://www.mccormickml.com/
