Datawhale Group Learning NLP: BERT Sequence Labeling Study Notes

These are study notes from the Datawhale August 2021 group learning course, Introduction to NLP with Transformers.
Original course materials: https://github.com/datawhalechina/learn-nlp-with-transformers

1 Loading the Data

from datasets import load_dataset, load_metric
datasets = load_dataset("conll2003")

This call can fail with a network error. A workaround is to download the dataset in Colab, save it to Google Drive, download it to your machine, and then load it from disk:

from datasets import load_from_disk
datasets = load_from_disk("./datasets/conll2003")
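For reference, here is a minimal sketch of the Colab side (the save path and how you move the directory into Drive are up to you):

from datasets import load_dataset

datasets = load_dataset("conll2003")
# save_to_disk writes the whole DatasetDict to a directory
# that load_from_disk can read back later
datasets.save_to_disk("./datasets/conll2003")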

The datasets object is a DatasetDict with train, validation, and test splits; each split also carries metadata, such as the label class names:

# Get the label names (the tutorial sets task = "ner", so this reads the "ner_tags" column)
label_list = datasets["train"].features[f"{task}_tags"].feature.names
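For the CoNLL-2003 NER task this yields nine labels:

print(label_list)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']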

2 Preprocessing the Data

2.1 Word -> Subword -> Tokens

  • The tokenizer turns words into subwords and then into token IDs.
  • The tokenizer must come from the same checkpoint as the model you fine-tune later (see the loading sketch after this example).
  • It can handle either raw text or text that is already split into words:
tokenizer("Hello, this is one sentence!")
tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."], is_split_into_words=True)
  • An example:
# Get the words to process
example = datasets["train"][4]
# words -> token IDs
tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
# token IDs -> token strings
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
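Printing the result makes the effect visible (the output shown is illustrative and depends on the tokenizer):

print(tokens)
# e.g. ['[CLS]', 'germany', "'", 's', 'representative', ..., '[SEP]']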

Note that in the third step, converting the IDs back to tokens shows that the tokenizer inserted special tokens such as [CLS] and [SEP].
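The snippets above assume the tokenizer has already been loaded from the same checkpoint as the model. A minimal sketch (model_checkpoint = "distilbert-base-uncased" follows the tutorial; any checkpoint usable for token classification works):

from transformers import AutoTokenizer

model_checkpoint = "distilbert-base-uncased"
# AutoTokenizer selects the right tokenizer class for the checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)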

2.2 Aligning the Tags

After word -> subword tokenization the input usually has more tokens than the original words, so the tags need to be realigned. There are two strategies:

  • 1 Give the first subword of each word the word's original label, and set the remaining subwords to -100; positions labeled -100 are ignored when computing the loss.
  • 2 Give every subword of a word the word's original label.

The tokenized output provides a word_ids method that solves this: for each token it returns the index of the input word the token came from (None for special tokens).
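A quick look at what it returns for the example above (the exact indices are illustrative and depend on the sentence):

tokenized_input.word_ids()
# e.g. [None, 0, 1, 1, 2, 3, ..., None]
# None marks special tokens like [CLS]/[SEP];
# a repeated index marks subwords belonging to the same word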

The following function tokenizes the input and aligns the labels; the label_all_tokens flag selects strategy 1 (False) or strategy 2 (True):

label_all_tokens = True  # True -> strategy 2; False -> strategy 1

def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    # At this point each sentence's labels keep the sentence's own length (padding happens later in the collator)
    for i, label in enumerate(examples[f"{task}_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            # Special tokens have a word id that is None. We set the label to -100 so they are automatically
            # ignored in the loss function.
            if word_idx is None:
                label_ids.append(-100)
            # We set the label for the first token of each word.
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            # For the other tokens in a word, we set the label to either the current label or -100, depending on
            # the label_all_tokens flag.
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx

        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs

Next, apply this function to the entire dataset with map:

tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)
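As a sanity check, you can inspect one processed example (the values shown are illustrative):

print(tokenized_datasets["train"][0]["labels"])
# e.g. [-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, -100]
# -100 sits at [CLS]/[SEP] and, with label_all_tokens=False, at non-first subwords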

3 Fine-tuning the Pretrained Model with Trainer

from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))

To build a Trainer we need three more ingredients, the most important being the training configuration TrainingArguments, which holds all the attributes that define the training run:

batch_size = 16  # 16 follows the tutorial; adjust for your GPU memory

args = TrainingArguments(
    f"test-{task}",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3,
    weight_decay=0.01,
)

Set up a data_collator, which forms batches from the dataset and dynamically pads both inputs and labels before feeding them to the Trainer:

from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)
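A quick check of what the collator does (a sketch; the token IDs are illustrative, and in practice the Trainer calls the collator internally):

features = [
    {"input_ids": [101, 7592, 102], "labels": [-100, 3, -100]},
    {"input_ids": [101, 7592, 1010, 2088, 102], "labels": [-100, 3, 0, 7, -100]},
]
batch = data_collator(features)
# input_ids are padded with the tokenizer's pad token;
# labels are padded with -100, so padded positions never affect the loss
print(batch["labels"])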

Set up the evaluation metric:

# metric = load_metric("seqeval")  # may fail for network reasons; manually download the script from the URL in the error message, then load it locally
metric = load_metric("./datasets/seqeval.py")

An example of using seqeval:

labels = [label_list[i] for i in example[f"{task}_tags"]]
metric.compute(predictions=[labels], references=[labels])
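Because predictions and references are identical here, every score comes back as 1.0. The result is a dict of per-entity-type scores plus the four overall keys that compute_metrics below relies on, roughly:

# {'LOC': {...}, 'ORG': {...}, 'PER': {...},
#  'overall_precision': 1.0, 'overall_recall': 1.0,
#  'overall_f1': 1.0, 'overall_accuracy': 1.0}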

Now define the function that computes these metrics during evaluation:

import numpy as np

# The Trainer passes in p as an EvalPrediction, containing predictions and labels
def compute_metrics(p):
    # predictions: (3250, 146, 9) ndarray; 9 is the number of label classes
    # labels: (3250, 146) ndarray; sequences are padded to length 146
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    # Remove ignored positions (special tokens), which are marked -100 in the labels;
    # both list comprehensions check the label array, one collecting predictions, the other labels
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
# Pass the data, model, and arguments to the Trainer
trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)

Train:

trainer.train()

Evaluate:

trainer.evaluate()
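After training you can also score the held-out test split (trainer.predict returns the raw predictions, the label ids, and a metrics dict computed with compute_metrics):

predictions, labels, metrics_out = trainer.predict(tokenized_datasets["test"])
print(metrics_out)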
