[Latest Experiment] Named Entity Recognition Sequence Labeling with BertForTokenClassification (PyTorch)

To follow this article, you need to know what BERT is.

BERT is just about the newest and strongest pre-trained model around. Using it is straightforward: all you need is a GPU with roughly 8 GB of memory. Head over to GitHub, find the pytorch-transformers repo, and run the run_glue.py script inside it. Congratulations, you have opened the door to a new world!

 

But how do you do NER with BERT? run_glue.py only handles sentence classification, while NER is essentially token-level (word-level) classification, so we have to work out how to build the model ourselves.

 

Fortunately, there is now a new class, BertForTokenClassification, which is exactly the model for NER. How do we use it?

 

class BertForTokenClassification(BertPreTrainedModel):
    r"""
        **labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
            Labels for computing the token classification loss.
            Indices should be in ``[0, ..., config.num_labels - 1]``.

    Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
        **loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
            Classification loss.
        **scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, config.num_labels)``
            Classification scores (before SoftMax).
        **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
            list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
            of shape ``(batch_size, sequence_length, hidden_size)``:
            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        **attentions**: (`optional`, returned when ``config.output_attentions=True``)
            list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

    Examples::


        tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
        model = BertForTokenClassification.from_pretrained('bert-base-uncased')
        input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
        labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0)  # Batch size 1
        outputs = model(input_ids, labels=labels)
        loss, scores = outputs[:2]

    """
    def __init__(self, config):
        super(BertForTokenClassification, self).__init__(config)
        self.num_labels = 9  # hard-coded here to 9 NER tags; the original library code uses config.num_labels

        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, self.num_labels)

        self.apply(self.init_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None,
                position_ids=None, head_mask=None):
        outputs = self.bert(input_ids, position_ids=position_ids, token_type_ids=token_type_ids,
                            attention_mask=attention_mask, head_mask=head_mask)
        sequence_output = outputs[0]
        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)

        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            # Only keep active parts of the loss
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)[active_loss]
                active_labels = labels.view(-1)[active_loss]
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs

        return outputs  # (loss), scores, (hidden_states), (attentions)

First, load a tokenizer. This step is simple: the argument is the path (or name) of the tokenizer.

Then load a pretrained model, which here is the BERT model itself; the download URLs for the weights are listed in the modeling_bert.py file.

input_ids are the input token ids, produced by the tokenizer.

labels are the tags, with shape (batch_size, seq_length).
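Putting the four pieces together, here is a minimal usage sketch for NER. The tag set, the example sentence, and the all-"O" labels are made up purely for illustration, and the import assumes the pytorch-transformers package from the repo mentioned above:

import torch
from pytorch_transformers import BertTokenizer, BertForTokenClassification

# Hypothetical BIO tag set, just for illustration; use your dataset's real tags.
label_map = {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4}

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForTokenClassification.from_pretrained('bert-base-uncased')

input_ids = torch.tensor(tokenizer.encode("John lives in Berlin")).unsqueeze(0)  # (1, seq_length)
attention_mask = torch.ones_like(input_ids)           # no padding in this toy batch
labels = torch.full_like(input_ids, label_map["O"])   # one tag id per token, all "O" here

outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
loss, scores = outputs[:2]
print(loss.item(), scores.shape)                      # scores: (1, seq_length, num_labels)

In a real dataset each token position needs its own tag id; building those labels is exactly the data-construction step mentioned at the end of this post.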

Now we basically know how to call the model. Next, let's read through the class code so we can construct the matching data.

Look at the __init__ part first. The bert attribute is the BERT model itself, while dropout and classifier are both simple single layers: dropout randomly drops some activations, which makes training more stable and helps prevent overfitting, and classifier is an ordinary linear layer, i.e. a single-layer neural network.

 

Now look at the forward code, which is used during both training and evaluation. First, the BERT model turns a piece of text into sequence features of shape (batch_size, seq_length, hidden_size=768). This tensor sits in the first position of the output tuple, so indexing with 0 retrieves it.

The classifier then maps the feature dimension to the number of tag classes, so the output has shape (batch_size, seq_length, num_labels).
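To make those shapes concrete, here is a tiny standalone sketch of just the dropout-plus-linear head; the 768 and the 9 tags are assumed values matching the class above:

import torch
import torch.nn as nn

hidden_size, num_labels = 768, 9      # assumed: BERT-base hidden size, 9 NER tags as above
dropout = nn.Dropout(0.1)             # randomly zeroes features, only while training
classifier = nn.Linear(hidden_size, num_labels)

sequence_output = torch.randn(2, 6, hidden_size)   # fake BERT output: (batch, seq_length, hidden)
logits = classifier(dropout(sequence_output))
print(logits.shape)                                # torch.Size([2, 6, 9])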

Finally, cross-entropy is computed on the flattened logits and labels. This is the standard classification-loss recipe; the only difference is that a sequence now carries one label per token, and (as the code above shows) padded positions are filtered out with the attention mask before the loss is taken.
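Here is a small self-contained sketch of that flattening-and-masking step, with made-up shapes; it mirrors the logic in the forward method above:

import torch
from torch.nn import CrossEntropyLoss

batch_size, seq_length, num_labels = 2, 4, 9
logits = torch.randn(batch_size, seq_length, num_labels)
labels = torch.randint(0, num_labels, (batch_size, seq_length))
attention_mask = torch.tensor([[1, 1, 1, 0],    # last position of the first sample is padding
                               [1, 1, 0, 0]])   # last two positions of the second are padding

loss_fct = CrossEntropyLoss()
active = attention_mask.view(-1) == 1                  # keep only real token positions
active_logits = logits.view(-1, num_labels)[active]    # (num_active, num_labels)
active_labels = labels.view(-1)[active]                # (num_active,)
loss = loss_fct(active_logits, active_labels)          # one scalar over all real tokens
print(loss.item())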

 

How to construct the data will be explained in the next post; feel free to follow me if you're interested.
