[Classic NLP Projects] 01: What is seq2vec? See how to use it for sentiment analysis

What is paddlenlp.seq2vec? Let's see how to use it to complete a sentiment analysis task
Note

It is recommended to run this project in a GPU environment.

Sentiment analysis is a long-standing task in natural language processing. Sentence-level sentiment analysis aims to determine the speaker's sentiment polarity, for example a clearly stated opinion on some topic or the emotional state being expressed. It has a wide range of applications, such as e-commerce review analysis and public opinion monitoring.


paddlenlp.seq2vec
The key to sentence-level sentiment analysis is representing the text as a vector that carries its semantics. With the rapid development of deep learning, commonly used text representation techniques include RNN, LSTM, and GRU. PaddleNLP provides a series of such text representation techniques, integrated in the seq2vec module.

The paddlenlp.seq2vec module encodes an input text sequence into a single semantic vector.



Figure 1: Overview of paddlenlp.seq2vec

The seq2vec module

Input: the embedding tensor of a text sequence, shape: (batch_size, num_tokens, emb_dim)

Output: the encoded text tensor (the text's semantic representation), shape: (batch_size, encoding_size)

Provides encoders such as BoWEncoder, CNNEncoder, GRUEncoder, LSTMEncoder, and RNNEncoder

BoWEncoder sums the input embedding tensor over the num_tokens dimension to produce the encoded text tensor.

CNNEncoder applies convolutions to the input embedding tensor and then max-pools the convolution results to produce the encoded text tensor.

GRUEncoder runs a GRU over the input embedding tensor and then either pools the outputs or takes the last step's hidden state to produce the encoded text tensor.

LSTMEncoder runs an LSTM over the input embedding tensor and then either pools the outputs or takes the last step's hidden state to produce the encoded text tensor.

RNNEncoder runs an RNN over the input embedding tensor and then either pools the outputs or takes the last step's hidden state to produce the encoded text tensor.
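
To make the input/output contract concrete, here is a minimal sketch (not part of the original tutorial; the tensor sizes are illustrative assumptions) that feeds a random embedding tensor through BoWEncoder and LSTMEncoder and prints the resulting shapes:

import paddle
from paddlenlp.seq2vec import BoWEncoder, LSTMEncoder

batch_size, num_tokens, emb_dim = 2, 5, 128        # illustrative sizes
embedded = paddle.randn([batch_size, num_tokens, emb_dim])
seq_len = paddle.to_tensor([5, 3], dtype='int64')  # true lengths of the two sequences

# BoWEncoder sums over num_tokens -> shape [2, 128]
bow_encoder = BoWEncoder(emb_dim)
print(bow_encoder(embedded).shape)

# LSTMEncoder (default settings) returns the last step's hidden state -> shape [2, 198]
lstm_encoder = LSTMEncoder(input_size=emb_dim, hidden_size=198)
print(lstm_encoder(embedded, sequence_length=seq_len).shape)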

seq2vec offers several ways to build a semantic representation. What are the characteristics of each?

BoWEncoder uses a bag-of-word-embeddings approach. Its strength is simplicity, but because it ignores word order and context, its representation of the text's meaning is limited.
CNNEncoder applies convolutions to extract local features, with the benefit of weight sharing. Its drawback is that it likewise captures only local semantics and does not make full use of the wider context.

Figure 2: Convolution illustration

RNNEncoder uses a plain RNN: when computing the semantic representation of the current token, it takes the previous token's representation as input. Its drawback is that it is prone to vanishing and exploding gradients.


Figure 3: RNN illustration

LSTMEncoder uses an LSTM, a variant of the RNN. To learn long-range dependencies, the LSTM introduces gating mechanisms that control how information accumulates: it selectively adds new information and selectively forgets information accumulated earlier.


Figure 4: LSTM illustration

GRUEncoder uses a GRU, another variant of the RNN. An LSTM cell has four sets of gated inputs, so it has roughly four times as many parameters as a plain RNN, which makes training slower. The GRU simplifies the LSTM, speeding up training with little loss in accuracy.

Figure 5: GRU illustration
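
To see where the "roughly four times as many parameters" claim comes from, the following small sketch (arbitrary illustrative sizes; exact counts also include bias terms) counts the weights of the corresponding paddle.nn recurrent layers:

import numpy as np
import paddle.nn as nn

def num_params(layer):
    # Total number of scalar parameters in a layer.
    return sum(int(np.prod(p.shape)) for p in layer.parameters())

# Same input/hidden sizes for a fair comparison.
for name, layer in [('SimpleRNN', nn.SimpleRNN(128, 198)),
                    ('GRU', nn.GRU(128, 198)),
                    ('LSTM', nn.LSTM(128, 198))]:
    print(name, num_params(layer))
# The LSTM has roughly 4x, and the GRU roughly 3x, the parameters of the simple RNN.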

For more on CNN, LSTM, GRU, and RNN, see:

Understanding LSTM Networks: https://colah.github.io/posts/2015-08-Understanding-LSTMs/
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling: https://arxiv.org/abs/1412.3555
A Critical Review of Recurrent Neural Networks for Sequence Learning: https://arxiv.org/pdf/1506.00019
A Convolutional Neural Network for Modelling Sentences: https://arxiv.org/abs/1404.2188
This tutorial uses LSTMEncoder as an example to show how to complete a sentiment analysis task with paddlenlp.seq2vec.

PaddleNLP will be installed by default on the AI Studio platform in the future; until then, it can be installed with the following command.

In [ ]
!pip install --upgrade paddlenlp>=2.0.0rc -i https://pypi.org/simple
Data loading
ChnSenticorp is a public Chinese sentiment analysis dataset. It is built into PaddleNLP and can be loaded with one call.

In [ ]
# Before training, download the vocabulary file senta_word_dict.txt, which is used to build the word-to-id mapping.
!wget https://paddlenlp.bj.bcebos.com/data/senta_word_dict.txt
--2021-02-03 20:04:42--  https://paddlenlp.bj.bcebos.com/data/senta_word_dict.txt
Resolving paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)... 100.64.253.38, 100.64.253.37
Connecting to paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)|100.64.253.38|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14600150 (14M) [text/plain]
Saving to: ‘senta_word_dict.txt.5’

senta_word_dict.txt 100%[===================>]  13.92M  73.6MB/s    in 0.2s    

2021-02-03 20:04:43 (73.6 MB/s) - ‘senta_word_dict.txt.5’ saved [14600150/14600150]

In [20]
from paddlenlp.data import JiebaTokenizer, Pad, Stack, Tuple, Vocab
from paddlenlp.datasets import load_dataset

vocab = Vocab.load_vocabulary(
    "senta_word_dict.txt", unk_token='[UNK]', pad_token='[PAD]')
# Loads dataset.
train_ds, dev_ds, test_ds = load_dataset(
    "chnsenticorp", splits=["train", "dev", "test"])

for data in train_ds.data[:5]:
    print(data)
{'text': '选择珠江花园的原因就是方便,有电动扶梯直接到达海边,周围餐馆、食廊、商场、超市、摊位一应俱全。酒店装修一般,但还算整洁。 泳池在大堂的屋顶,因此很小,不过女儿倒是喜欢。 包的早餐是西式的,还算丰富。 服务吗,一般', 'label': 1}
{'text': '15.4寸笔记本的键盘确实爽,基本跟台式机差不多了,蛮喜欢数字小键盘,输数字特方便,样子也很美观,做工也相当不错', 'label': 1}
{'text': '房间太小。其他的都一般。。。。。。。。。', 'label': 0}
{'text': '1.接电源没有几分钟,电源适配器热的不行. 2.摄像头用不起来. 3.机盖的钢琴漆,手不能摸,一摸一个印. 4.硬盘分区不好办.', 'label': 0}
{'text': '今天才知道这书还有第6卷,真有点郁闷:为什么同一套书有两种版本呢?当当网是不是该跟出版社商量商量,单独出个第6卷,让我们的孩子不会有所遗憾。', 'label': 1}
Each example contains a review and its label: 0 denotes a negative review and 1 denotes a positive review.

Next, the input sentences still need to be preprocessed, e.g. segmented into words and mapped to vocabulary ids.

Data preprocessing
PaddleNLP provides a number of common APIs for building efficient data pipelines for NLP tasks.

API	Description
paddlenlp.data.Stack	Stacks N inputs of the same shape into one batch
paddlenlp.data.Pad	Pads sentences of different lengths to the same length, namely the maximum length among the N inputs
paddlenlp.data.Tuple	Wraps several batchify functions together
For more data processing operations, see: https://github.com/PaddlePaddle/models/blob/release/2.0-beta/PaddleNLP/docs/data.md

In [ ]
from paddlenlp.data import Stack, Pad, Tuple
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
c = [5, 6, 7, 8]
result = Stack()([a, b, c])
print("Stacked Data: \n", result)
print()

a = [1, 2, 3, 4]
b = [5, 6, 7]
c = [8, 9]
result = Pad(pad_val=0)([a, b, c])
print("Padded Data: \n", result)
print()

data = [
        [[1, 2, 3, 4], [1]],
        [[5, 6, 7], [0]],
        [[8, 9], [1]],
       ]
batchify_fn = Tuple(Pad(pad_val=0), Stack())
ids, labels = batchify_fn(data)
print("ids: \n", ids)
print()
print("labels: \n", labels)
print()
Stacked Data: 
 [[1 2 3 4]
 [3 4 5 6]
 [5 6 7 8]]

Padded Data: 
 [[1 2 3 4]
 [5 6 7 0]
 [8 9 0 0]]

ids: 
 [[1 2 3 4]
 [5 6 7 0]
 [8 9 0 0]]

labels: 
 [[1]
 [0]
 [1]]

This tutorial processes the data as follows:

Convert the raw data into a format the model can read: first segment the text with jieba, then map the resulting tokens to their ids in the vocabulary (see the sketch after this list).

Use the paddle.io.DataLoader interface to load data asynchronously with multiple workers.
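
convert_example and create_dataloader are imported from a local utils.py. Note that the convert_example listed in the utils.py appendix at the end of this post is the variant used with pretrained transformer tokenizers; for the LSTM model here, convert_example plausibly looks like the sketch below, which jieba-segments the text with the JiebaTokenizer and maps the tokens to vocabulary ids. Treat it as an assumption rather than the exact file contents.

import numpy as np

def convert_example(example, tokenizer, is_test=False):
    # Assumed sketch: segment the text with jieba and map tokens to vocab ids.
    input_ids = np.array(tokenizer.encode(example['text']), dtype='int64')
    valid_length = np.array(len(input_ids), dtype='int64')
    if not is_test:
        label = np.array(example['label'], dtype='int64')
        return input_ids, valid_length, label
    return input_ids, valid_length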

In [ ]
from functools import partial
from paddlenlp.data import JiebaTokenizer, Pad, Stack, Tuple
from utils import create_dataloader, convert_example

# Reads data and generates mini-batches.
tokenizer = JiebaTokenizer(vocab)
trans_fn = partial(convert_example, tokenizer=tokenizer, is_test=False)

# Batch the loaded examples so the model can run batched computation.
# Each sentence in a batch is padded to batch_max_seq_len, the maximum text length in that batch.
# Texts longer than batch_max_seq_len are truncated to it; shorter texts are padded up to it.

batch_size = 64
use_gpu = True
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=vocab.token_to_idx.get('[PAD]', 0)),  # input_ids
    Stack(dtype="int64"),  # seq len
    Stack(dtype="int64")  # label
): [data for data in fn(samples)]
train_loader = create_dataloader(
    train_ds,
    trans_fn=trans_fn,
    batch_size=batch_size,
    mode='train',
    use_gpu=use_gpu,
    batchify_fn=batchify_fn)
dev_loader = create_dataloader(
    dev_ds,
    trans_fn=trans_fn,
    batch_size=batch_size,
    mode='validation',
    use_gpu=use_gpu,
    batchify_fn=batchify_fn)
test_loader = create_dataloader(
    test_ds,
    trans_fn=trans_fn,
    batch_size=batch_size,
    mode='test',
    use_gpu=use_gpu,
    batchify_fn=batchify_fn)
Model construction
Use LSTMEncoder to build a BiLSTM model for the text classification task.

paddle.nn.Embedding builds the word embedding layer
ppnlp.seq2vec.LSTMEncoder builds the sentence modeling layer
paddle.nn.Linear builds the binary classifier


Figure 7: Detailed view of the seq2vec model

In [ ]
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import paddlenlp as ppnlp


class LSTMModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 lstm_hidden_size=198,
                 direction='forward',
                 lstm_layers=1,
                 dropout_rate=0.0,
                 pooling_type=None,
                 fc_hidden_size=96):
        super().__init__()

        # First map the input word ids to word embeddings via a lookup table
        self.embedder = nn.Embedding(
            num_embeddings=vocab_size,
            embedding_dim=emb_dim,
            padding_idx=padding_idx)

        # Transform the word embeddings into the text semantic representation space with LSTMEncoder
        self.lstm_encoder = ppnlp.seq2vec.LSTMEncoder(
            emb_dim,
            lstm_hidden_size,
            num_layers=lstm_layers,
            direction=direction,
            dropout=dropout_rate,
            pooling_type=pooling_type)

        # LSTMEncoder.get_output_dim() returns the hidden size of the text representation produced by the encoder
        self.fc = nn.Linear(self.lstm_encoder.get_output_dim(), fc_hidden_size)

        # Final classifier
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)

        # Shape: (batch_size, num_tokens, num_directions*lstm_hidden_size)
        # num_directions = 2 if direction is 'bidirectional' else 1
        text_repr = self.lstm_encoder(embedded_text, sequence_length=seq_len)


        # Shape: (batch_size, fc_hidden_size)
        fc_out = paddle.tanh(self.fc(text_repr))

        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        
        # probs: class probabilities
        probs = F.softmax(logits, axis=-1)
        return probs

model = LSTMModel(
        len(vocab),
        len(train_ds.label_list),
        direction='bidirectional',
        padding_idx=vocab['[PAD]'])
model = paddle.Model(model)
LSTMEncoder parameters:
input_size: int, required. The size of the last dimension of the input feature tensor.
hidden_size: int, required. The hidden size of the LSTM.
num_layers: int, optional. Number of LSTM layers; default 1.
direction: str, optional. LSTM direction, either forward or bidirectional; default forward.
dropout: float, optional. Dropout probability; if non-zero, dropout is applied to the output of each LSTM layer. Default 0.0.
pooling_type: str, optional, default None. One of sum, max, mean. If pooling_type is None, the last step's hidden state of the last LSTM layer is used as the text representation; otherwise, the specified pooling is applied to the hidden states of all steps of the last layer and the result is used as the text representation.
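
As a quick illustration of how direction and pooling_type affect the output (a sketch with arbitrary sizes, not from the original tutorial), the following compares the default last-step behaviour with max pooling over a bidirectional LSTM:

import paddle
from paddlenlp.seq2vec import LSTMEncoder

x = paddle.randn([2, 6, 128])                   # (batch_size, num_tokens, emb_dim)
lens = paddle.to_tensor([6, 4], dtype='int64')

# Default: last step's hidden state of the last layer -> shape [2, 198]
encoder = LSTMEncoder(input_size=128, hidden_size=198)
print(encoder(x, sequence_length=lens).shape, encoder.get_output_dim())

# Bidirectional LSTM with max pooling over all steps -> shape [2, 2 * 198]
bi_encoder = LSTMEncoder(input_size=128, hidden_size=198,
                         direction='bidirectional', pooling_type='max')
print(bi_encoder(x, sequence_length=lens).shape, bi_encoder.get_output_dim())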
For more on seq2vec, see: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/paddlenlp/seq2vec/encoder.py

PaddleNLP also ships with a built-in text classification model, Senta, which can be loaded in one call, e.g.
model = ppnlp.models.Senta(
    network='bilstm',
    vocab_size=len(vocab),
    num_classes=len(train_ds.label_list))
model = paddle.Model(model)
For more on paddlenlp.models.Senta, see: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/paddlenlp/models/senta.py

Set up the optimizer and evaluation metric
Call model.prepare to configure the model, e.g. the loss function and optimizer.
In [ ]
optimizer = paddle.optimizer.Adam(
        parameters=model.parameters(), learning_rate=5e-5)

loss = paddle.nn.CrossEntropyLoss()
metric = paddle.metric.Accuracy()

model.prepare(optimizer, loss, metric)
Model training
Call model.fit() to train the model in one step.

Parameters:
train_data (Dataset|DataLoader) - An iterable data source; a paddle.io.Dataset or paddle.io.DataLoader instance is recommended. Default: None.

eval_data (Dataset|DataLoader) - An iterable data source; a paddle.io.Dataset or paddle.io.DataLoader instance is recommended. When provided, evaluation is run after every epoch. Default: None.

epochs (int) - Number of training epochs. Default: 1.

save_dir (str|None) - Directory in which to save the model; if not set, the model is not saved. Default: None.

save_freq (int) - How often to save the model, in epochs. Default: 1.

In [12]
model.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints',  save_freq=5)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/10
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
step  10/150 - loss: 0.6960 - acc: 0.5344 - 2s/step
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-12-dadf12563844> in <module>
----> 1 model.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints', save_freq=5)
(traceback truncated; training was interrupted manually with KeyboardInterrupt)
Model evaluation
Call model.evaluate to evaluate the model in one step.

Parameters:

eval_data (Dataset|DataLoader) - An iterable data source; a paddle.io.Dataset or paddle.io.DataLoader instance is recommended. Default: None.
In [13]
results = model.evaluate(test_loader)
print("Finally test acc: %.5f" % results['acc'])
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 10/19 - loss: 0.6949 - acc: 0.5174 - 475ms/step
step 19/19 - loss: 0.6964 - acc: 0.5148 - 473ms/step
Eval samples: 1200
Finally test acc: 0.51484
Trained to completion, this very basic model reaches roughly 90% accuracy (the run above was interrupted early, which is why the score shown is much lower). Try changing the network structure to further improve the results.
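
For example, one variant worth trying (a sketch rather than a tuned configuration) keeps the BiLSTM defined above but stacks two layers, adds dropout, and max-pools over all time steps instead of taking only the final hidden state:

model = LSTMModel(
        len(vocab),
        len(train_ds.label_list),
        direction='bidirectional',
        padding_idx=vocab['[PAD]'],
        lstm_layers=2,            # deeper encoder
        dropout_rate=0.2,         # dropout between LSTM layers
        pooling_type='max')       # pool over all steps instead of the last hidden state
model = paddle.Model(model)
model.prepare(
    paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=5e-5),
    paddle.nn.CrossEntropyLoss(),
    paddle.metric.Accuracy())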

Model prediction
Call model.predict to run prediction.

Parameters

test_data (Dataset|DataLoader): An iterable data source; a paddle.io.Dataset or paddle.io.DataLoader instance is recommended. Default: None.
In [19]
import numpy as np
label_map = {0: 'negative', 1: 'positive'}
results = model.predict(test_loader, batch_size=64)[0]
predictions = []

for batch_probs in results:
    # Map predicted indices to class labels
    idx = np.argmax(batch_probs, axis=-1)
    idx = idx.tolist()
    labels = [label_map[i] for i in idx]
    predictions.extend(labels)

# Look at the classification results for the first 5 test examples
print(test_ds.data[0])
for idx, data in enumerate(test_ds.data[:5]):
    print('Data: {} \t Label: {}'.format(data['text'], predictions[idx]))
Predict begin...
step 19/19 [==============================] - ETA: 8s - 471ms/st - ETA: 6s - 408ms/st - ETA: 6s - 517ms/st - ETA: 5s - 483ms/st - ETA: 4s - 474ms/st - ETA: 3s - 469ms/st - ETA: 2s - 449ms/st - ETA: 1s - 499ms/st - ETA: 0s - 486ms/st - 471ms/step          
Predict samples: 1200
{'text': '这个宾馆比较陈旧了,特价的房间也很一般。总体来说一般', 'label': 1}
Data: 这个宾馆比较陈旧了,特价的房间也很一般。总体来说一般 	 Label: positive
Data: 怀着十分激动的心情放映,可是看着看着发现,在放映完毕后,出现一集米老鼠的动画片!开始还怀疑是不是赠送的个别现象,可是后来发现每张DVD后面都有!真不知道生产商怎么想的,我想看的是猫和老鼠,不是米老鼠!如果厂家是想赠送的话,那就全套米老鼠和唐老鸭都赠送,只在每张DVD后面添加一集算什么??简直是画蛇添足!! 	 Label: positive
Data: 还稍微重了点,可能是硬盘大的原故,还要再轻半斤就好了。其他要进一步验证。贴的几种膜气泡较多,用不了多久就要更换了,屏幕膜稍好点,但比没有要强多了。建议配赠几张膜让用用户自己贴。 	 Label: positive
Data: 交通方便;环境很好;服务态度很好 房间较小 	 Label: positive
Data: 不错,作者的观点很颠覆目前中国父母的教育方式,其实古人们对于教育已经有了很系统的体系了,可是现在的父母以及祖父母们更多的娇惯纵容孩子,放眼看去自私的孩子是大多数,父母觉得自己的孩子在外面只要不吃亏就是好事,完全把古人几千年总结的教育古训抛在的九霄云外。所以推荐准妈妈们可以在等待宝宝降临的时候,好好学习一下,怎么把孩子教育成一个有爱心、有责任心、宽容、大度的人。 	 Label: positive
This has been a brief introduction to LSTM-based sentiment classification. More PaddleNLP tutorials are available on GitHub: https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/examples/text_classification/rnn

More PaddleNLP projects
How to fine-tune a pretrained model on downstream tasks
Express waybill information extraction with a BiGRU-CRF model
Improving waybill information extraction with the pretrained model ERNIE
Automatic couplet generation with a Seq2Seq model
Intelligent poetry writing with the pretrained model ERNIE-GEN
Forecasting COVID-19 case counts with a TCN network
Reading comprehension with pretrained models
Multi-class text classification on a custom dataset



utils.py source code

import numpy as np
import paddle
import paddle.nn.functional as F
from paddlenlp.data import Stack, Tuple, Pad


def predict(model, data, tokenizer, label_map, batch_size=1):
    """
    Predicts the data labels.

    Args:
        model (obj:`paddle.nn.Layer`): A model to classify texts.
        data (obj:`List(Example)`): The processed data, each element of which is an Example (namedtuple) object.
            An Example object contains `text` (word ids) and `seq_len` (sequence length).
        tokenizer(obj:`PretrainedTokenizer`): This tokenizer inherits from :class:`~paddlenlp.transformers.PretrainedTokenizer` 
            which contains most of the methods. Users should refer to the superclass for more information regarding methods.
        label_map(obj:`dict`): The label id (key) to label str (value) map.
        batch_size(obj:`int`, defaults to 1): The batch size.

    Returns:
        results(obj:`dict`): All the predictions labels.
    """
    examples = []
    for text in data:
        input_ids, segment_ids = convert_example(
            text,
            tokenizer,
            max_seq_length=128,
            is_test=True)
        examples.append((input_ids, segment_ids))

    batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # input id
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # segment id
    ): fn(samples)

    # Separates the data into batches.
    batches = []
    one_batch = []
    for example in examples:
        one_batch.append(example)
        if len(one_batch) == batch_size:
            batches.append(one_batch)
            one_batch = []
    if one_batch:
        # The last batch whose size is less than the config batch_size setting.
        batches.append(one_batch)

    results = []
    model.eval()
    for batch in batches:
        input_ids, segment_ids = batchify_fn(batch)
        input_ids = paddle.to_tensor(input_ids)
        segment_ids = paddle.to_tensor(segment_ids)
        logits = model(input_ids, segment_ids)
        probs = F.softmax(logits, axis=1)
        idx = paddle.argmax(probs, axis=1).numpy()
        idx = idx.tolist()
        labels = [label_map[i] for i in idx]
        results.extend(labels)
    return results


@paddle.no_grad()
def evaluate(model, criterion, metric, data_loader):
    """
    Given a dataset, it evals model and computes the metric.

    Args:
        model(obj:`paddle.nn.Layer`): A model to classify texts.
        data_loader(obj:`paddle.io.DataLoader`): The dataset loader which generates batches.
        criterion(obj:`paddle.nn.Layer`): It can compute the loss.
        metric(obj:`paddle.metric.Metric`): The evaluation metric.
    """
    model.eval()
    metric.reset()
    losses = []
    for batch in data_loader:
        input_ids, token_type_ids, labels = batch
        logits = model(input_ids, token_type_ids)
        loss = criterion(logits, labels)
        losses.append(loss.numpy())
        correct = metric.compute(logits, labels)
        metric.update(correct)
        accu = metric.accumulate()
    print("eval loss: %.5f, accu: %.5f" % (np.mean(losses), accu))
    model.train()
    metric.reset()


def convert_example(example, tokenizer, max_seq_length=512, is_test=False):
    """
    Builds model inputs from a sequence or a pair of sequence for sequence classification tasks
    by concatenating and adding special tokens. And creates a mask from the two sequences passed 
    to be used in a sequence-pair classification task.
        
    A BERT sequence has the following format:

    - single sequence: ``[CLS] X [SEP]``
    - pair of sequences: ``[CLS] A [SEP] B [SEP]``

    A BERT sequence pair mask has the following format:
    ::
        0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
        | first sequence    | second sequence |

    If only one sequence, only returns the first portion of the mask (0's).


    Args:
        example(obj:`list[str]`): List of input data, containing text and label if it has a label.
        tokenizer(obj:`PretrainedTokenizer`): This tokenizer inherits from :class:`~paddlenlp.transformers.PretrainedTokenizer` 
            which contains most of the methods. Users should refer to the superclass for more information regarding methods.
        max_seq_length(obj:`int`): The maximum total input sequence length after tokenization.
            Sequences longer than this will be truncated, sequences shorter will be padded.
        is_test(obj:`bool`, defaults to `False`): Whether the example contains a label or not.

    Returns:
        input_ids(obj:`list[int]`): The list of token ids.
        token_type_ids(obj: `list[int]`): List of sequence pair mask.
        label(obj:`numpy.array`, data type of int64, optional): The input label if not is_test.
    """
    encoded_inputs = tokenizer(text=example["text"], max_seq_len=max_seq_length)
    input_ids = encoded_inputs["input_ids"]
    token_type_ids = encoded_inputs["token_type_ids"]

    if not is_test:
        label = np.array([example["label"]], dtype="int64")
        return input_ids, token_type_ids, label
    else:
        return input_ids, token_type_ids


def create_dataloader(dataset,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None,
                      trans_fn=None,
                      use_gpu=False):
    # Select the running device; the notebook above passes use_gpu=True.
    paddle.set_device('gpu' if use_gpu else 'cpu')
    if trans_fn:
        dataset = dataset.map(trans_fn)

    shuffle = True if mode == 'train' else False
    if mode == 'train':
        batch_sampler = paddle.io.DistributedBatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        batch_sampler = paddle.io.BatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)

    return paddle.io.DataLoader(
        dataset=dataset,
        batch_sampler=batch_sampler,
        collate_fn=batchify_fn,
        return_list=True)
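
For reference, a hypothetical way to call the predict helper above (the example text, label_map, model, and tokenizer are placeholders; it assumes a fine-tuned pretrained classifier and its matching tokenizer) might look like:

# Hypothetical usage sketch; `model` and `tokenizer` come from a fine-tuned pretrained model.
data = [{'text': '这个宾馆比较陈旧了,特价的房间也很一般。'}]
label_map = {0: 'negative', 1: 'positive'}
print(predict(model, data, tokenizer, label_map, batch_size=8))
# Returns a list of predicted label strings, e.g. ['negative']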

