[Text Classification] A Summary of Common Deep Learning Models for Text Classification

Before deep learning was widely adopted in NLP, the typical text classification pipeline had two stages (a minimal scikit-learn sketch follows the list):
(1) Manual or semi-automatic feature extraction: one-hot encoding, count features, TF-IDF, part-of-speech/syntactic information, and so on;
(2) Classifier construction: LR, NB, SVM, XGBoost, and model ensembling.
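A minimal sketch of this classical pipeline with scikit-learn (the toy corpus and labels are purely illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["this movie is great", "terrible plot and acting", "a wonderful film", "an awful movie"]
labels = [1, 0, 1, 0]

# TF-IDF features over word 1/2-grams followed by a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what a great film"]))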

After DNN language models proved highly successful and gave rise to various word embeddings (word2vec, fastText, GloVe), a natural follow-up question emerged: **how do we represent a sentence, or even an entire document?** This is the core step when applying deep learning models to text classification. This post groups the typical models into three parts by the type of encoder used to build the sentence vector: CNN, RNN, and self-attention.

1. CNN

A CNN's convolution kernels act as sliding windows over n-grams, extracting syntactic/semantic information from each n-gram.

1.1 TextCNN

An earlier post covered this model in detail from the perspective of 1-D and 2-D convolutions. Its key points are worth restating (a minimal sketch follows the list):

(1) The embedding_dim of the word vectors can be viewed as the input_dim of the subsequent convolution/FC layers, while the sentence length plays the role of the width/height dimension of an image;
(2) TextCNN extracts features with a set of convolution kernels: kernels of the same size can be thought of as capturing different aspects, while kernels of different sizes correspond to different n-grams;
(3) Since CNN models generally require inputs of the same shape, samples are padded/truncated to a common sequence length;
(4) Because the true sample lengths still differ, TextCNN applies max-pooling along the sequence-length direction on each output channel after convolution, keeping only the most salient feature. This helps extract the features that matter for classification, and it also sidesteps both the length shrinkage caused by convolution (which could alternatively be handled with same-length convolution) and the masking problem introduced by padding.
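Since the earlier post's code is not reproduced here, below is a minimal TextCNN sketch in the 1-D convolution formulation (the hyperparameter values are illustrative, not from the original paper):

import torch
import torch.nn as nn
import torch.nn.functional as F


class TextCNN(nn.Module):
    def __init__(self, num_vocab=10, embedding_dim=20, num_channel=100,
                 kernel_sizes=(2, 3, 4), num_classes=6, padding_index=0):
        super(TextCNN, self).__init__()
        self.embedding = nn.Embedding(num_vocab, embedding_dim, padding_idx=padding_index)
        # one Conv1d per kernel size; each size corresponds to a different n-gram width
        self.convs = nn.ModuleList(
            [nn.Conv1d(embedding_dim, num_channel, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_channel * len(kernel_sizes), num_classes)

    def forward(self, x_in):                                          # (Batch, Sequence)
        emb = self.embedding(x_in).transpose(1, 2)                    # (Batch, Embedding, Sequence)
        pooled = []
        for conv in self.convs:
            c = F.relu(conv(emb))                                     # (Batch, Channel, Sequence-k+1)
            pooled.append(F.max_pool1d(c, c.size(-1)).squeeze(-1))    # max over the sequence direction
        out = torch.cat(pooled, dim=-1)                               # (Batch, Channel*len(kernel_sizes))
        return self.fc(out)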

1.2 DPCNN

Unlike TextCNN, which has a single convolution layer, DPCNN borrows the deep-convolution idea from computer vision: a carefully designed, repeated "CNN + Pooling + ShortCut" block captures information far beyond the n-gram range.
(Figure: DPCNN architecture)
Each pooling step halves the representation along the sequence dimension, so the computation shrinks layer by layer like a pyramid, hence the name Deep Pyramid Convolutional Neural Network. Its core design includes:
(1) Same-length convolution: DPCNN introduces shortcut connections to keep the deep network trainable, and to make the input and output dimensions match in the skip connection it uses convolutions that preserve the sequence length;
(2) Region embedding: essentially an n-gram convolution, just as in TextCNN;
(3) Shortcut structure: as in ResNet, the input is added directly to the output, so each sub-block only needs to learn a residual;
(4) Pooling: down-sampling via pooling keeps shrinking the data along the sequence-length dimension;
(5) Pre-activation: DPCNN applies the activation before the convolution.

A sample implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F
from argparse import Namespace


args = Namespace(
    num_vocab=10,
    embedding_dim=20,
    padding_index=0,
    region_embedding_size=3,        # region embedding kernel size
    cnn_num_channel=250,
    cnn_kernel_size=3,              # kernel/padding/stride chosen so the convolution preserves length
    cnn_padding_size=1,
    cnn_stride=1,
    pooling_size=2,                 # max-pooling halves the sequence length
    num_classes=6,
)


class DPCNN(nn.Module):
    def __init__(self):
        super(DPCNN, self).__init__()
        self.embedding = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        self.region_cnn = nn.Conv1d(args.embedding_dim, args.cnn_num_channel, args.region_embedding_size)
        self.padding1 = nn.ConstantPad1d((1, 1), 0)       # restore the sequence length after the region embedding conv
        self.padding2 = nn.ConstantPad1d((0, 1), 0)       # pad on the right before pooling in _block2 so no position is lost
        self.relu = nn.ReLU()
        self.cnn = nn.Conv1d(args.cnn_num_channel, args.cnn_num_channel, kernel_size=args.cnn_kernel_size,
                             padding=args.cnn_padding_size, stride=args.cnn_stride)
        self.maxpooling = nn.MaxPool1d(kernel_size=args.pooling_size)
        self.fc = nn.Linear(args.cnn_num_channel, args.num_classes)

    def forward(self, x_in):
        emb = self.embedding(x_in)                   # Batch*Sequence*Embedding
        emb = self.region_cnn(emb.transpose(1, 2))   # Batch*Embedding*Sequence -> Batch*Embedding*(Sequence-2)
        emb = self.padding1(emb)                     # Batch*Embedding*(Sequence-2) -> Batch*Embedding*Sequence

        # first block: two (weight-shared) pre-activation convolutions + skip connection
        conv = emb + self._block1(self._block1(emb))

        # repeated blocks: pad + pool (halving the length) + two convolutions + skip connection, until length 1
        while conv.size(-1) >= 2:
            conv = self._block2(conv)

        out = self.fc(torch.squeeze(conv, dim=-1))
        return out

    def _block1(self, x):
        return self.cnn(self.relu(x))          # pre-activation: ReLU before the convolution

    def _block2(self, x):
        x = self.padding2(x)
        px = self.maxpooling(x)

        x = self.relu(px)
        x = self.cnn(x)

        x = self.relu(x)
        x = self.cnn(x)
        x = px + x
        return x
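A quick shape check of the module above (input values are arbitrary):

model = DPCNN()
x = torch.randint(1, args.num_vocab, (2, 8))   # 2 sentences of length 8 with arbitrary token ids
logits = model(x)                              # shape: (2, args.num_classes)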

2. RNN

Although RNNs lack the parallelism of CNNs, they better match the intuition of reading text sequentially: the autoregressive structure yields a hidden state for every token. To turn these into a sentence-level representation, common options are:
(1) take the hidden output at the last token;
(2) take the hidden outputs at all tokens and apply max-pooling or average-pooling along the sequence-length direction;
(3) concatenate the hidden states (of the last token, or of all tokens) and stack a further encoder on top;
(4) introduce an attention mechanism.

Although an RNN can in principle handle text of arbitrary length, padding is still required to form mini-batches. To keep the padded positions from affecting the final loss, masking is needed during processing.
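For example, option (2) above can be implemented as a masked pooling over the RNN outputs; a minimal sketch (function names and shapes are illustrative):

import torch


def masked_max_pool(rnn_out, mask):
    """rnn_out: (Batch, Sequence, Hidden); mask: (Batch, Sequence), 1 for real tokens."""
    neg_inf = torch.finfo(rnn_out.dtype).min
    masked = rnn_out.masked_fill(~mask.bool().unsqueeze(-1), neg_inf)    # padded positions never win the max
    return masked.max(dim=1).values                                      # (Batch, Hidden)


def masked_mean_pool(rnn_out, mask):
    mask = mask.float().unsqueeze(-1)                                    # (Batch, Sequence, 1)
    return (rnn_out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)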

2.1 TextRNN

TextRNN simply takes the hidden-layer output at the last (non-padding) token as the sentence vector.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence
from argparse import Namespace


args = Namespace(
    num_vocab=10,
    embedding_dim=20,
    padding_index=0,
    rnn_hidden_size=10,
    rnn_bidirection=True,
    rnn_layer=2,
    num_classes=6,
    drop_rate=0.5,
)

class TextRNN_Var(nn.Module):
    """
    考虑变长变量的,利用PackedSequence对象,但先排序(适应若干版本对序列的要求)
    """
    def __init__(self):
        super(TextRNN_Var, self).__init__()
        self.embedding = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        if args.rnn_bidirection:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=True, num_layers=args.rnn_layer, batch_first=True)
            self.fc = nn.Linear(2*args.rnn_hidden_size, args.num_classes)
        else:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=False, num_layers=args.rnn_layer, batch_first=True)
            self.fc = nn.Linear(args.rnn_hidden_size, args.num_classes)

    def forward(self, x_in, x_lengths):
        emb = self.embedding(x_in)
        _, idx_sort = torch.sort(x_lengths, dim=0, descending=True)       # indices that sort sentences by length (desc)
        _, idx_unsort = torch.sort(idx_sort, dim=0)                       # indices that restore the original order
        emb = torch.index_select(emb, dim=0, index=idx_sort)              # embeddings reordered by sentence length
        x_lengths = x_lengths[idx_sort].long().cpu().numpy().tolist()
        pack_emb = pack_padded_sequence(emb, lengths=x_lengths, batch_first=True)
        _, (h, _) = self.rnn(pack_emb)                                    # h: (Layer*Direction, Batch, HiddenSize)
        if args.rnn_bidirection:
            out = torch.cat([h[-1], h[-2]], dim=-1)                       # concatenate the last layer's two directions
        else:
            out = h[-1]
        out = F.dropout(out, args.drop_rate, training=self.training)
        out = torch.index_select(out, dim=0, index=idx_unsort)            # undo the sorting
        out = self.fc(out)
        return out
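A usage sketch (token ids and lengths are made up):

model = TextRNN_Var()
x = torch.tensor([[1, 2, 3, 4], [5, 6, 0, 0]])   # second sentence padded with index 0
lengths = torch.tensor([4, 2])                   # true lengths before padding
logits = model(x, lengths)                       # shape: (2, args.num_classes)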

2.2 TextRNN + Attention

Building on TextRNN, an attention mechanism is added: an extra learnable vector attends over the hidden states of all tokens, and their weighted sum is taken as the final sentence vector.

An implementation:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence, pad_sequence
from argparse import Namespace


args = Namespace(
    num_vocab=6,
    embedding_dim=5,
    padding_index=0,
    rnn_hidden_size=5,
    rnn_bidirection=True,
    rnn_layer=2,
    num_classes=6,
    drop_rate=0.5,
)


class TextRNN_Attention(nn.Module):
    """
    考虑变长变量的, 利用PackedSequence对象
    """
    def __init__(self):
        super(TextRNN_Attention, self).__init__()
        self.embedding = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        if args.rnn_bidirection:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=True, num_layers=args.rnn_layer, batch_first=True, dropout=args.drop_rate)
            self.att = nn.Parameter(torch.zeros(2*args.rnn_hidden_size), requires_grad=True)
            self.fc = nn.Linear(2*args.rnn_hidden_size, args.num_classes)

        else:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=False, num_layers=args.rnn_layer, batch_first=True, dropout=args.drop_rate)
            self.att = nn.Parameter(torch.zeros(args.rnn_hidden_size), requires_grad=True)
            self.fc = nn.Linear(args.rnn_hidden_size, args.num_classes)

    def forward(self, x_in, x_lengths):
        mask = pad_sequence([torch.ones(i) for i in x_lengths], batch_first=True)       # padding mask built from the true lengths
        emb = self.embedding(x_in)
        pack_emb = pack_padded_sequence(emb, lengths=x_lengths, batch_first=True, enforce_sorted=False)
        out, _ = self.rnn(pack_emb)
        out, _ = pad_packed_sequence(out, batch_first=True)      # (Batch, Sequence, Bidirection*HiddenSize)
        alphas = torch.matmul(out, self.att.unsqueeze(-1)).squeeze(-1)
        alphas.masked_fill_(~mask.bool(), -np.inf)
        alphas = F.softmax(alphas, dim=-1)              # (Batch, Sequence)
        out = torch.matmul(alphas.unsqueeze(1), out).squeeze(1)
        out = self.fc(out)
        return out

2.3 TextRCNN

TextRCNN can be seen as a simple combination of TextRNN and TextCNN:
(1) an RNN (typically a biLSTM or biGRU) produces a hidden representation for each token, which is concatenated with the original word embedding to give the token-level representation;
(2) max-pooling along the sequence-length direction turns these into the sentence vector;
(3) a fully connected layer produces the classification result.
(Figure: TextRCNN architecture)
A simple implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F
from argparse import Namespace


args = Namespace(
    num_vocab=10,
    embedding_dim=20,
    padding_index=0,
    rnn_hidden_size=10,
    rnn_bidirection=True,
    rnn_layer=2,
    num_classes=6,
    drop_rate=0.5,
)


class TextRCNN(nn.Module):
    def __init__(self):
        super(TextRCNN, self).__init__()
        self.embedding = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        if args.rnn_bidirection:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=True, num_layers=args.rnn_layer, batch_first=True, dropout=args.drop_rate)
            self.fc = nn.Linear(2*args.rnn_hidden_size+args.embedding_dim, args.num_classes)
        else:
            self.rnn = nn.LSTM(args.embedding_dim, args.rnn_hidden_size, bidirectional=False, num_layers=args.rnn_layer, batch_first=True, dropout=args.drop_rate)
            self.fc = nn.Linear(args.rnn_hidden_size+args.embedding_dim, args.num_classes)

    def forward(self, x_in):
        emb = self.embedding(x_in)                         # (Batch, Sequence, Embedding)
        out, _ = self.rnn(emb)                             # (Batch, Sequence, Direction*Hidden)
        out = torch.cat([emb, out], dim=-1)                # concatenate word embedding and RNN hidden state
        out = F.relu(out).transpose(1, 2)                  # (Batch, Feature, Sequence)
        out = F.max_pool1d(out, out.size(-1)).squeeze(-1)  # max-pool over the sequence dimension
        out = self.fc(out)
        return out

3. Self-Attention

Self-attention lets every token in a sentence interact with every other token, which captures the sentence semantics more effectively.

Its typical instance, and by far the most common baseline today, is the family of BERT-style pre-trained models. Analogously to the options above, either the pooled output or the (aggregated) sequence output can serve as the sentence representation.
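As an illustration, here is a minimal classification head on top of a BERT encoder, assuming the HuggingFace transformers library (a recent version that returns ModelOutput objects); the pretrained model name and hyperparameters are placeholders:

import torch.nn as nn
from transformers import BertModel


class BertClassifier(nn.Module):
    def __init__(self, pretrained_name="bert-base-chinese", num_classes=6, use_pooled=True):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained(pretrained_name)
        self.use_pooled = use_pooled
        self.fc = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        if self.use_pooled:
            sent_vec = outputs.pooler_output                            # pooled [CLS] representation
        else:
            hidden = outputs.last_hidden_state                          # (Batch, Sequence, Hidden)
            mask = attention_mask.unsqueeze(-1).float()
            sent_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # mean over real tokens
        return self.fc(sent_vec)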

4. Other Models

Besides the typical DNN models above, the following classic models rework the text classification task from other angles.

4.1 LEAM

The models above use only the information contained in the sentence itself. In a supervised setting, however, encoding the target labels and bringing them into training is another potentially valuable source of information. LEAM does exactly this: it embeds the class labels and lets them attend over the text representation. Its basic steps are:
(1) encode the text with pre-trained word embeddings, and allocate a learnable embedding of the same dimension for each class label;
(2) compute attention between the text representation and the label representations, yielding a logit for every (token, label) pair, then max-pool along the label dimension to keep each token's strongest label affinity;
(3) apply softmax over these logits to obtain the final attention scores;
(4) take the weighted average of the token vectors to obtain the final sentence vector;
(5) feed it into fully connected layers for classification.

Reference code:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
from argparse import Namespace


args = Namespace(
    num_vocab=10,
    embedding_dim=20,
    padding_index=0,
    cnn_kernel_size=3,              # kernel size must give a same-length convolution
    cnn_padding_size=1,
    pooling_size=2,                 # not used in this sketch
    hidden_size=20,                 # hidden size of the two-layer FC head
    num_classes=6,                  # number of labels
    active=True,                    # whether to apply an activation after the convolution
    normalize=True                  # whether to L2-normalize the embeddings
)


class LEAM(nn.Module):
    """
    Joint embedding of words and labels for text classification
    """
    def __init__(self):
        super(LEAM, self).__init__()
        self.embedding = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        self.label_embedding = nn.Embedding(args.num_classes, args.embedding_dim)
        assert args.cnn_kernel_size % 2 and args.cnn_padding_size == args.cnn_kernel_size//2       # guarantee the convolution preserves the sequence length
        if args.active:
            self.cnn = nn.Sequential(
                nn.Conv1d(args.num_classes, args.num_classes, kernel_size=args.cnn_kernel_size, padding=args.cnn_padding_size),
                nn.ReLU()
            )
        else:
            self.cnn = nn.Conv1d(args.num_classes, args.num_classes, kernel_size=args.cnn_kernel_size, padding=args.cnn_padding_size)

        self.fc1 = nn.Linear(args.embedding_dim, args.hidden_size)
        self.fc2 = nn.Linear(args.hidden_size, args.num_classes)

    def forward(self, x_in, x_length):
        mask = pad_sequence([torch.ones(i) for i in x_length], batch_first=True)       # Batch*Sequence
        emb = self.embedding(x_in)           # Batch*Sequence*Embedding
        label_embedding = self.label_embedding.weight        # LabelNum*Embedding
        if args.normalize:
            emb = F.normalize(emb, dim=2)
            label_embedding = F.normalize(label_embedding, dim=1)

        # Attention
        att_v = torch.matmul(emb, label_embedding.transpose(0, 1))    # Batch*Sequence*LabelNum
        att_v = self.cnn(att_v.transpose(1, 2)).transpose(1, 2)    # Batch*Sequence*LabelNum
        att_v = F.max_pool1d(att_v, att_v.size(2)).squeeze(2)      # Batch*Sequence
        att_v = F.softmax(att_v.masked_fill_(~mask.bool(), -np.inf), dim=1)    # Batch*Sequence
        att_out = torch.matmul(att_v.unsqueeze(1), emb).squeeze(1)        # Batch*Embedding

        # fully connected head, with a non-linearity between the two layers
        out = self.fc2(F.relu(self.fc1(att_out)))
        return out
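A usage sketch (note that the mask is built from the true lengths, so x_in should be padded exactly to the longest sentence in the batch):

model = LEAM()
x = torch.tensor([[1, 2, 3], [4, 5, 0]])   # second sentence padded with index 0
lengths = [3, 2]                           # true lengths before padding
logits = model(x, lengths)                 # shape: (2, args.num_classes)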

4.2 HAN

The models above classify individual sentences: what about whole documents? HAN proposes a hierarchical representation that follows the token -> sentence -> document hierarchy, with an attention mechanism bridging each level.

Reference code:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
from argparse import Namespace


args = Namespace(
    num_vocab=10,
    embedding_dim=20,
    padding_index=0,
    rnn_hidden_size=5,              # hidden size of the word-level and sentence-level RNNs
    rnn_bidirectional=True,         # bidirectional RNNs
    att_hidden_size=6,              # hidden size of the attention layers
    num_classes=6,                  # number of labels

)


def mask_softmax(x, mask, dim=-1):
    """
    Softmax that ignores masked (padding) positions.
    :param x: input tensor
    :param mask: mask tensor, 1 for valid positions
    :return: softmax over dim, with masked positions receiving zero weight
    """
    if mask is None:
        return F.softmax(x, dim=dim)
    x_masked = x.masked_fill(~mask.bool(), -np.inf)
    return F.softmax(x_masked, dim=dim)


class SelfAttention(nn.Module):
    """
    Additive attention implemented with fully connected layers and a learnable context vector.
    :arg input_size: int, feature size of the input
    :arg hidden_size: int, size of the attention hidden layer
    """
    def __init__(self, input_size, hidden_size):
        super(SelfAttention, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size, bias=True)
        self.fc2 = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, x, mask=None, dim=-1):
        att_v = torch.tanh(self.fc1(x))
        att_v = self.fc2(att_v)
        att_v.squeeze_(dim=-1)
        if mask is None:
            mask = torch.ones(att_v.shape)

        scores = mask_softmax(att_v, mask=mask, dim=dim)
        out = torch.matmul(scores.unsqueeze(1), x).squeeze(1)
        return out


class HAN(nn.Module):
    def __init__(self):
        super(HAN, self).__init__()
        self.emb = nn.Embedding(args.num_vocab, args.embedding_dim, padding_idx=args.padding_index)
        self.rnn1 = nn.GRU(args.embedding_dim, args.rnn_hidden_size, bidirectional=args.rnn_bidirectional, batch_first=True)
        m = 2 if args.rnn_bidirectional else 1
        self.att1 = SelfAttention(m*args.rnn_hidden_size, args.att_hidden_size)
        self.rnn2 = nn.GRU(m*args.rnn_hidden_size, args.rnn_hidden_size, bidirectional=args.rnn_bidirectional, batch_first=True)
        self.att2 = SelfAttention(m*args.rnn_hidden_size, args.att_hidden_size)
        self.fc = nn.Linear(m*args.rnn_hidden_size, args.num_classes)

    def forward(self, x_in, mask1, mask2):
        """
        :param x_in: 文档张量(Batch, Document, Sequence)
        :param mask1: 各Document中Sequence方向的mask (Batch*Document)
        :param mask2: 各Sequence中Word方向的mask (Batch)
        :return:
        """
        emb = self.emb(x_in)                   # (Document, Sequence, Word, Embedding)
        b, d, s, e = emb.shape
        rnn1_in = emb.view(-1, s, e)          # (Document*Sequence, Word, Embedding)
        rnn1_out, _ = self.rnn1(rnn1_in)     # (Document*Sequence, Word, RnnHidden)
        att1_out = self.att1(rnn1_out, mask1)        # (Document*Sequence, RnnHidden)
        rnn2_in = att1_out.view(b, -1, att1_out.size(-1))        # (Document, Sequence, RnnHidden)
        rnn2_out, _ = self.rnn2(rnn2_in)      # (Document, Sequence, RnnHidden)
        att2_out = self.att2(rnn2_out, mask2)   # (Document, RnnHidden)
        out = self.fc(att2_out)      # (Document, class)
        return out
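A usage sketch that clarifies the expected mask shapes (all values are arbitrary):

model = HAN()
x = torch.randint(1, args.num_vocab, (2, 3, 4))   # 2 documents, 3 sentences each, 4 words per sentence
mask1 = torch.ones(2 * 3, 4)                      # word-level mask: (Batch*Sentences, Words)
mask2 = torch.ones(2, 3)                          # sentence-level mask: (Batch, Sentences)
logits = model(x, mask1, mask2)                   # shape: (2, args.num_classes)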

5. Summary

Text classification is a classic NLP task: it applies directly to scenarios such as sentiment analysis, intent recognition, and relation extraction, and it is also an important building block of more complex tasks.

Its core challenge is to represent sentence semantics accurately. Existing methods generally start from token representations and build representations of longer text bottom-up. In practice, CNN, RNN, and Transformer (self-attention) encoders can be used to capture and extract information, and various attention mechanisms can weigh the relationships among different parts of the sentence, leading to a better representation of the whole sentence.

References:

  1. TextCNN: Convolutional Neural Networks for Sentence Classification
  2. TextRNN: Recurrent Neural Network for Text Classification with Multi-Task Learning
  3. TextRNN-Attention: Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification (https://www.aclweb.org/anthology/P16-2034.pdf)
  4. DPCNN: Deep Pyramid Convolutional Neural Networks for Text Categorization
  5. HAN: Hierarchical Attention Networks for Document Classification
  6. LEAM: Joint Embedding of Words and Labels for Text Classification
