(5) Multi-class Sentiment Analysis Using CNNs

Table of Contents

    • Preparing Data
    • Building the Model
    • Training the Model
    • User Input
    • Complete Code

In all of the previous notebooks we performed sentiment analysis on datasets with only two classes (positive or negative). When there are only two classes, the output can be a single scalar, bound between 0 and 1, indicating which class an example belongs to. When there are more than two classes, the output must be a C-dimensional vector, where C is the number of classes.
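As a rough illustration of that shape difference (not part of the original notebook; the batch and layer sizes below are made up), a binary classifier can end in a linear layer with a single output, while a multi-class classifier ends in a linear layer with C outputs:

import torch
import torch.nn as nn

batch_size, hidden_dim, num_classes = 64, 256, 6        # illustrative sizes only

binary_head = nn.Linear(hidden_dim, 1)                  # binary: one logit per example
multi_class_head = nn.Linear(hidden_dim, num_classes)   # multi-class: C logits per example

features = torch.randn(batch_size, hidden_dim)

print(binary_head(features).shape)       # torch.Size([64, 1])
print(multi_class_head(features).shape)  # torch.Size([64, 6])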

In this notebook we'll perform classification on a dataset with 6 classes. Note that this is not really a sentiment analysis dataset: it is a dataset of questions, and the task is to classify which category a question belongs to. However, everything covered here applies to any dataset whose examples consist of an input sequence belonging to one of C classes.

Below, we set up the fields and load the dataset.

The first difference is that we do not need to set the dtype on the LABEL field. When dealing with a multi-class problem, PyTorch expects the labels to be numericalized LongTensors.

The second difference is that we use TREC instead of IMDB to load the TREC dataset. The fine_grained argument lets us use the fine-grained labels (of which there are 50) or not (in which case there will be 6 classes).

Preparing Data

import torch
from torchtext import data
from torchtext import datasets

SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy', tokenizer_language='en_core_web_sm')
LABEL = data.LabelField()

train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained = False)
train_data, valid_data = train_data.split()

Let's look at one of the examples in the training set.

print(vars(train_data[-1]))
{'text': ['What', 'is', 'a', 'Cartesian', 'Diver', '?'], 'label': 'DESC'}

Next, we'll build the vocabulary. As this dataset is small (only ~3800 training examples), it also has a very small vocabulary (~7500 unique tokens), which means we do not need to set a max_size on the vocabulary as we did before (the MAX_VOCAB_SIZE of 25,000 below is never reached).


MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(train_data, 
                 max_size = MAX_VOCAB_SIZE, 
                 vectors = "glove.6B.100d", 
                 unk_init = torch.Tensor.normal_)

LABEL.build_vocab(train_data)
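As a quick sanity check (this print is an addition, not part of the original notebook), we can confirm the vocabulary sizes; the exact numbers depend on the random train/validation split:

print(f'Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}')
print(f'Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}')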

Next, we can check the labels.

The 6 labels (for the non-fine-grained case) correspond to the 6 types of question in the dataset:

  • HUM for questions about humans
  • ENTY for questions about entities
  • DESC for questions asking for a description
  • NUM for questions where the answer is numerical
  • LOC for questions where the answer is a location
  • ABBR for questions about abbreviations
print(LABEL.vocab.stoi)
defaultdict(<function _default_unk_index at 0x7f0a50190d08>, {'HUM': 0, 'ENTY': 1, 'DESC': 2, 'NUM': 3, 'LOC': 4, 'ABBR': 5})

As always, we set up the iterators.

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data), 
    batch_size = BATCH_SIZE, 
    device = device)

We'll use the CNN model from the previous notebook, and it will work on this dataset without modification. The only difference is that output_dim is now C instead of 1.

Building the Model

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout, pad_idx):
        super(CNN, self).__init__()

        # padding_idx stops the <pad> embedding from being updated during training
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)

        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(fs, embedding_dim))
            for fs in filter_sizes
        ])

        self.fc = nn.Linear(n_filters * len(filter_sizes), output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, text):

        # text = [sent_len, batch_size]

        text = text.permute(1, 0)

        # text = [batch_size, sent_len]

        embedded = self.embedding(text)

        # embedded = [batch_size, sent_len, emb_dim]

        embedded = embedded.unsqueeze(1)

        # embedded = [batch_size, 1, sent_len, emb_dim]

        convd = [conv(embedded).squeeze(3) for conv in self.convs]

        # conv_n = [batch_size, n_filters, sent_len - fs + 1]

        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in convd]

        # pooled_n = [batch_size, n_filters]

        cat = self.dropout(torch.cat(pooled, dim=1))

        # cat = [batch_size, n_filters * len(filter_sizes)]

        return self.fc(cat)

We define the model, making sure to set OUTPUT_DIM to C. We can get C easily by using the size of the LABEL vocab, much like we used the size of the TEXT vocab to get the size of the input vocabulary.

The examples in this dataset are generally a lot shorter than those in the IMDb dataset, so we'll use smaller filter sizes.

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2,3,4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

Checking the number of parameters, we can see that the smaller filter sizes mean we have about a third of the parameters of the CNN model used on the IMDb dataset.

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 834,206 trainable parameters

Next, we'll load the pre-trained embeddings.

pretrained_embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(pretrained_embeddings)
tensor([[-0.1117, -0.4966,  0.1631,  ...,  1.2647, -0.2753, -0.1325],
        [-0.8555, -0.7208,  1.3755,  ...,  0.0825, -1.1314,  0.3997],
        [ 0.1638,  0.6046,  1.0789,  ..., -0.3140,  0.1844,  0.3624],
        ...,
        [-0.3110, -0.3398,  1.0308,  ...,  0.5317,  0.2836, -0.0640],
        [ 0.0091,  0.2810,  0.7356,  ..., -0.7508,  0.8967, -0.7631],
        [ 0.4306,  1.2011,  0.0873,  ...,  0.8817,  0.3722,  0.3458]])

Then we zero the initial weights of the <unk> and <pad> tokens.

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

Training the Model

Another difference from the previous notebooks is our loss function (aka criterion). Before we used BCEWithLogitsLoss, whereas now we use CrossEntropyLoss. Without going into too much detail, CrossEntropyLoss performs a softmax over our model outputs, and the loss is given by the cross-entropy between that and the labels (a short sketch contrasting the two criteria follows the list below).

Generally speaking:

  • CrossEntropyLoss is used when our examples belong to exactly one of the C classes.
  • BCEWithLogitsLoss is used when our examples belong to only two classes (0 and 1), and is also used when an example can belong to anywhere between 0 and C classes (aka multi-label classification).
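To make the distinction concrete, here is a minimal standalone sketch (the logits and targets below are made up for illustration) showing the target shapes and dtypes each criterion expects:

import torch
import torch.nn as nn

logits = torch.randn(4, 6)  # 4 examples, 6 classes, raw scores (no softmax/sigmoid applied)

# Single-label case: each example belongs to exactly one class,
# so the target is a LongTensor of class indices with shape [batch size]
single_label_targets = torch.tensor([0, 3, 5, 2])
ce_loss = nn.CrossEntropyLoss()(logits, single_label_targets)

# Multi-label case: each example may belong to any number of classes,
# so the target is a FloatTensor of 0s and 1s with shape [batch size, n classes]
multi_label_targets = torch.tensor([[1., 0., 0., 1., 0., 0.],
                                    [0., 0., 0., 0., 0., 0.],
                                    [0., 1., 1., 0., 0., 1.],
                                    [1., 0., 0., 0., 0., 0.]])
bce_loss = nn.BCEWithLogitsLoss()(logits, multi_label_targets)

print(ce_loss.item(), bce_loss.item())
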
import torch.optim as optim

optimizer = optim.Adam(model.parameters())

criterion = nn.CrossEntropyLoss()

model = model.to(device)
criterion = criterion.to(device)

Previously, we had a function that calculated accuracy in the binary-label case, where we said that if a value was over 0.5 we assumed the prediction was positive. With more than 2 classes, our model outputs a C-dimensional vector, where the value of each element is how strongly the model believes the example belongs to that class.

For example, our labels are: 'HUM' = 0, 'ENTY' = 1, 'DESC' = 2, 'NUM' = 3, 'LOC' = 4 and 'ABBR' = 5. If the model output was something like [5.1, 0.3, 0.1, 2.1, 0.2, 0.6], this would mean the model strongly believes the example belongs to class 0 (a question about a human) and slightly believes it belongs to class 3 (a numerical question).
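To see how argmax turns such an output into a class index, here is a tiny illustration using the made-up output vector from above:

import torch

output = torch.tensor([5.1, 0.3, 0.1, 2.1, 0.2, 0.6])  # the example output above
print(output.argmax().item())  # 0, i.e. 'HUM' in our label vocabulary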

We calculate accuracy by performing an argmax to get the index of the maximum predicted value for each element in the batch, counting how many times this equals the actual label, and then averaging across the batch.

def categorical_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
    correct = max_preds.squeeze(1).eq(y)
    return correct.sum() / torch.FloatTensor([y.shape[0]])

The training loop is similar to before, except there is no need to squeeze the model predictions, as CrossEntropyLoss expects the input to be [batch size, n classes] and the labels to be [batch size].

The labels need to be a LongTensor, which they are by default, as we did not set the dtype to a FloatTensor as we did before.
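If you want to double-check this (an extra check, not in the original notebook), you can inspect the dtype of a batch of labels; it should be torch.int64, i.e. a LongTensor:

batch = next(iter(train_iterator))
print(batch.label.dtype)  # expected: torch.int64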

    
def train(model, iterator, optimizer, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.train()
    
    for batch in iterator:
        
        optimizer.zero_grad()
        
        predictions = model(batch.text)
        
        loss = criterion(predictions, batch.label)
        
        acc = categorical_accuracy(predictions, batch.label)
        
        loss.backward()
        
        optimizer.step()
        
        epoch_loss += loss.item()
        epoch_acc += acc.item()
        
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

The evaluation loop is similar to before.

def evaluate(model, iterator, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.eval()
    
    with torch.no_grad():
    
        for batch in iterator:

            predictions = model(batch.text)
            
            loss = criterion(predictions, batch.label)
            
            acc = categorical_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()
        
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

Next, we train our model.

N_EPOCHS = 5

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()
    
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    
    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')
    
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')
Epoch: 01 | Epoch Time: 0m 3s
	Train Loss: 1.281 | Train Acc: 49.74%
	 Val. Loss: 0.940 |  Val. Acc: 66.27%
Epoch: 02 | Epoch Time: 0m 3s
	Train Loss: 0.855 | Train Acc: 69.60%
	 Val. Loss: 0.772 |  Val. Acc: 73.10%
Epoch: 03 | Epoch Time: 0m 3s
	Train Loss: 0.645 | Train Acc: 77.69%
	 Val. Loss: 0.645 |  Val. Acc: 77.02%
Epoch: 04 | Epoch Time: 0m 3s
	Train Loss: 0.476 | Train Acc: 84.39%
	 Val. Loss: 0.556 |  Val. Acc: 80.35%
Epoch: 05 | Epoch Time: 0m 3s
	Train Loss: 0.364 | Train Acc: 88.34%
	 Val. Loss: 0.513 |  Val. Acc: 81.40%

Finally, let's run our model on the test set!

model.load_state_dict(torch.load('tut5-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.390 | Test Acc: 86.57%

User Input

Similar to how we made a function to predict the sentiment of any given sentence, we can now make a function that predicts the class of the question given to it.

The only difference here is that instead of using a sigmoid function to squash the output between 0 and 1, we use argmax to get the index of the highest predicted class. We then use that index with the label vocab to get the human-readable label.

import spacy

nlp = spacy.load('en_core_web_sm')

def predict_class(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    tensor = tensor.unsqueeze(1)
    preds = model(tensor)
    max_preds = preds.argmax(dim = 1)
    return max_preds.item()

Now, let's try it out on a few different questions...

pred_class = predict_class(model, "Who is Keyser Söze?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 0 = HUM

pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 3 = NUM

pred_class = predict_class(model, "What continent is Bulgaria in?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 4 = LOC

pred_class = predict_class(model, "What does WYSIWYG stand for?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 5 = ABBR

Complete Code

import torch
from torchtext import data
from torchtext import datasets

SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy', tokenizer_language='en_core_web_sm')
LABEL = data.LabelField()

train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained = False)
train_data, valid_data = train_data.split()

print(vars(train_data[-1]))

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(
    train_data,
    max_size = MAX_VOCAB_SIZE,
    vectors = 'glove.6B.100d',
    unk_init = torch.Tensor.normal_
)

LABEL.build_vocab(train_data)

print(LABEL.vocab.stoi)

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=BATCH_SIZE,
    device=device
)

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout, pad_idx):
        super(CNN, self).__init__()

        # padding_idx stops the <pad> embedding from being updated during training
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)

        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(fs, embedding_dim))
            for fs in filter_sizes
        ])

        self.fc = nn.Linear(n_filters * len(filter_sizes), output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, text):

        # text = [sent_len, batch_size]

        text = text.permute(1, 0)

        # text = [batch_size, sent_len]

        embedded = self.embedding(text)

        # embedded = [batch_size, sent_len, emb_dim]

        embedded = embedded.unsqueeze(1)

        # embedded = [batch_size, 1, sent_len, emb_dim]

        convd = [conv(embedded).squeeze(3) for conv in self.convs]

        # conv_n = [batch_size, n_filters, sent_len - fs + 1]

        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in convd]

        # pooled_n = [batch_size, n_filters]

        cat = self.dropout(torch.cat(pooled, dim=1))

        # cat = [batch_size, n_filters * len(filter_sizes)]

        return self.fc(cat)

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2, 3, 4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

pretrained_embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

import torch.optim as optim

optimizer = optim.Adam(model.parameters())

criterion = nn.CrossEntropyLoss()

model = model.to(device)
criterion = criterion.to(device)

def categorical_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
    correct = max_preds.squeeze(1).eq(y)
    return correct.sum() / torch.FloatTensor([y.shape[0]])


def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:
        optimizer.zero_grad()

        predictions = model(batch.text)

        loss = criterion(predictions, batch.label)

        acc = categorical_accuracy(predictions, batch.label)

        loss.backward()

        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, iterator, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():
        for batch in iterator:

            predictions = model(batch.text)

            loss = criterion(predictions, batch.label)

            acc = categorical_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


N_EPOCHS = 5

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')

    print(f'Epoch: {epoch + 1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc * 100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc * 100:.2f}%')

model.load_state_dict(torch.load('tut5-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')

import spacy

nlp = spacy.load('en_core_web_sm')

def predict_class(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    tensor = tensor.unsqueeze(1)
    preds = model(tensor)
    max_preds = preds.argmax(dim = 1)
    return max_preds.item()

pred_class = predict_class(model, "Who is Keyser Söze?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "What continent is Bulgaria in?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "What does WYSIWYG stand for?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
