Fine-tuning a Pretrained BERT Model for Text Classification with PyTorch, on the IMDb Movie Review Dataset

I recently got BERT fine-tuning for text classification running in PyTorch, which deepened my understanding of both PyTorch and BERT.

Here is a record of my whole training workflow.

Google Colab

Because the BERT model is large and has a huge number of parameters, it is basically impossible to train on your own machine's CPU. I find the free GPU on Google Colab very convenient, and its interactive interface is friendly for beginners.

Another advantage is that Google Colab comes with many deep-learning libraries preinstalled, such as TensorFlow and PyTorch, so you don't need to install them yourself.

Also, since Colab's servers are located overseas, downloading datasets and other resources hosted abroad is very fast, which pleasantly surprised me.

After creating the notebook, remember to enable the GPU in the settings (Runtime > Change runtime type).

Once it is enabled, device should be 'cuda':

import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'

The following command shows GPU information such as the model and available memory:

!nvidia-smi

One downside of Google Colab is that uploaded files are wiped after a period of inactivity, because its disk and memory are managed dynamically. Since it's free, though, I can live with that.
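If you need files to survive across sessions, one workaround (a minimal sketch using the standard google.colab API) is to mount your Google Drive and keep data there:

# Optional: mount Google Drive so files persist across Colab sessions.
from google.colab import drive
drive.mount('/content/drive')
# Anything saved under /content/drive/My Drive/ survives a session reset.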

Install the required libraries

transformers is the main library we need: it provides many BERT-based pretrained models, plus tokenizers and other utilities that greatly speed up data preprocessing.

!pip install transformers

Download and unpack the dataset

The dataset is the IMDb movie review dataset, a sentiment analysis benchmark. It contains 50,000 highly polarized reviews from the internet, split into 25,000 reviews for training and 25,000 for testing; both splits consist of 50% positive and 50% negative reviews. So this is a binary classification task.

Because the dataset is so well known, Keras ships a preprocessed version (already tokenized, with each word mapped to an integer id), but I wanted to preprocess the data myself, so I download the raw dataset and extract it.

!wget https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxvf aclImdb_v1.tar.gz

Preprocess the dataset

Each training sample in this dataset is a separate txt file, and because the reviews were copied straight from HTML pages, they still contain leftover HTML tags such as <br />. After stripping these tags, we gather everything into one big list of strings. The code is as follows:

import re
import os 
def rm_tags(text):
    re_tags = re.compile(r'<[^>]+>')
    return re_tags.sub(' ',text)

def read_files(filetype):
    path = "aclImdb/"
    file_list=[]
    
    positive_path = path + filetype + "/pos/"
    for f in os.listdir(positive_path):
        file_list += [positive_path + f]
        
    negative_path = path + filetype + "/neg/"
    for f in os.listdir(negative_path):
        file_list += [negative_path + f]   
        
    print("read",filetype,"files:",len(file_list))
    
    all_labels = [1]*12500 + [0]*12500  # positive files were listed first
    
    all_texts = []
    for fi in file_list:
        with open(fi,encoding = 'utf8') as file_input:
            all_texts += [rm_tags(" ".join(file_input.readlines()))]
            
    return all_labels,all_texts

y_train,train_text = read_files("train")
y_test,test_text = read_files("test")

train_text and test_text are now lists of length 25,000, each element being one review as a single string; y_train and y_test are the corresponding labels.
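An optional sanity check on the loaded data (the positive files are read first, so the first label is 1):

print(len(train_text), len(y_train))   # 25000 25000
print(train_text[0][:80], y_train[0])  # first 80 chars of a positive review, label 1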

Tokenization and encoding

Tokenization splits a sentence into individual tokens (including punctuation). A neural network cannot take raw words as input, only numbers, so the tokens then have to be encoded as integer ids (tokens to ids).

from transformers import BertTokenizer

# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

sentences=train_text
labels=y_train

test_sentences=test_text
test_labels=y_test

Print one example to see the effect:

# Print the original sentence.
print(' Original: ', sentences[0])

# Print the sentence split into tokens.
print('Tokenized: ', tokenizer.tokenize(sentences[0]))

# Print the sentence mapped to token ids.
print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0])))

BERT-specific preprocessing

The preprocessing so far would be needed for almost any text task, but BERT requires a few extra steps.
The BERT model takes four inputs to compute the loss:

  1. input_ids: the text converted to ids. Note that BERT uses the "[SEP]" token to separate two sentences, and every sample must start with a "[CLS]" token (see the BertModel documentation or the original paper). Also, every input BERT receives must have the same length, so longer texts have to be truncated and shorter ones padded, with padding done using the "[PAD]" token.
MAX_LEN=128

# truncation=True avoids the warning newer transformers versions raise
# when only max_length is set.
input_ids = [tokenizer.encode(sent, add_special_tokens=True, max_length=MAX_LEN,
                              truncation=True) for sent in sentences]
test_input_ids = [tokenizer.encode(sent, add_special_tokens=True, max_length=MAX_LEN,
                                   truncation=True) for sent in test_sentences]

from keras.preprocessing.sequence import pad_sequences
print('\nPadding token: "{:}", ID: {:}'.format(tokenizer.pad_token, tokenizer.pad_token_id))

input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", 
                          value=0, truncating="post", padding="post")

test_input_ids = pad_sequences(test_input_ids, maxlen=MAX_LEN, dtype="long", 
                          value=0, truncating="post", padding="post")

Note the hyperparameter MAX_LEN above: the length of the sequences BERT receives. Texts longer than this are truncated; shorter ones are padded. This hyperparameter has a big impact on the final result, and BERT allows at most 512.
On one hand, a larger MAX_LEN uses more GPU memory: in my tests, MAX_LEN=512 with batch_size=16 no longer fit in 16 GB, so I set it to 128.
On the other hand, MAX_LEN must not be too small. Almost every review in this dataset runs to hundreds of words, so a very small MAX_LEN discards a lot of information: in my tests, MAX_LEN=64 ended up about 4% lower in final accuracy than MAX_LEN=128. (A quick way to sanity-check the choice is sketched below.)
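
One way to choose MAX_LEN more systematically is to look at the token-length distribution of the training texts. A rough sketch (it only samples 1,000 reviews to keep it fast, and the exact percentiles will vary):

import numpy as np

# Token lengths (including [CLS] and [SEP]) for a sample of the training set.
sample_lens = [len(tokenizer.encode(s, add_special_tokens=True))
               for s in sentences[:1000]]
print(np.percentile(sample_lens, [50, 90, 95]))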

  2. attention_mask: for each text, 0 wherever the token is [PAD] and 1 otherwise.
# Create attention masks
attention_masks = []

# For each sentence...
for sent in input_ids:
    
    # Create the attention mask.
    #   - If a token ID is 0, then it's padding, set the mask to 0.
    #   - If a token ID is > 0, then it's a real token, set the mask to 1.
    att_mask = [int(token_id > 0) for token_id in sent]
    
    # Store the attention mask for this sentence.
    attention_masks.append(att_mask)

test_attention_masks = []

# For each sentence...
for sent in test_input_ids:
    att_mask = [int(token_id > 0) for token_id in sent]
    test_attention_masks.append(att_mask)
  3. token_type_ids: irrelevant for this task; they are used when each sample consists of two sentences (e.g. question answering). See the short illustration after this list.
  4. labels: the text labels, 0 for a negative review and 1 for a positive review.
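
For completeness, a small illustration of token_type_ids on a made-up sentence pair (encode_plus marks the tokens of sentence A with 0 and those of sentence B with 1):

enc = tokenizer.encode_plus("First sentence.", "Second sentence.")
print(enc["token_type_ids"])  # e.g. [0, 0, 0, 0, 0, 1, 1, 1, 1]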

Split the training data into training and validation sets

from sklearn.model_selection import train_test_split

# Use 90% for training and 10% for validation.
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids, labels, 
                                                            random_state=2020, test_size=0.1)
# Do the same for the masks, using the same random_state so they stay aligned with the inputs.
train_masks, validation_masks, _, _ = train_test_split(attention_masks, labels,
                                             random_state=2020, test_size=0.1)

Create the datasets and DataLoaders

I set the batch_size to 16.

train_inputs = torch.tensor(train_inputs)
validation_inputs = torch.tensor(validation_inputs)
test_inputs=torch.tensor(test_input_ids)

train_labels = torch.tensor(train_labels)
validation_labels = torch.tensor(validation_labels)
test_labels=torch.tensor(test_labels)

train_masks = torch.tensor(train_masks)
validation_masks = torch.tensor(validation_masks)
test_masks=torch.tensor(test_attention_masks)

from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler

# The DataLoader needs to know our batch size for training, so we specify it 
# here.
# For fine-tuning BERT on a specific task, the authors recommend a batch size of
# 16 or 32.

batch_size = 16

# Create the DataLoader for our training set.
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)

# Create the DataLoader for our validation set.
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)

# Create the DataLoader for our test set.
test_data = TensorDataset(test_inputs, test_masks, test_labels)
test_sampler = SequentialSampler(test_data)
test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=batch_size)

Create the model and move it onto CUDA

Note that the Transformers library provides pretrained BERT variants for different tasks, and both a base model and a large model, with smaller and larger parameter counts respectively.

Since this is a text classification task, we use BertForSequenceClassification, together with the AdamW optimizer that transformers provides (Adam with the weight-decay fix commonly used for BERT fine-tuning).

from transformers import BertForSequenceClassification, AdamW, BertConfig

# Load BertForSequenceClassification, the pretrained BERT model with a single 
# linear classification layer on top. 
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
    num_labels = 2, # The number of output labels--2 for binary classification.
                    # You can increase this for multi-class tasks.   
    output_attentions = False, # Whether the model returns attentions weights.
    output_hidden_states = False, # Whether the model returns all hidden-states.
)

# Tell pytorch to run this model on the GPU.
model.cuda()

Set the optimizer, number of epochs, and scheduler

optimizer = AdamW(model.parameters(),
                  lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5
                  eps = 1e-8 # args.adam_epsilon  - default is 1e-8.
                )

from transformers import get_linear_schedule_with_warmup

# Number of training epochs (authors recommend between 2 and 4)
epochs = 2

# Total number of training steps is number of batches * number of epochs.
total_steps = len(train_dataloader) * epochs

# Create the learning rate scheduler.
scheduler = get_linear_schedule_with_warmup(optimizer, 
                                            num_warmup_steps = 0, # Default value in run_glue.py
                                            num_training_steps = total_steps)
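
With num_warmup_steps = 0 the schedule is a pure linear decay from the initial learning rate down to 0 over total_steps. A tiny, self-contained illustration (using a dummy parameter and optimizer so the real ones are untouched):

# Illustration only: linear decay with no warmup, over 4 steps.
dummy_opt = AdamW([torch.nn.Parameter(torch.zeros(1))], lr=2e-5)
dummy_sched = get_linear_schedule_with_warmup(dummy_opt,
                                              num_warmup_steps=0,
                                              num_training_steps=4)
for _ in range(4):
    dummy_opt.step()
    dummy_sched.step()
    print(dummy_opt.param_groups[0]['lr'])  # 1.5e-05, 1e-05, 5e-06, 0.0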

Write a couple of small helper functions: prediction accuracy and formatted elapsed time

import numpy as np

# Function to calculate the accuracy of our predictions vs labels
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)

import time
import datetime

def format_time(elapsed):
    '''
    Takes a time in seconds and returns a string hh:mm:ss
    '''
    # Round to the nearest second.
    elapsed_rounded = int(round((elapsed)))
    
    # Format as hh:mm:ss
    return str(datetime.timedelta(seconds=elapsed_rounded))
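
A quick, optional sanity check that both helpers behave as expected:

# argmax of [[0.1, 0.9], [2.0, 0.5]] along axis 1 is [1, 0], matching the labels.
print(flat_accuracy(np.array([[0.1, 0.9], [2.0, 0.5]]), np.array([1, 0])))  # 1.0
print(format_time(3661))  # '1:01:01'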

Start training!!!

import random

seed_val = 42

random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

# Store the average loss after each epoch so we can plot them.
loss_values = []

# For each epoch...
for epoch_i in range(0, epochs):
    
    # ========================================
    #               Training
    # ========================================
    
    # Perform one full pass over the training set.

    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')

    # Measure how long the training epoch takes.
    t0 = time.time()

    # Reset the total loss for this epoch.
    total_loss = 0
    model.train()
    for step, batch in enumerate(train_dataloader):

        if step % 40 == 0 and step != 0:

            elapsed = format_time(time.time() - t0)

            print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        model.zero_grad()        
        outputs = model(b_input_ids, 
                    token_type_ids=None, 
                    attention_mask=b_input_mask, 
                    labels=b_labels)
        
        # The call to `model` always returns a tuple, so we need to pull the 
        # loss value out of the tuple.
        loss = outputs[0]
        total_loss += loss.item()

        # Perform a backward pass to calculate the gradients.
        loss.backward()

        # Clip the norm of the gradients to 1.0.
        # This is to help prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()

    # Calculate the average loss over the training data.
    avg_train_loss = total_loss / len(train_dataloader)            
    
    # Store the loss value for plotting the learning curve.
    loss_values.append(avg_train_loss)

    print("")
    print("  Average training loss: {0:.2f}".format(avg_train_loss))
    print("  Training epcoh took: {:}".format(format_time(time.time() - t0)))
        
    # ========================================
    #               Validation
    # ========================================
    # After the completion of each training epoch, measure our performance on
    # our validation set.

    print("")
    print("Running Validation...")

    t0 = time.time()

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Tracking variables 
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0

    # Evaluate data for one epoch
    for batch in validation_dataloader:
        # Add batch to GPU
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        with torch.no_grad():        
            outputs = model(b_input_ids, 
                            token_type_ids=None, 
                            attention_mask=b_input_mask)
        
        # Get the "logits" output by the model. The "logits" are the output
        # values prior to applying an activation function like the softmax.
        logits = outputs[0]

        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        
        # Calculate the accuracy for this batch of test sentences.
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
        
        # Accumulate the total accuracy.
        eval_accuracy += tmp_eval_accuracy

        # Track the number of batches
        nb_eval_steps += 1

    # Report the final accuracy for this validation run.
    print("  Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
    print("  Validation took: {:}".format(format_time(time.time() - t0)))

print("")
print("Training complete!")

Evaluate on the test set after training

t0 = time.time()
model.eval()

# Tracking variables 
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0

# Evaluate data for one epoch
for batch in test_dataloader:
    
    # Add batch to GPU
    batch = tuple(t.to(device) for t in batch)
    
    # Unpack the inputs from our dataloader
    b_input_ids, b_input_mask, b_labels = batch
    with torch.no_grad():        
        outputs = model(b_input_ids, 
                        token_type_ids=None, 
                        attention_mask=b_input_mask)
    
    # Get the "logits" output by the model. The "logits" are the output
    # values prior to applying an activation function like the softmax.
    logits = outputs[0]

    # Move logits and labels to CPU
    logits = logits.detach().cpu().numpy()
    label_ids = b_labels.to('cpu').numpy()
    
    # Calculate the accuracy for this batch of test sentences.
    tmp_eval_accuracy = flat_accuracy(logits, label_ids)
    
    # Accumulate the total accuracy.
    eval_accuracy += tmp_eval_accuracy

    # Track the number of batches
    nb_eval_steps += 1
print("  Accuracy: {0:.4f}".format(eval_accuracy/nb_eval_steps))
print("  Test took: {:}".format(format_time(time.time() - t0)))

My final results:

Accuracy: 0.8935
Test took: 0:01:35

Quite a bit higher than both the word-averaging and RNN approaches!
