Reference code: https://github.com/649453932/Bert-Chinese-Text-Classification-Pytorch
As the name suggests, this repo does Chinese text classification — concretely, classifying news text into the classes listed below, so it is a multi-class problem.
Table of Contents
I. Getting the downloaded code to run
II. How the bert model is used
Model code study - CLS text classification - Bert-Chinese-Text-Classification-Pytorch - the training and testing process
Model code study - CLS text classification - Bert-Chinese-Text-Classification-Pytorch - building the data and the data Iter class
III. Dataset processing
./utils.py walkthrough
Globals
def build_dataset(config):
def load_dataset(path, pad_size=32):
class DatasetIterater(object):
def __init__(self, batches, batch_size, device):
def _to_tensor(self, datas):
def __next__(self):
def __iter__(self):
def __len__(self):
def build_iterator(dataset, config):
def get_time_dif(start_time):
IV. Training and evaluation
./train_eval.py walkthrough
Globals
def init_network(model, method='xavier', exclude='embedding', seed=123):
def train(config, model, train_iter, dev_iter, test_iter):
def evaluate(config, model, data_iter, test=False):
def test(config, model, test_iter):
V. run.py
Summary
First, let's cover how to get this code running. Don't ask why this step is here — someone failed to get it running, and yes, that someone was me!
Download these files into the "bert_pretrain" folder.
What? You don't know where to download them? https://huggingface.co/hfl — this page has all the bert pretrained models. Go go go!
OK. What? You don't know how to run a script with command-line arguments in PyCharm?
Shift+Alt+F10, click "Edit Configurations", which opens the following dialog:
OK, click Run — see, the code runs now, doesn't it?
Below is the result of running this code on the school's server (a Quadro RTX 8000): 3 epochs took 26 minutes 52 seconds, with the following result:
That is still a touch below the 94.83% the author reports, but I only ran it once; a few more runs would probably reach it — after all, the author wrote this as practice code and has no reason to inflate the numbers. I didn't bother running the other models; the point here is to see how bert is actually used.
Naturally, my first step was to open the model folder and peek at how bert was written — would it be hundreds of thousands of lines? Opening it, I found:
# coding: UTF-8
import torch
import torch.nn as nn
# from pytorch_pretrained_bert import BertModel, BertTokenizer
from pytorch_pretrained import BertModel, BertTokenizer


class Config(object):
    """Configuration parameters"""
    def __init__(self, dataset):
        self.model_name = 'bert'
        self.train_path = dataset + '/data/train.txt'    # training set
        self.dev_path = dataset + '/data/dev.txt'        # validation set
        self.test_path = dataset + '/data/test.txt'      # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt').readlines()]    # list of class names
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'  # where the trained model is saved
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # device
        self.require_improvement = 1000                  # stop early if no improvement after 1000 batches
        self.num_classes = len(self.class_list)          # number of classes
        self.num_epochs = 3                              # number of epochs
        self.batch_size = 128                            # mini-batch size
        self.pad_size = 32                               # every sentence is padded/truncated to this length
        self.learning_rate = 5e-5                        # learning rate
        self.bert_path = './bert_pretrain'
        self.tokenizer = BertTokenizer.from_pretrained(self.bert_path)
        self.hidden_size = 768


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.bert = BertModel.from_pretrained(config.bert_path)
        for param in self.bert.parameters():
            param.requires_grad = True
        self.fc = nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, x):
        context = x[0]  # the input sentences (token ids)
        mask = x[2]     # mask over the padding, same size as the sentence; padded positions are 0, e.g. [1, 1, 1, 1, 0, 0]
        _, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
        out = self.fc(pooled)
        return out
These few dozen lines told me that I get to keep happily importing packages. No wonder a senior labmate once said bert doesn't feel that impressive — you can't touch the model, and you can't really change its parameters at will. Sure enough, the only things you can really fiddle with are the handful of fields under the configuration class.
By the way, regarding these two lines:
# from pytorch_pretrained_bert import BertModel, BertTokenizer
from pytorch_pretrained import BertModel, BertTokenizer
Searching online will mostly turn up the package in the commented-out line, pytorch_pretrained_bert. That is the package that provides the BERT pretrained model interface, and in practice these two classes are usually all you import from it. The following two lines are the typical way to load bert for use as a pretrained backbone (i.e. to load the pretrained model files you downloaded):
# load bert's tokenizer
tokenizer = BertTokenizer.from_pretrained('your_path/bert-base-uncased-vocab.txt')
# load the bert model; this folder holds the bert_config.json config file and the model.bin weight file
bert = BertModel.from_pretrained('your_path/bert-base-uncased/')
The bert model's parameter configuration all lives in the json file inside the pretrained files you downloaded; normally this is not something you modify.
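To make the call pattern above concrete, here is a minimal sketch (my own, assuming the old pytorch_pretrained_bert-style API that this repo ships as pytorch_pretrained, the ./bert_pretrain folder from the Config class, and a made-up headline) of pushing one sentence through the tokenizer and the model:

import torch
from pytorch_pretrained import BertModel, BertTokenizer  # the repo's vendored copy of pytorch_pretrained_bert

bert_path = './bert_pretrain'  # assumed folder holding vocab.txt, bert_config.json, pytorch_model.bin
tokenizer = BertTokenizer.from_pretrained(bert_path)
bert = BertModel.from_pretrained(bert_path)

tokens = ['[CLS]'] + tokenizer.tokenize('中国男篮险胜突尼斯')  # any sentence; made-up example
token_ids = torch.LongTensor([tokenizer.convert_tokens_to_ids(tokens)])
mask = torch.ones_like(token_ids)

# output_all_encoded_layers=False -> only the last layer; pooled is the [CLS] summary vector
_, pooled = bert(token_ids, attention_mask=mask, output_all_encoded_layers=False)
print(pooled.shape)  # expected: torch.Size([1, 768])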
What? You want to know how to build the bert model step by step from the paper? Be my guest! Knowing how to call the packages is already the limit of my ability.
At this point you know how to use bert as a pretrained backbone. What's left is the ~97 lines of data processing, the ~121 lines of training/evaluation, and the ~37 lines of argument setup. Plainly put: wash, chop and cook the groceries into something this bert baby can swallow, decide how much and how to feed it, and then collect the baby's feedback — tasty or not, how tasty, and how a different baby feels when fed the same thing.
But you still want to know how this code actually does those steps? Fine, wish granted — though I didn't read it too closely myself. Two other people wrote it up in reasonable detail; honestly I couldn't quite follow what they were getting at, but maybe you can.
This part lives mainly in utils.py:
Since this is about data processing, let's first look at what kind of data we are given.
As you can see, each line is a sentence followed by its class label. Ah, seems I mislabeled it earlier — this isn't news text classification, it should really be called news headline classification…
As for what each line of code means, I'm shamelessly copying straight from other people's blogs — the two posts mentioned above. No wait, I'm just backing them up in case the original authors delete their posts and I can't find them later!
utils.py mainly handles preprocessing of the dataset; the end goal is to build the batches and the iterator used for training.
import torch
from tqdm import tqdm
import time
from datetime import timedelta

PAD, CLS = '[PAD]', '[CLS]'  # padding symbol; [CLS] is bert's aggregate "summary" token


# note: in the repo this function is nested inside build_dataset(config),
# which is how it can see `config` (and its tokenizer) without taking it as an argument
def load_dataset(path, pad_size=32):
    contents = []
    with open(path, 'r', encoding='UTF-8') as f:
        for line in tqdm(f):
            lin = line.strip()
            if not lin:
                continue
            content, label = lin.split('\t')
            token = config.tokenizer.tokenize(content)
            token = [CLS] + token
            seq_len = len(token)
            mask = []
            token_ids = config.tokenizer.convert_tokens_to_ids(token)
            if pad_size:
                if len(token) < pad_size:
                    mask = [1] * len(token_ids) + [0] * (pad_size - len(token))
                    token_ids += ([0] * (pad_size - len(token)))
                else:
                    mask = [1] * pad_size
                    token_ids = token_ids[:pad_size]
                    seq_len = pad_size
            contents.append((token_ids, int(label), seq_len, mask))
    return contents
About the token_ids field in datas: the earlier tokenizer.tokenize essentially splits the Chinese text character by character; with English data this step would need some adjustment (WordPiece sub-words rather than single characters).
[101, 1367, 2349, 7566, 2193, 782, 7028, 4509, 2703, 680, 5401, 1744, 2190, 6413, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>> print([1]*10)
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
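Putting the [CLS]-prepend, padding and mask logic together, here is a standalone sketch (made-up token ids, not tied to any real vocab) of what load_dataset does for one short line:

pad_size = 32
token_ids = [101, 1367, 2349, 7566]          # made-up: [CLS] plus a 3-character headline
seq_len = len(token_ids)                     # 4, the length before padding

if len(token_ids) < pad_size:
    mask = [1] * len(token_ids) + [0] * (pad_size - len(token_ids))
    token_ids = token_ids + [0] * (pad_size - len(token_ids))
else:
    mask = [1] * pad_size
    token_ids = token_ids[:pad_size]
    seq_len = pad_size

print(len(token_ids), len(mask), seq_len)    # 32 32 4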
The higher-level function build_dataset() is what calls the load_dataset we just walked through. A labmate pointed out a possible issue here: it doesn't distinguish between training and testing runs, so both phases end up loading (and re-tokenizing) all splits. The data isn't huge, but the bulk loading is still a cost.
def build_dataset(config):
    # load_dataset (shown above) is defined inside this function in the repo
    train = load_dataset(config.train_path, config.pad_size)
    dev = load_dataset(config.dev_path, config.pad_size)
    test = load_dataset(config.test_path, config.pad_size)
    return train, dev, test
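One possible tweak for the concern above — purely a sketch of an idea, not something in the repo — is to cache each tokenized split on disk so that re-runs don't re-tokenize everything:

import os
import pickle

def load_dataset_cached(path, pad_size=32):
    # hypothetical helper: wrap load_dataset with a pickle cache stored next to the raw file
    cache_path = path + '.cache.pkl'
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return pickle.load(f)
    data = load_dataset(path, pad_size)
    with open(cache_path, 'wb') as f:
        pickle.dump(data, f)
    return data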
Judging by its name, the DatasetIterater class turns the dataset into an iterable, i.e. batched, form.
    def __init__(self, batches, batch_size, device):
        self.batch_size = batch_size
        self.batches = batches
        self.n_batches = len(batches) // batch_size
        self.residue = False  # whether there is a leftover, smaller-than-batch_size batch at the end
        if len(batches) % self.n_batches != 0:  # (likely intended to be `% self.batch_size`)
            self.residue = True
        self.index = 0
        self.device = device
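A quick numeric check of that bookkeeping (made-up sizes, just to show the intent of n_batches and residue):

n_samples, batch_size = 10000, 128
n_batches = n_samples // batch_size          # 78 full batches
leftover = n_samples % batch_size            # 16 samples left over -> residue should be True
print(n_batches, leftover, leftover != 0)    # 78 16 True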
    def _to_tensor(self, datas):
        x = torch.LongTensor([_[0] for _ in datas]).to(self.device)
        y = torch.LongTensor([_[1] for _ in datas]).to(self.device)
        # length before padding (capped at pad_size for longer sentences)
        seq_len = torch.LongTensor([_[2] for _ in datas]).to(self.device)
        mask = torch.LongTensor([_[3] for _ in datas]).to(self.device)
        return (x, seq_len, mask), y
Printing datas here shows that one datas holds one batch of data. Below is a single example from a batch; you can see the total length is 32, i.e. the length set by the hyperparameter.
_to_tensor datas [
(
[101, 1367, 2349, 7566, 2193, 782, 7028, 4509, 2703, 680, 5401, 1744, 2190, 6413, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
6,
14,
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
),
(
[101, 4125, 3215, 1520, 840, 2357, 7371, 1778, 1079, 1355, 4385, 3959, 3788, 3295, 2100, 1762, 6395, 2945, 113, 1745, 114, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
4,
21,
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
),
…
]
I also wanted to print and inspect the LongTensors here (both their values and their shapes), which gives the following:
x.shape torch.Size([128, 32]) y.shape torch.Size([128]) seq_len torch.Size([128]) mask torch.Size([128, 32])
x tensor([[ 101, 837, 5855, ..., 0, 0, 0],
[ 101, 2343, 3173, ..., 0, 0, 0],
[ 101, 2512, 6228, ..., 0, 0, 0],
...,
[ 101, 6122, 4495, ..., 0, 0, 0],
[ 101, 1849, 1164, ..., 0, 0, 0],
[ 101, 860, 7741, ..., 0, 0, 0]], device='cuda:0')
y tensor([2, 9, 3, 6, 1, 9, 6, 7, 6, 8, 1, 6, 3, 3, 7, 7, 2, 0, 9, 2, 2, 2, 2, 0,
9, 4, 0, 3, 1, 1, 5, 0, 7, 6, 0, 6, 4, 5, 0, 5, 1, 4, 3, 3, 1, 2, 7, 9,
4, 4, 2, 2, 0, 6, 3, 1, 7, 8, 4, 4, 7, 7, 4, 6, 6, 9, 0, 7, 4, 8, 1, 9,
6, 4, 7, 6, 8, 0, 5, 2, 6, 9, 7, 1, 3, 1, 4, 9, 9, 9, 9, 3, 8, 7, 1, 9,
1, 9, 0, 4, 2, 0, 0, 4, 4, 7, 0, 4, 7, 4, 7, 7, 7, 7, 6, 6, 8, 7, 1, 3,
1, 3, 7, 6, 5, 0, 6, 3], device='cuda:0')
seq_len tensor([16, 20, 20, 21, 19, 20, 16, 23, 21, 17, 21, 16, 18, 19, 25, 22, 16, 13,
18, 15, 22, 21, 20, 11, 23, 17, 15, 17, 23, 21, 20, 9, 23, 17, 17, 20,
14, 19, 20, 15, 21, 19, 20, 20, 22, 14, 27, 22, 20, 19, 21, 14, 16, 19,
13, 21, 23, 17, 17, 12, 23, 25, 19, 16, 22, 21, 12, 24, 19, 16, 21, 23,
21, 15, 23, 17, 19, 21, 20, 16, 18, 19, 24, 16, 19, 19, 14, 22, 17, 20,
18, 18, 16, 23, 21, 22, 22, 22, 20, 15, 16, 18, 18, 20, 22, 22, 16, 15,
23, 18, 24, 24, 23, 25, 15, 16, 17, 24, 22, 16, 21, 21, 22, 19, 21, 22,
21, 18], device='cuda:0')
mask tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], device='cuda:0')
    def __next__(self):
        if self.residue and self.index == self.n_batches:
            # the last, smaller-than-batch_size batch
            batches = self.batches[self.index * self.batch_size: len(self.batches)]
            self.index += 1
            batches = self._to_tensor(batches)
            return batches
        elif self.index >= self.n_batches:
            # one full pass is done: reset the cursor and stop the iteration
            self.index = 0
            raise StopIteration
        else:
            # a regular full batch
            batches = self.batches[self.index * self.batch_size: (self.index + 1) * self.batch_size]
            self.index += 1
            batches = self._to_tensor(batches)
            return batches

    # the class also defines the standard __iter__ / __len__ pair:
    def __iter__(self):
        return self

    def __len__(self):
        return self.n_batches + 1 if self.residue else self.n_batches
def build_iterator(dataset, config):
    iter = DatasetIterater(dataset, config.batch_size, config.device)
    return iter


def get_time_dif(start_time):
    """Return the elapsed time so far"""
    end_time = time.time()
    time_dif = end_time - start_time
    return timedelta(seconds=int(round(time_dif)))
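Tying utils.py together, this is roughly how run.py uses it (function names per the repo; treat this as a sketch rather than the exact script):

train_data, dev_data, test_data = build_dataset(config)
train_iter = build_iterator(train_data, config)
dev_iter = build_iterator(dev_data, config)
test_iter = build_iterator(test_data, config)

for (x, seq_len, mask), y in train_iter:
    # each element is already a LongTensor batch on the configured device, see the shapes printed above
    print(x.shape, y.shape)   # e.g. torch.Size([128, 32]) torch.Size([128])
    break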
Before getting into train_eval.py, a quick preview of sklearn's accuracy_score, which the training loop later uses to compute accuracy:
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0,0,0,2,1,3,4,5]
>>> y_true = [0,0,0,6,6,6,6,6]
>>> accuracy_score(y_true, y_pred)
0.375
>>> accuracy_score(y_true, y_pred, normalize=False)
3
train_eval.py itself starts with the following imports:
# coding: UTF-8
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn import metrics
import time
from utils import get_time_dif
from pytorch_pretrained_bert.optimization import BertAdam
Looking at the signature, how should seed=123 be understood? It doesn't even seem to be used. The purpose of any seed field is reproducibility: on the same machine, using exactly the same seed pins down the "random" values drawn during the various parameter initializations, so the results can be reproduced. That said, after some discussion we suspect that in some experiments the same seed on different machines may not have the same effect. (In this function the seed argument is indeed never used.)
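For completeness, a typical seeding routine for reproducibility looks roughly like the sketch below (my own hedged version, not the repo's code):

import random

import numpy as np
import torch

def set_seed(seed=123):
    """Pin the RNGs so parameter init and shuffling become repeatable (on one machine)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # trade a bit of speed for determinism on GPU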
# weight initialization, xavier by default
def init_network(model, method='xavier', exclude='embedding', seed=123):
    for name, w in model.named_parameters():
        if exclude not in name:
            if len(w.size()) < 2:
                continue
            if 'weight' in name:
                if method == 'xavier':
                    nn.init.xavier_normal_(w)
                elif method == 'kaiming':
                    nn.init.kaiming_normal_(w)
                else:
                    nn.init.normal_(w)
            elif 'bias' in name:
                nn.init.constant_(w, 0)
            else:
                pass
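For what it's worth, calling it looks like the sketch below (my own hedged example, not necessarily how run.py invokes it):

# hypothetical usage of the function above: xavier-init all weight matrices whose
# name does not contain 'embedding', and zero the biases
init_network(model, method='xavier', exclude='embedding', seed=123)
# note: for the bert model here you would normally NOT run this over the pretrained
# encoder (it would overwrite the pretrained weights); it is more useful for extra,
# randomly initialized layers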
Inside the training loop (shown further below), printing outputs, labels, true, predict and torch.max(outputs.data, 1) gives, for example:
outputs tensor([[ 0.3079, 0.1015, -0.5626, ..., -0.0208, 0.2517, 0.1984],
[-0.1730, 0.1862, -0.6964, ..., -0.2710, 0.4122, 0.3804],
[-0.2523, 0.0576, -0.1686, ..., -0.2864, 0.3397, 0.0802],
...,
[ 0.1749, -0.2050, -0.2825, ..., -0.5576, -0.0727, 0.1467],
[ 0.1107, -0.3328, -0.5910, ..., -0.5746, -0.1585, 0.1143],
[ 0.3721, -0.0540, -0.5997, ..., -0.2982, 0.0122, 0.4152]],
device='cuda:0', grad_fn=<AddmmBackward>)
outputs size torch.Size([128, 10])
labels tensor([7, 5, 8, 1, 9, 9, 0, 6, 7, 2, 9, 9, 2, 3, 9, 3, 7, 0, 5, 6, 1, 7, 6, 5,
1, 4, 0, 4, 0, 8, 9, 0, 9, 9, 0, 4, 4, 7, 1, 8, 3, 6, 9, 3, 1, 6, 7, 7,
5, 3, 6, 0, 7, 9, 2, 8, 5, 6, 7, 6, 6, 6, 7, 0, 0, 7, 2, 3, 6, 6, 3, 5,
5, 9, 4, 1, 0, 8, 5, 4, 7, 4, 2, 3, 1, 4, 3, 3, 7, 8, 3, 3, 1, 9, 5, 5,
1, 4, 5, 2, 7, 3, 3, 0, 6, 5, 8, 8, 4, 1, 8, 3, 0, 2, 8, 5, 6, 4, 0, 6,
4, 0, 3, 6, 3, 3, 3, 7], device='cuda:0')
labels len 128
true tensor([3, 4, 1, 7, 5, 5, 9, 1, 8, 4, 3, 7, 5, 2, 1, 8, 1, 1, 8, 4, 4, 6, 7, 1,
9, 4, 2, 9, 4, 2, 2, 9, 8, 9, 1, 3, 9, 5, 9, 6, 7, 2, 9, 5, 9, 4, 5, 6,
8, 1, 2, 1, 4, 0, 5, 4, 9, 6, 5, 5, 2, 4, 5, 5, 7, 8, 6, 7, 7, 2, 9, 0,
4, 6, 7, 2, 9, 7, 9, 0, 2, 9, 9, 4, 9, 0, 0, 4, 1, 2, 5, 5, 7, 0, 5, 9,
5, 3, 4, 6, 8, 3, 5, 9, 3, 9, 4, 9, 5, 4, 6, 2, 3, 6, 7, 4, 6, 2, 2, 2,
0, 1, 6, 4, 4, 2, 2, 3])
——————————————————————————————————————————————————————————————————————————————————
predict tensor([8, 5, 5, 0, 5, 1, 5, 5, 8, 5, 0, 5, 8, 5, 5, 5, 9, 5, 5, 0, 0, 6, 5, 9,
4, 8, 5, 5, 5, 5, 0, 5, 5, 5, 5, 6, 5, 5, 5, 9, 5, 5, 5, 5, 5, 5, 5, 5,
5, 9, 5, 5, 5, 0, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 0, 5, 5, 5,
5, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
0, 8, 5, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 5, 5, 5, 5, 5, 1, 5,
0, 5, 5, 5, 5, 8, 0, 5])
——————————————————————————————————————————————————————————————————————————————————
torch.max(outputs.data, 1) torch.return_types.max(
values=tensor([0.5065, 0.7543, 0.6870, 0.5797, 0.8060, 0.5807, 0.8916, 0.8384, 0.6661,
0.7025, 0.6533, 0.5792, 0.4674, 0.4923, 0.7330, 0.6329, 0.7567, 0.8452,
0.5539, 0.5508, 0.8430, 0.7644, 0.4222, 0.6187, 0.4145, 0.4590, 0.6177,
0.7669, 0.7348, 0.7471, 0.5506, 0.5542, 0.8766, 0.7319, 0.8065, 0.7228,
0.5451, 0.9202, 0.7277, 0.3017, 0.6730, 0.5296, 0.8899, 0.9897, 0.7398,
0.6049, 0.7202, 0.6861, 0.6422, 0.5075, 0.8285, 0.6734, 0.7960, 0.6078,
0.6625, 0.6545, 0.7238, 0.6220, 0.6018, 0.8207, 0.9552, 0.7145, 0.7219,
0.7507, 0.6705, 0.4326, 0.6819, 0.4687, 0.8995, 0.6956, 0.5216, 0.6844,
0.6044, 0.5092, 0.5973, 0.6014, 0.9122, 0.7713, 0.8200, 0.7941, 0.6144,
0.5310, 0.7001, 0.3465, 0.5593, 0.4223, 0.6370, 0.6482, 0.7080, 0.6428,
0.7696, 0.8263, 0.5839, 0.7708, 0.7660, 0.8303, 0.7790, 0.6033, 0.4704,
0.7534, 0.6832, 0.5292, 0.8298, 0.6661, 0.5930, 0.6637, 0.5390, 1.1338,
0.9344, 0.2917, 0.4034, 0.8946, 0.6636, 0.4957, 0.8308, 0.9687, 0.6173,
0.7422, 0.5396, 0.6783, 0.6139, 0.8782, 0.9697, 0.8204, 0.5765, 0.3932,
0.8845, 0.7806], device='cuda:0'),
indices=tensor([8, 5, 5, 0, 5, 1, 5, 5, 8, 5, 0, 5, 8, 5, 5, 5, 9, 5, 5, 0, 0, 6, 5, 9,
4, 8, 5, 5, 5, 5, 0, 5, 5, 5, 5, 6, 5, 5, 5, 9, 5, 5, 5, 5, 5, 5, 5, 5,
5, 9, 5, 5, 5, 0, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 0, 5, 5, 5,
5, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
0, 8, 5, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 5, 5, 5, 5, 5, 1, 5,
0, 5, 5, 5, 5, 8, 0, 5], device='cuda:0'))
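To make that torch.max(outputs.data, 1) call concrete, here is a tiny standalone example with made-up logits:

import torch

logits = torch.tensor([[0.1, 0.9, 0.0],
                       [0.7, 0.2, 0.1]])
values, indices = torch.max(logits, 1)   # per-row maximum and its column index (the argmax)
print(values)    # tensor([0.9000, 0.7000])
print(indices)   # tensor([1, 0]) -> the predicted class ids, i.e. torch.max(...)[1]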
metrics is imported from sklearn for the metric computations; metrics.accuracy_score(true, predict) is also worth a closer look. Reference: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
def train(config, model, train_iter, dev_iter, test_iter):
    start_time = time.time()
    model.train()
    param_optimizer = list(model.named_parameters())
    no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
    # optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    optimizer = BertAdam(optimizer_grouped_parameters,
                         lr=config.learning_rate,
                         warmup=0.05,
                         t_total=len(train_iter) * config.num_epochs)
    total_batch = 0               # how many batches we have gone through
    dev_best_loss = float('inf')
    last_improve = 0              # the batch at which the dev loss last improved
    flag = False                  # whether there has been no improvement for a long time
    model.train()
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        for i, (trains, labels) in enumerate(train_iter):
            outputs = model(trains)
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                # every so many batches, report performance on the training and dev sets
                true = labels.data.cpu()
                predic = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predic)
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
                if dev_loss < dev_best_loss:
                    dev_best_loss = dev_loss
                    torch.save(model.state_dict(), config.save_path)
                    improve = '*'
                    last_improve = total_batch
                else:
                    improve = ''
                time_dif = get_time_dif(start_time)
                msg = 'Iter: {0:>6}, Train Loss: {1:>5.2}, Train Acc: {2:>6.2%}, Val Loss: {3:>5.2}, Val Acc: {4:>6.2%}, Time: {5} {6}'
                print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
                model.train()
            total_batch += 1
            if total_batch - last_improve > config.require_improvement:
                # the dev loss has not dropped for over 1000 batches: stop training
                print("No optimization for a long time, auto-stopping...")
                flag = True
                break
        if flag:
            break
    test(config, model, test_iter)
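One detail worth pausing on is BertAdam's warmup=0.05: it means the learning rate ramps up linearly over the first 5% of the t_total update steps before decaying. A rough back-of-the-envelope with made-up numbers:

# what warmup=0.05 means for BertAdam, illustrated with made-up sizes
num_batches_per_epoch = 1407           # e.g. roughly 180000 samples / 128 per batch, plus the residue batch
t_total = num_batches_per_epoch * 3    # num_epochs = 3
warmup_steps = int(0.05 * t_total)
print(t_total, warmup_steps)           # 4221 211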
def evaluate(config, model, data_iter, test=False):
    model.eval()
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    with torch.no_grad():
        for texts, labels in data_iter:
            outputs = model(texts)
            loss = F.cross_entropy(outputs, labels)
            loss_total += loss
            labels = labels.data.cpu().numpy()
            predic = torch.max(outputs.data, 1)[1].cpu().numpy()
            labels_all = np.append(labels_all, labels)
            predict_all = np.append(predict_all, predic)
    acc = metrics.accuracy_score(labels_all, predict_all)
    if test:
        report = metrics.classification_report(labels_all, predict_all, target_names=config.class_list, digits=4)
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(data_iter), report, confusion
    return acc, loss_total / len(data_iter)


def test(config, model, test_iter):
    # test
    model.load_state_dict(torch.load(config.save_path))
    model.eval()
    start_time = time.time()
    test_acc, test_loss, test_report, test_confusion = evaluate(config, model, test_iter, test=True)
    msg = 'Test Loss: {0:>5.2}, Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print("Precision, Recall and F1-Score...")
    print(test_report)
    print("Confusion Matrix...")
    print(test_confusion)
    time_dif = get_time_dif(start_time)
    print("Time usage:", time_dif)
There isn't much to say about this part; it mainly holds the argument parsing and configuration needed at run time.
Once the pretrained model is downloaded, you can just run it:
# train and test:
# bert
python run.py --model bert
# bert + others
python run.py --model bert_CNN
# ERNIE
python run.py --model ERNIE
Saturdays are for staying in the lab and studying — er, slacking off (do check out Tsinghua's "Introduction to Slacking Off" and slack off more!). Yep, exactly!