【NLP】12 Applying RNNs to the NLP Task of Sentiment Classification: LSTM (hidden, output), Attention, Transformer Encoder


  • 1. LSTM-hidden
    • 1.1 Debugging process
    • 1.2 Results
    • 1.3 Full code
  • 2. LSTM-output
  • 3. Attention
  • 4. Transformer
  • 5. Full code
  • 6. Summary

1. LSTM-hidden

The training, validation, and test sets all share the following format:

什么破烂反派,毫无戏剧冲突能消耗两个多小时生命,还强加爱情戏。脑残片好圈钱倒是真的。 NEG
机甲之战超超好看,比变形金刚强;人,神,变异人,人工智能互殴,强强强强;每一小段末句都是槽或者笑点,应该死了不少编剧;Jane不来客串,雷神没露,扣分;女神配怪兽,fair enough;美国队长我最喜欢他的盾,大概因为紫龙;难得人物多次发表演讲还不死;最后,找到了下半年新发型,开心! POS
啦啦啦 NORM
...
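
Label parsing just grabs the three-character tail of each line; a condensed view of the loader in 1.3 (NORM sentences are skipped):

import jieba
import torch

def parse_line(line):
    tag = line[-4:-1]                     # 'POS', 'NEG', or 'ORM' (the tail of 'NORM')
    if tag == 'ORM':                      # neutral sentences are not used
        return None
    label = torch.tensor([1]) if tag == 'POS' else torch.tensor([0])
    tokens = ' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')
    return tokens, label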

1.1 Debugging process

Printing once every 5000 training sentences, the (loss, accuracy) pairs on the validation set and the test set were:

训练集句子总数:579947
验证集句子总数:835633
测试集句子总数:6582
cuda
(0.6757109771410998, 0.5973040848751986) (0.6478104839284357, 0.6675281240498632)
(4.683135150947266, 0.758642873125979) (5.7317852833308, 0.693523867436911)

Results came out far too slowly, presumably because the validation set is so large. We do not really need the validation loss either, so add timing information and print every 2000 sentences instead.

Accuracy fluctuates quite a bit, so now the error is backpropagated once every 20 sentences, accumulating the loss in between.
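
A minimal sketch of that accumulation loop, assuming a single Adam optimizer named optimizer (the full code in 1.3 uses the same structure, with two optimizers and a step of 64):

losses = 0
for i in range(len(train_dataset)):
    classification = model(train_dataset[i])
    losses = losses + criterion(classification, train_label[i].cuda())
    if (i + 1) % 20 == 0:           # one backward pass per 20 sentences
        losses.backward()
        optimizer.step()
        optimizer.zero_grad()
        losses = 0

The log: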

训练集句子总数:579947
验证集句子总数:5636
测试集句子总数:6582
cuda
Epoch: 01 | 0 | Time: 0m 0s | Loss:0.7626190185546875
	Validation Accuracy:0.45136612021857925 Test Accuracy:0.4414715719063545
Epoch: 01 | 5000 | Time: 0m 0s | Loss:2.622600959512056e-06
	Validation Accuracy:0.8590163934426229 Test Accuracy:0.6953481301307388
Epoch: 01 | 10000 | Time: 0m 0s | Loss:4.880456447601318
	Validation Accuracy:0.6145719489981785 Test Accuracy:0.7705989662511402
Epoch: 01 | 15000 | Time: 0m 0s | Loss:1.9933998584747314
	Validation Accuracy:0.7180327868852459 Test Accuracy:0.7873213742778961
Epoch: 01 | 20000 | Time: 0m 0s | Loss:0.5889929533004761
	Validation Accuracy:0.6122040072859745 Test Accuracy:0.7906658558832472
Epoch: 01 | 25000 | Time: 0m 0s | Loss:0.0062605151906609535
	Validation Accuracy:0.8446265938069216 Test Accuracy:0.7172392824566738
Epoch: 01 | 30000 | Time: 0m 0s | Loss:0.008551644161343575
	Validation Accuracy:0.8619307832422587 Test Accuracy:0.6955001520218912
Epoch: 01 | 35000 | Time: 0m 0s | Loss:0.003599713556468487
	Validation Accuracy:0.1714025500910747 Test Accuracy:0.37017330495591366
Epoch: 01 | 40000 | Time: 0m 0s | Loss:0
	Validation Accuracy:0.5411657559198543 Test Accuracy:0.8332319854058985
Epoch: 01 | 45000 | Time: 0m 0s | Loss:0
	Validation Accuracy:0.6375227686703097 Test Accuracy:0.8259349346305868
Epoch: 01 | 50000 | Time: 0m 0s | Loss:0
	Validation Accuracy:0.4387978142076503 Test Accuracy:0.8279112192155671
Epoch: 01 | 55000 | Time: 0m 0s | Loss:0
	Validation Accuracy:0.6608378870673952 Test Accuracy:0.823198540589845

The model looks somewhat overfit: validation performance is poor and swings widely. Add early stopping, halting once validation accuracy has decreased twice. Result:

Epoch: 01 | 0 | Time: 0m 0s | Loss:0.7791017293930054
	Validation Accuracy:0.32932604735883425 Test Accuracy:0.30054727880814835
Finish training at 0 epoch 319 
	Validation Accuracy:0.8571948998178507 Test Accuracy:0.6948920644572818
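
The stopping rule itself is only a few lines (a sketch; the full code in 1.3 keeps an equivalent, commented-out version that tracks validation loss instead of accuracy):

best_acc = 0
patience = 2
...
loss_valuate, acc_valuate = valuate()
if acc_valuate >= best_acc:
    best_acc = acc_valuate
    patience = 2                        # reset on improvement
else:
    patience -= 1
    if patience == 0:                   # two drops since the last improvement: stop
        print('Finish training at {} epoch'.format(epoch))
        return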

Backpropagating once every 64 sentences instead:

Finish training at 0 epoch 383 
	Validation Accuracy:0.8562841530054645 Test Accuracy:0.6948920644572818

Changing the LSTM to 2 layers:

Finish training at 0 epoch 447 
	Validation Accuracy:0.857559198542805 Test Accuracy:0.6951961082395866

Giving the validation set more data:

训练集句子总数:579947
验证集句子总数:14678
测试集句子总数:6582
Finish training at 0 epoch 703 
	Validation Accuracy:0.7404820118756549 Test Accuracy:0.6948920644572818

With two layers the hidden states should not simply be added; add a linear mapping from the two layers down to one, and at the end plot the summed loss per 64 sentences:

Epoch: 01 | 63 | Time: 0m 3s | Loss:36.484230041503906
	max_tmp:0
Epoch: 01 | 127 | Time: 0m 33s | Loss:23.735937118530273
	max_tmp:0.7449528466643381
Epoch: 01 | 191 | Time: 0m 47s | Loss:14.672348022460938
	max_tmp:0.7449528466643381
Finish training at 0 epoch 191 
	Validation Accuracy:0.7449528466643381 Test Accuracy:0.6951961082395866

[Figure 1: summed loss per 64 sentences]
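
The 2-to-1 mapping is the self.fc = nn.Linear(4, 1) in ClassificationModel (full code in 1.3): a 2-layer bidirectional RNN returns hidden with shape (num_layers * num_directions, 1, hidden_size) = (4, 1, 256), and the linear layer learns a weighted mix of the four slices rather than a plain sum:

emo = hidden.permute(1, 2, 0)       # 1 * 256 * 4
emo = emo.squeeze(0)                # 256 * 4
emo = self.fc(emo)                  # 256 * 1: learned mix of the 4 layer/direction states
emo = self.emo(emo.permute(1, 0))   # 1 * 2 class logits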

Changing to 10 layers:

Epoch: 01 | 63 | Time: 0m 8s | Loss:54.9057502746582
	max_tmp:0
Epoch: 01 | 127 | Time: 1m 27s | Loss:43.604095458984375
	max_tmp:0.5093258819420189
Epoch: 01 | 191 | Time: 2m 46s | Loss:33.390384674072266
	max_tmp:0.7439049947607405
Epoch: 01 | 255 | Time: 4m 5s | Loss:21.977874755859375
	max_tmp:0.7449528466643381
Epoch: 01 | 319 | Time: 4m 46s | Loss:7.002665042877197
	max_tmp:0.7449528466643381
Finish training at 0 epoch 319 
	Validation Accuracy:0.7449528466643381 Test Accuracy:0.6951961082395866

[Figure 2: summed loss per 64 sentences, 10 layers]
The accuracy here is identical to before. I changed the stopping condition from greater-than to greater-than-or-equal; the loss then kept falling but validation accuracy never moved, which was odd. This morning I looked at the original training file: the first NEG label does not appear until line 5323, so the network can score well simply by labelling every sentence POS.

训练集句子总数:579947
POS:308422 NEG:271525		# 0.5318
验证集句子总数:14678
POS:10909 NEG:3719		# 0.7432
测试集句子总数:6582
POS:4574 NEG:2008		# 0.6949

That must be it: an imbalanced dataset! The ratios differ slightly from the accuracies the program reports because not every sentence yields a sentence vector; those without any in-vocabulary words are skipped with continue. (The validation POS ratio of 0.7432 matches the 0.7450 accuracy plateau above almost exactly.)

Sampling training sentences at random instead:

i = random.randint(0, len(train_dataset)-1)

The behaviour is still extreme:

	Validation Accuracy:0.2550471533356619 Test Accuracy:0.3048038917604135
	Validation Accuracy:0.2550471533356619 Test Accuracy:0.3048038917604135
	Validation Accuracy:0.2550471533356619 Test Accuracy:0.3048038917604135

So it predicts either all POS or all NEG. The fix: update the parameters of both networks together, backpropagate once every 64 sentences, evaluate every 2000 sentences, and stop once validation accuracy has dropped three times in a row.
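
The paired update, matching the train() function in 1.3: one Adam optimizer for the classifier and one for the RNN, both stepped on the same accumulated loss:

optimizer_model = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer_rnn = torch.optim.Adam(rnn.parameters(), lr=1e-3)
...
if cnt % 64 == 0:
    losses.backward()
    optimizer_model.step()          # classifier head
    optimizer_rnn.step()            # GRU/LSTM encoder
    losses = 0
    optimizer_model.zero_grad()
    optimizer_rnn.zero_grad()

The result: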

...
Epoch: 9984 | Time: 4m 10s | Loss:35.62297821044922
Finish training at 10023 epoch
	Validation Accuracy:0.554658110201372 Test Accuracy:0.7929461842505321 Train Accuracy:0.5529679376083189

[Figure 3: loss curve]
Computing validation accuracy once per full pass over the training set:

...
Epoch: 0 | 578496 | Time: 92m 9s | Loss:22.1627140045166
	max_tmp:0.5487299988810563
...
Epoch: 1 | 1157056 | Time: 185m 17s | Loss:18.839628219604492
	max_tmp:0.5491775763679086
...
Epoch: 2 | 1735616 | Time: 278m 15s | Loss:30.74102783203125
Epoch: 3 | 1735680 | Time: 278m 42s | Loss:17.16641616821289
...

Replacing the LSTM with a GRU: 20 layers, bidirectional=True, dropout=0.1.
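
The swap is a one-line change in Net (the LSTM line stays behind, commented out, in the full code in 1.3):

# self.lstm = nn.LSTM(word_vec.vector_size, hidden_size, num_layers=20, bidirectional=True)
self.gru = nn.GRU(word_vec.vector_size, hidden_size, num_layers=20, bidirectional=True, dropout=0.1)

The first iterations: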

Epoch: 0 | 1000 | Time: 1m 0s | Average Loss:51.44233322143555
Epoch: 0 | 2000 | Time: 2m 3s | Average Loss:45.25587844848633
Epoch: 0 | 3000 | Time: 3m 4s | Average Loss:43.14613723754883
Epoch: 0 | 4000 | Time: 4m 9s | Average Loss:42.77010726928711
...

Training is far too slow…

Trying SGD in place of Adam.
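
Only the optimizer constructors change (a sketch; I am assuming the learning rate stayed at 1e-3, which is not recorded here):

optimizer_model = torch.optim.SGD(model.parameters(), lr=1e-3)
optimizer_rnn = torch.optim.SGD(rnn.parameters(), lr=1e-3)

The resulting log: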

Epoch: 0 | 1000 | Time: 1m 2s | Average Loss:44.23997497558594
Epoch: 0 | 2000 | Time: 2m 9s | Average Loss:43.98634338378906
...

1.2 Results

Still very slow. Cut the layers from 20 back to 2 and, to speed up training, use only 15% of the original dataset for training; gradient descent stays with Adam:

...
Epoch: 0 | 85000 | Time: 7m 17s | Average Loss:27.285926818847656
loss_valuate:0.837062966359744
	Validation Accuracy:0.5368584414144774 Test Accuracy:0.854058984493767
...
Epoch: 1 | 85000 | Time: 14m 54s | Average Loss:25.68672752380371
loss_valuate:0.8672566204030873
	Validation Accuracy:0.5397406052544064 Test Accuracy:0.848434174521131
...
Epoch: 2 | 85000 | Time: 22m 30s | Average Loss:24.318418502807617
loss_valuate:0.9009444542595056
	Validation Accuracy:0.5494956213280124 Test Accuracy:0.8440255396777135
...
Epoch: 3 | 85000 | Time: 31m 34s | Average Loss:21.747648239135742
loss_valuate:1.0091007267764105
	Validation Accuracy:0.544950670657355 Test Accuracy:0.8282152629978717
...
Epoch: 4 | 85000 | Time: 39m 12s | Average Loss:19.935115814208984
loss_valuate:1.074355800008314
	Validation Accuracy:0.5378561135129143 Test Accuracy:0.8084524171480694
...
Epoch: 5 | 85000 | Time: 47m 39s | Average Loss:17.499746322631836
loss_valuate:1.207078773660071
	Validation Accuracy:0.5343088349406939 Test Accuracy:0.8165095773791426
...
Epoch: 6 | 85000 | Time: 56m 49s | Average Loss:16.686220169067383
loss_valuate:1.1837956443645383
	Validation Accuracy:0.5364150315929498 Test Accuracy:0.8029796290665856
...
Epoch: 7 | 85000 | Time: 67m 25s | Average Loss:16.701534271240234
loss_valuate:1.22396472034004
	Validation Accuracy:0.5384103757898238 Test Accuracy:0.8155974460322286
...
Epoch: 8 | 85000 | Time: 75m 3s | Average Loss:15.525861740112305
loss_valuate:1.2596575871577431
	Validation Accuracy:0.5338654251191663 Test Accuracy:0.8189419276375798
...
Epoch: 9 | 85000 | Time: 82m 41s | Average Loss:15.818358421325684
loss_valuate:1.313194776424454
	Validation Accuracy:0.5368584414144774 Test Accuracy:0.8241106719367589
	 Train Accuracy:0.4994024986420424

From epoch 2 onward the validation loss climbs steadily, while validation accuracy barely moves, staying around 0.53-0.54. With early stopping the run would have halted after the first epoch, where Test Accuracy is 0.85, which is quite good.

But why are the validation and training accuracies so low?

The loss curve, one point per 64 sentences, is below:
[Figure 4: loss per 64 sentences]
To rule out a fluke, train again:

...
Epoch: 0 | 85000 | Time: 6m 57s | Average Loss:27.515430450439453
loss_valuate:0.8839113093913431
	Validation Accuracy:0.5336597307221542 Test Accuracy:0.8000912131346914
...
Epoch: 1 | 85000 | Time: 14m 20s | Average Loss:25.438941955566406
loss_valuate:0.9057566658784911
	Validation Accuracy:0.5499054189384667 Test Accuracy:0.8399209486166008
...
Epoch: 2 | 85000 | Time: 21m 43s | Average Loss:23.79216194152832
loss_valuate:0.9505235861522199
	Validation Accuracy:0.5367753421608991 Test Accuracy:0.8511705685618729
...

The validation split may simply not be a good one.

1.3 Full code

import jieba
import torch
import torch.nn as nn
from gensim.models import KeyedVectors
import matplotlib.pyplot as plt
import time
import random
# import torch.nn.functional as F

word_vec = KeyedVectors.load('/mnt/Data1/ysc/TF-IDF/vectors.kv')

path_train = '/mnt/Data1/ysc/Data_Small.txt'
path_valuate = '/mnt/Data1/ysc/dmsc_v2_small.txt'
path_test = '/mnt/Data1/ysc/Chinese review datasets/test.txt'

train_dataset = []
train_label = []
valuate_dataset = []
valuate_label = []
test_dataset = []
test_label = []

# pos_cnt = 0
# neg_cnt = 0
with open(path_train, 'r', encoding='utf-8') as file:
    for line in file.readlines():
        if random.randint(1,100)>15:continue
        if line[-4:-1] == 'POS':
            train_label.append(torch.tensor([1]))
            # pos_cnt += 1
        elif line[-4:-1] == 'NEG':
            train_label.append(torch.tensor([0]))
            # neg_cnt += 1
        elif line[-4:-1] == 'ORM':
            continue
        train_dataset.append((' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')))
print('训练集句子总数:{}'.format(len(train_dataset)))
# print('POS:{} NEG:{}'.format(pos_cnt, neg_cnt))

# pos_cnt = 0
# neg_cnt = 0
with open(path_valuate, 'r', encoding='utf-8') as file:
    for line in file.readlines():
        if line[-4:-1] == 'POS':
            if random.randint(0,1)==0:continue
            valuate_label.append(torch.tensor([1]))
            # pos_cnt += 1
        elif line[-4:-1] == 'NEG':
            valuate_label.append(torch.tensor([0]))
            # neg_cnt += 1
        elif line[-4:-1] == 'ORM':
            continue
        valuate_dataset.append((' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')))
print('验证集句子总数:{}'.format(len(valuate_dataset)))
# print('POS:{} NEG:{}'.format(pos_cnt, neg_cnt))

# pos_cnt = 0
# neg_cnt = 0
with open(path_test, 'r', encoding='utf-8') as file:
    for line in file.readlines():
        if line[-4:-1] == 'POS':
            test_label.append(torch.tensor([1]))
            # pos_cnt += 1
        elif line[-4:-1] == 'NEG':
            test_label.append(torch.tensor([0]))
            # neg_cnt += 1
        elif line[-4:-1] == 'ORM':
            continue
        test_dataset.append((' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')))
print('测试集句子总数:{}'.format(len(test_dataset)))
# print('POS:{} NEG:{}'.format(pos_cnt, neg_cnt))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# device = torch.device("cpu")


def embedding(sentence):
    sentence_embedding = []
    for word in sentence:
        try:
            sentence_embedding.append([word_vec[word]])
        except:
            continue
    return sentence_embedding


class Net(nn.Module):
    def __init__(self, hidden_size):
        super(Net, self).__init__()
        self.hidden_size = hidden_size
        # self.lstm = nn.LSTM(word_vec.vector_size, hidden_size, num_layers=20, bidirectional=True)
        self.gru = nn.GRU(word_vec.vector_size, hidden_size, num_layers=2, bidirectional=True, dropout=0.1)

    def forward(self, input, hidden=None):
        embeds = torch.tensor(embedding(input), device=device)
        output = embeds
        output, hidden = self.gru(output, hidden)
        return output, hidden  # seq_len, batch, num_directions * hidden_size

    # def initHidden(self):
    #     h_0 = torch.zeros(40, 1, self.hidden_size, device=device)  # num_layers * num_directions, batch, hidden_size
    #     c_0 = torch.zeros(40, 1, self.hidden_size, device=device)
    #     return (h_0, c_0)


class ClassificationModel(nn.Module):
    def __init__(self, rnn, device: torch.device, hidden_size):
        super().__init__()
        self.rnn = rnn
        self.device = device
        self.hidden_size = hidden_size
        self.fc = nn.Linear(4, 1)       # mix the 4 (layer * direction) hidden slices into 1
        self.emo = nn.Linear(hidden_size, 2)

    def forward(self, input, hidden=None):
        output, hidden = self.rnn(input, hidden)        # hidden.size() = 4 * 1 * 256 (num_layers * num_directions, batch, hidden_size)
        # emo = self.emo(hidden[0].squeeze(1))
        # emo = hidden[0].permute(1,2,0)       # batch*hidden_size*(num_layer*bi)
        emo = hidden.permute(1, 2, 0)
        emo = emo.squeeze(0)
        emo = self.fc(emo)
        emo = self.emo(emo.permute(1,0))
        return  emo


rnn = Net(256)
model = ClassificationModel(rnn, device, 256)


def test():
    with torch.no_grad():
        # model.eval()
        # losses = 0
        cnt = 0
        right = 0
        for i in range(len(test_dataset)):
            try:
                # hidden = rnn.initHidden()
                classification = model(test_dataset[i])
                # loss = criterion(classification, test_label[i].cuda())     # .cuda()
                if classification.data.topk(1)[1].item() == test_label[i].cuda().item(): right += 1        # .cuda()
                # losses += loss
                cnt += 1
            except:
                continue
        # model.train()
        # return losses.item() / cnt, right / cnt
        return right / cnt

def train2():
    with torch.no_grad():
        # model.eval()
        # losses = 0
        cnt = 0
        right = 0
        for i in range(len(train_dataset)):
            try:
                # hidden = rnn.initHidden()
                classification = model(train_dataset[i])
                # loss = criterion(classification, train_label[i].cuda())      # .cuda()
                if classification.data.topk(1)[1].item() == train_label[i].cuda().item(): right += 1     # .cuda()
                # losses += loss
                cnt += 1
            except:
                continue
        # model.train()
        # return losses.item() / cnt, right / cnt
        return right / cnt

def valuate():
    with torch.no_grad():
        # model.eval()
        losses = 0
        cnt = 0
        right = 0
        for i in range(len(valuate_dataset)):
            try:
                # hidden = rnn.initHidden()
                classification = model(valuate_dataset[i])
                loss = criterion(classification, valuate_label[i].cuda())      # .cuda()
                if classification.data.topk(1)[1].item() == valuate_label[i].cuda().item(): right += 1     # .cuda()
                losses += loss
                cnt += 1
            except:
                continue
        # model.train()
        return losses.item() / cnt, right / cnt
        # return right / cnt


def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


loss_plot = []
def train():
    optimizer_model = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate 1e-3
    optimizer_rnn = torch.optim.Adam(rnn.parameters(), lr=1e-3)  # learning rate 1e-3
    losses = 0
    cnt = 0
    max_iter = 2
    max_tmp = 99999999
    if torch.cuda.is_available() == True:
        model.cuda()
    # if True:
        model.train()
        start_time = time.time()
        loss_print = []
        optimizer_model.zero_grad()
        optimizer_rnn.zero_grad()
        for epoch in range(10):
            # for i in range(len(train_dataset)):
            # i = random.randint(0, len(train_dataset)-1)
            L = random.sample(range(0, len(train_dataset)), len(train_dataset))
            for i in L:
                try:
                # if True:
                #     hidden = rnn.initHidden()
                    classification = model(train_dataset[i])
                    loss = criterion(classification, train_label[i].cuda())    # .cuda()
                    losses += loss
                    cnt += 1
                except:
                    # print(dataset[i])
                    # print('?')
                    continue

                if cnt%64==0:
                    loss_plot.append(losses)
                    loss_print.append(losses)
                    losses.backward()
                    optimizer_model.step()
                    optimizer_rnn.step()
                    losses = 0
                    optimizer_model.zero_grad()
                    optimizer_rnn.zero_grad()

                if cnt%5000==0:
                    end_time = time.time()
                    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
                    # print(f'Epoch: {epoch} | {cnt} | Time: {epoch_mins}m {epoch_secs}s | Average Loss:{sum(loss_plot)/len(loss_plot)}')
                    print(f'Epoch: {epoch} | {cnt} | Time: {epoch_mins}m {epoch_secs}s | Average Loss:{sum(loss_print)/len(loss_print)}')
                    loss_print.clear()

            # if cnt%2000==0:
            cnt = 0
            loss_valuate, acc_valuate = valuate()
            print('loss_valuate:{}'.format(loss_valuate))
            print('\tValidation Accuracy:{} Test Accuracy:{}'.format(acc_valuate, test()))
            # if True:
            #     cnt = 0
            #     # end_time = time.time()
            #     # epoch_mins, epoch_secs = epoch_time(start_time, end_time)
            #     # print(f'Epoch: {epoch + 1:02} | {i} | Time: {epoch_mins}m {epoch_secs}s | Loss:{losses}')
            #     # print('\tValidation Accuracy:{} Test Accuracy:{}'.format(valuate(), test()))
            #     loss_valuate, acc_valuate = valuate()
            #     if loss_valuate <= max_tmp:
            #         max_tmp = loss_valuate
            #         print('\tacc_valuate:{}'.format(acc_valuate))
            #         max_iter = 2
            #     else:
            #         max_iter -= 1
            #         if max_iter == 0:
            #             print('Finish training at {} epoch'.format(epoch))
            #             # print('\tTest Accuracy:{}'.format(test()))
            #             return




if __name__ == '__main__':
    criterion = nn.CrossEntropyLoss()
    criterion = criterion.cuda()       # .cuda()
    train()
    loss_valuate, acc_valuate = valuate()
    print('\t Train Accuracy:{}'.format(train2()))
    plt.plot(loss_plot)
    plt.show()

2. LSTM-output

The Linux server copy-paste problem showed up again. Fix: disconnect, restart the rdpclip.exe process from Task Manager, and reconnect.

Padding to MAX_LENGTH

Previously classification used the LSTM's hidden state; now we classify from its output. That first requires MAX_LENGTH, so compute the longest sentence in the test set:

tmp = []
for i in test_dataset:
    tmp.append(len(i))
print(max(tmp))
print(tmp.index(max(tmp)))
35
3201	# 前冲的前风挡与发动机舱盖设计扁平的一线式进气格栅硕大的镀铬标识以及独特的猫眼式前大灯组设计共同构成了动感十足的前脸造型 POS

Sentences longer than 35 tokens are now dropped and shorter ones zero-padded. Open question: the GRU output then has seq_len = MAX_LENGTH, so do the padded positions need to be zeroed back out afterwards?
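
The padding itself is a torch.cat with a zero block, after which the flattened output feeds a single linear layer (condensed from the Net class in section 5; 300 is the word-vector dimension):

packed = torch.cat([embeds, torch.zeros(MAX_LENGTH - embeds.size()[0], 1, 300).cuda()], dim=0)
output, hidden = self.gru(packed, hidden)   # MAX_LENGTH * 1 * (2*256)
output = output.view(1, -1)                 # flatten: 1 * (MAX_LENGTH * 512)
emo = self.fc(output)                       # nn.Linear(2 * hidden_size * MAX_LENGTH, 2)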

The earlier runs trained on 15% of the 579947 sentences, about 86992. Since sentences longer than 35 are now dropped, sample roughly 30% instead, with 90% of that as training set and 10% as validation set. Result:

训练集句子总数:80326
验证集句子总数:6632
测试集句子总数:6582
...
Epoch: 0 | 75000 | Time: 6m 26s | Average Loss:28.276050567626953
loss_valuate:0.42667644462573884
	Validation Accuracy:0.7881176113973931 Test Accuracy:0.8481301307388264
...
Epoch: 1 | 75000 | Time: 13m 43s | Average Loss:25.611026763916016
loss_valuate:0.4267350191345862
	Validation Accuracy:0.787056683843589 Test Accuracy:0.8619641228336881
...
Epoch: 2 | 75000 | Time: 21m 0s | Average Loss:22.443496704101562
loss_valuate:0.433334253455119
	Validation Accuracy:0.7916035162170355 Test Accuracy:0.8400729705077531
...
Epoch: 3 | 75000 | Time: 28m 17s | Average Loss:18.7727108001709
loss_valuate:0.4872552155364315
	Validation Accuracy:0.7990300090936647 Test Accuracy:0.8057160231073275
...
Epoch: 4 | 75000 | Time: 35m 34s | Average Loss:14.302709579467773
loss_valuate:0.5966615194116588
	Validation Accuracy:0.7932706880872992 Test Accuracy:0.7782000608087565
...
Epoch: 5 | 75000 | Time: 43m 27s | Average Loss:11.667118072509766
loss_valuate:0.6506671055622916
	Validation Accuracy:0.7867535616853591 Test Accuracy:0.7625418060200669
...
Epoch: 6 | 75000 | Time: 53m 5s | Average Loss:8.828022956848145
loss_valuate:0.7403560382881176
	Validation Accuracy:0.7885722946347378 Test Accuracy:0.8092125266038309
...
Epoch: 7 | 75000 | Time: 60m 23s | Average Loss:6.954474925994873
loss_valuate:0.8605225867450363
	Validation Accuracy:0.793422249166414 Test Accuracy:0.7911219215567041
...
Epoch: 8 | 75000 | Time: 67m 41s | Average Loss:6.85260009765625
loss_valuate:0.9329092089932556
	Validation Accuracy:0.7791755077296151 Test Accuracy:0.7490118577075099
...
Epoch: 9 | 75000 | Time: 74m 59s | Average Loss:6.265404224395752
loss_valuate:1.0107637227900501
	Validation Accuracy:0.7835707790239467 Test Accuracy:0.7987230161143205
	 Train Accuracy:0.5595580444982594

Average loss per 5000 sentences:
[Figure 5: average loss per 5000 sentences]
Analysis: loss_valuate keeps growing while Train Accuracy:0.5595580444982594 is very low; it looks like overfitting and underfitting at once. Stopping after the first epoch would give a test accuracy of 0.848.

Add learning-rate decay and gradient clipping: every 10000 sentences, check whether the validation loss is the lowest so far; if so, save the model, otherwise decay the learning rate with gamma=0.5. Run for 5 epochs. The schedule is sketched below.
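
A sketch of that schedule, pairing StepLR with clip_grad_norm_ (the full code in section 5 uses the same pattern, there with gamma=0.9):

scheduler = torch.optim.lr_scheduler.StepLR(optimizer_model, 1, gamma=0.5)
...
losses.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.25)    # cap the gradient norm
optimizer_model.step()
...
if loss_valuate <= max_tmp:         # new best validation loss
    max_tmp = loss_valuate
    best_model = model              # keep the best model so far
else:
    scheduler.step()                # otherwise decay the learning rate

The run: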

Epoch: 0 | 10000 | Time: 0m 53s | Average Loss:34.887184143066406
	Validation Accuracy:0.7559055118110236
	Best Test Accuracy:0.7780480389176041
Epoch: 0 | 20000 | Time: 2m 11s | Average Loss:33.639976501464844
	Validation Accuracy:0.7542071946888992
	Best Test Accuracy:0.7581331711766495
Epoch: 0 | 30000 | Time: 3m 28s | Average Loss:31.14021873474121
	Validation Accuracy:0.7642427049559981
	Best Test Accuracy:0.7908178777743995
Epoch: 0 | 50000 | Time: 5m 53s | Average Loss:28.4647274017334
	Validation Accuracy:0.7911069939786939
	Best Test Accuracy:0.8203101246579507
Epoch: 0 | 70000 | Time: 8m 16s | Average Loss:26.075702667236328
	Validation Accuracy:0.7957387679481241
	Best Test Accuracy:0.8409851018546671
Epoch: 0 | 80000 | Time: 9m 34s | Average Loss:27.208580017089844
	Validation Accuracy:0.7958931604137718
	Validation Accuracy:0.8040759610930986
	Test Accuracy:0.8250228032836728
Epoch: 1 | 80000 | Time: 18m 50s | Average Loss:23.543182373046875
	Validation Accuracy:0.797745870001544
	Validation Accuracy:0.7997529720549638
	Test Accuracy:0.8402249923989055
Epoch: 2 | 80000 | Time: 28m 5s | Average Loss:22.02412986755371
	Validation Accuracy:0.7969739076733056
	Validation Accuracy:0.7958931604137718
	Test Accuracy:0.8391608391608392
Epoch: 3 | 80000 | Time: 37m 20s | Average Loss:23.04869842529297
	Validation Accuracy:0.7975914775358962
	Validation Accuracy:0.798209047398487
	Test Accuracy:0.8382487078139252
Epoch: 4 | 80000 | Time: 46m 36s | Average Loss:22.402061462402344
	Validation Accuracy:0.7980546549328392
	Validation Accuracy:0.7999073645206114
	Test Accuracy:0.8371845545758589
	Test Accuracy:0.8380966859227729
	Train Accuracy:0.5482827660557523

[Figure 6: average loss curve]
Next, setting output[embeds.size()[0]:, :, :] = 0 to zero the padded positions does not run; presumably a tensor that still needs gradients cannot be assigned in place like that. Leaving it for now.

3. Attention

The idea is as follows:
[Figure 7: attention model diagram]

[Figure 8: attention model diagram]
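
Condensed from the Attention class in section 5: the final GRU hidden state is broadcast across the time steps, concatenated with each output, scored by a small linear layer, and the resulting softmax weights are applied back onto the outputs:

output, hidden = self.gru(embeds, hidden)                   # seq_len * 1 * 512, 4 * 1 * 256
hidden = hidden.view(1, -1).repeat(1, output.size()[0], 1)  # 1 * seq_len * 1024
output = output.permute(1, 0, 2)                            # 1 * seq_len * 512
combine = torch.tanh(self.fc1(torch.cat((output, hidden), dim=2)))  # 1 * seq_len * 8
attention = F.softmax(torch.sum(combine, dim=2), dim=1)     # 1 * seq_len weights
a_apply = attention.unsqueeze(1).bmm(output)                # 1 * 1 * 512 weighted sum
emo = self.fc2(a_apply.squeeze(1))                          # 1 * 2 class logits

Training log: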

Epoch: 0 | 10000 | Time: 0m 20s | Average Loss:32.250694274902344
	Validation Accuracy:0.7360296387774005
	Best Test Accuracy:0.8006993006993007
Epoch: 0 | 20000 | Time: 0m 53s | Average Loss:29.19270896911621
	Validation Accuracy:0.7790984871874035
	Best Test Accuracy:0.828823350562481
Epoch: 0 | 30000 | Time: 1m 26s | Average Loss:28.382543563842773
	Validation Accuracy:0.7775548008644644
	Best Test Accuracy:0.8508665247795683
Epoch: 0 | 50000 | Time: 2m 26s | Average Loss:27.309593200683594
	Validation Accuracy:0.7757023772769374
	Best Test Accuracy:0.8376406202493158
Epoch: 0 | 60000 | Time: 3m 0s | Average Loss:26.23936653137207
	Validation Accuracy:0.7966965112689102
	Best Test Accuracy:0.830191547582852
Epoch: 0 | 70000 | Time: 3m 32s | Average Loss:26.756301879882812
	Validation Accuracy:0.7934547699907378
	Best Test Accuracy:0.8469139556096078
Epoch: 1 | 20000 | Time: 5m 14s | Average Loss:23.47141456604004
	Validation Accuracy:0.7957702994751467
	Best Test Accuracy:0.8435694740042566
...
	Validation Accuracy:0.7933004013584439
	Test Accuracy:0.8543630282760718
	Train Accuracy:0.5571870170015456

Run the program once more, this time saving the model:

Epoch: 0 | 10000 | Time: 0m 20s | Average Loss:31.835342407226562
	Validation Accuracy:0.7151947850380258
	Best Test Accuracy:0.8257829127394345
Epoch: 0 | 20000 | Time: 0m 54s | Average Loss:29.066537857055664
	Validation Accuracy:0.773552692844948
	Best Test Accuracy:0.823198540589845
Epoch: 0 | 30000 | Time: 1m 27s | Average Loss:28.290781021118164
	Validation Accuracy:0.7839515753530963
	Best Test Accuracy:0.8370325326847066
Epoch: 0 | 50000 | Time: 2m 28s | Average Loss:26.492918014526367
	Validation Accuracy:0.7963681514822287
	Best Test Accuracy:0.8155974460322286
Epoch: 0 | 60000 | Time: 3m 2s | Average Loss:27.036176681518555
	Validation Accuracy:0.7971441874902996
	Best Test Accuracy:0.8356643356643356
Epoch: 0 | 70000 | Time: 3m 35s | Average Loss:26.933622360229492
	Validation Accuracy:0.798541052304827
	Best Test Accuracy:0.842809364548495
	Validation Accuracy:0.8019556107403384
	Best Test Accuracy:0.8504104591061112
Test Accuracy:0.8242626938279112
Train Accuracy:0.5502637294446168

[Figure 9: average loss curve]
Loading the saved model then raised an error:

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Probably a cuDNN version mismatch. Workaround:

torch.backends.cudnn.enabled = False

Chinese font rendering also failed:

findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
RuntimeWarning: Glyph 36895 missing from current font.   font.set_text(s, 0, flags=flags)

In a terminal Python session, check the matplotlib config path to see whether SimHei is present:

import matplotlib
print(matplotlib.matplotlib_fname())
/home/ysc/anaconda3/lib/python3.8/site-packages/matplotlib/mpl-data/matplotlibrc

It was not found in that directory; on Windows the font lives under C:\Windows\Fonts. See the two linked posts for the fix.

An example:

from matplotlib import font_manager as fm, rcParams
import matplotlib.pyplot as plt


plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels
plt.rcParams['axes.unicode_minus'] = False

plt.plot([1,2,3])
plt.xlabel('服务器')
plt.show()

Chinese now displays correctly.

Adding annotate text to the figures is covered in the linked post. A test example, ‘但是通话质量真的很好因为我只用一张的卡所以暂时没发现那个用大卡爆音的问题’:
[Figure 10: attention heatmap for the sentence above]
‘电话簿容量大’: this one is misclassified; the focus should be on ‘大’:
[Figure 11: attention heatmap]
‘功能比较全’:
[Figure 12: attention heatmap]

‘虽然沃尔沃拥有如此惊人的强大动力’:
[Figure 13: attention heatmap]
‘可以说拥有出色的影音效果’:
[Figure 14: attention heatmap]

‘充电确实很慢’:
[Figure 15: attention heatmap]
‘但是散热做的相当不错’:
[Figure 16: attention heatmap]
‘照片很清晰’:
[Figure 17: attention heatmap]
‘这部奇瑞风云最大的特点是造型漂亮’:
[Figure 18: attention heatmap]
‘的对焦准确性相当好’: this one is also misclassified; the focus should be on ‘相当好’, not ‘对焦’:
[Figure 19: attention heatmap]
A look at some of the misclassified sentences:

亮度高	# hard to call
它的听筒设计较小
速度及慢		# typo: ‘及’ for ‘极’
为什么积于一个这么好功能的电话不支持功能和呢		# unintelligible
但由于的体积较小	# actually POS
总体来讲的声音效果算是中规中矩		# actually NEG
高感太夸张了		# actually NEG
不过也同意其他网友说的画质不是很油润鄙人在室内用的多灯光杂且色温很乱用光高手若觉得它很油润请你板砖啊		# actually NEG
哎系统虽好	# actually POS
...

4. Transformer

Check the maximum sentence length in the test set again:

A sentence with len() = 689:
[Figure 20: a sentence with len() = 689]
The maximum length is 1795:
[Figure 21: the longest sentence, len() = 1795]
So PositionalEncoding's maximum length could be set to 2000? (Sentences longer than 35 are filtered out earlier anyway.)

Epoch: 0 | 10000 | Time: 0m 26s | Average Loss:42.63115692138672
	Validation Accuracy:0.59113750571037
	Best Test Accuracy:0.7145028884159319
Epoch: 0 | 30000 | Time: 1m 43s | Average Loss:42.416709899902344
	Validation Accuracy:0.6100197959494442
	Best Test Accuracy:0.7178473700212831
Epoch: 0 | 40000 | Time: 2m 25s | Average Loss:42.32552719116211
	Validation Accuracy:0.650373077508756
	Best Test Accuracy:0.7137427789601702
Epoch: 0 | 70000 | Time: 4m 17s | Average Loss:40.882720947265625
	Validation Accuracy:0.6401705497182885
	Best Test Accuracy:0.7339616904834296
Epoch: 1 | 10000 | Time: 5m 44s | Average Loss:40.04580307006836
	Validation Accuracy:0.6602710522308513
	Best Test Accuracy:0.7386743691091517
Epoch: 1 | 40000 | Time: 7m 36s | Average Loss:39.11258316040039
	Validation Accuracy:0.6681894320085275
	Best Test Accuracy:0.7490118577075099
Epoch: 1 | 70000 | Time: 9m 27s | Average Loss:39.2961311340332
	Validation Accuracy:0.6738236637734125
	Best Test Accuracy:0.7436910915171785
Epoch: 2 | 80000 | Time: 14m 51s | Average Loss:39.47162628173828
	Validation Accuracy:0.6750418760469011
	Best Test Accuracy:0.7430830039525692
	Validation Accuracy:0.6761078117862037
	Best Test Accuracy:0.7383703253268471
	Test Accuracy:0.7415627850410459
	Train Accuracy:0.5259473443920256

[Figure 22: average loss curve]
Test accuracy turns out mediocre, so the seq_len * 1 * 300 tensor from transformer_encoder evidently cannot be reduced by simply summing over the first dimension, softmaxing over the second, and passing the result through tanh. Try the earlier padding approach and the Attention approach instead, one at a time.

Padding

Epoch: 0 | 10000 | Time: 0m 25s | Average Loss:249.44175720214844
	Validation Accuracy:0.6426361386138614
	Best Test Accuracy:0.5901489814533293
Epoch: 0 | 20000 | Time: 1m 7s | Average Loss:143.79180908203125
	Validation Accuracy:0.6208230198019802
	Best Test Accuracy:0.6351474612344178
Epoch: 0 | 30000 | Time: 1m 50s | Average Loss:98.39894104003906
	Validation Accuracy:0.6717202970297029
	Best Test Accuracy:0.5982061416844026
Epoch: 0 | 40000 | Time: 2m 43s | Average Loss:82.15373992919922
	Validation Accuracy:0.6709467821782178
	Best Test Accuracy:0.6644876862268166
Epoch: 0 | 60000 | Time: 3m 59s | Average Loss:49.74187088012695
	Validation Accuracy:0.6477413366336634
	Best Test Accuracy:0.7298570994223168
Epoch: 0 | 70000 | Time: 4m 40s | Average Loss:46.70456314086914
	Validation Accuracy:0.6842512376237624
	Best Test Accuracy:0.7444512009729402
Epoch: 0 | 80000 | Time: 5m 22s | Average Loss:43.52018737792969
	Validation Accuracy:0.6780631188118812
	Best Test Accuracy:0.7766798418972332
	Validation Accuracy:0.6913675742574258
	Best Test Accuracy:0.7684706597750076
Epoch: 1 | 10000 | Time: 6m 22s | Average Loss:39.705623626708984
	Validation Accuracy:0.7094678217821783
	Best Test Accuracy:0.7292490118577075
Epoch: 1 | 20000 | Time: 7m 5s | Average Loss:39.43376159667969
	Validation Accuracy:0.7148824257425742
	Best Test Accuracy:0.7713590757069018
Epoch: 1 | 40000 | Time: 8m 21s | Average Loss:34.8393440246582
	Validation Accuracy:0.7332920792079208
	Best Test Accuracy:0.7858011553663727
Epoch: 1 | 50000 | Time: 9m 3s | Average Loss:34.54415512084961
	Validation Accuracy:0.7504641089108911
	Best Test Accuracy:0.7342657342657343
	Validation Accuracy:0.7558787128712872
	Best Test Accuracy:0.8055640012161751
Test Accuracy:0.8142292490118577
Train Accuracy:0.5190969537652699

[Figure 23: average loss curve]
Attention

Attention cannot really be used here, though: the transformer encoder returns only an output, with no hidden state to score against…

5. Full code



import torch
import torch.nn as nn
import torch.nn.functional as F

import jieba
from gensim.models import KeyedVectors
import matplotlib.pyplot as plt
import time
import random

word_vec = KeyedVectors.load('vectors.kv')

path_data = '/mnt/Data1/ysc/Data_Small.txt'
path_test = '/mnt/Data1/ysc/Chinese review datasets/test.txt'

MAX_LENGTH = 35

train_dataset = []
train_label = []
valuate_dataset = []
valuate_label = []
test_dataset = []
test_label = []


with open(path_data, 'r', encoding='utf-8') as file:
    for line in file.readlines():
        if random.randint(1,1000)>1:continue        # 100/30
        tmp = ' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')
        if len(tmp) > 35: continue
        if random.randint(1,100)>10:     # 10
            if line[-4:-1] == 'POS':
                train_label.append(torch.tensor([1]))
            elif line[-4:-1] == 'NEG':
                train_label.append(torch.tensor([0]))
            elif line[-4:-1] == 'ORM':
                continue
            train_dataset.append(tmp)
        else:
            if line[-4:-1] == 'POS':
                if random.randint(0, 1) == 0: continue
                valuate_label.append(torch.tensor([1]))
            elif line[-4:-1] == 'NEG':
                valuate_label.append(torch.tensor([0]))
            elif line[-4:-1] == 'ORM':
                continue
            valuate_dataset.append(tmp)

print('训练集句子总数:{}'.format(len(train_dataset)))
print('验证集句子总数:{}'.format(len(valuate_dataset)))


with open(path_test, 'r', encoding='utf-8') as file:
    for line in file.readlines():
        # if random.randint(1, 100) > 1: continue
        if line[-4:-1] == 'POS':
            test_label.append(torch.tensor([1]))
        elif line[-4:-1] == 'NEG':
            test_label.append(torch.tensor([0]))
        elif line[-4:-1] == 'ORM':
            continue
        test_dataset.append((' '.join(jieba.cut(line[:-5].strip('\n').strip(' '))).split(' ')))

print('测试集句子总数:{}'.format(len(test_dataset)))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)


def embedding(sentence):
    sentence_embedding = []
    for word in sentence:
        try:
            sentence_embedding.append([word_vec[word]])
        except:
            continue
    return sentence_embedding


class Net(nn.Module):
    def __init__(self, hidden_size):
        super(Net, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(word_vec.vector_size, hidden_size, num_layers=2, bidirectional=True, dropout=0.1)
        self.fc = nn.Linear(2 * hidden_size * MAX_LENGTH, 2)

    def forward(self, input, hidden=None):
        embeds = torch.tensor(embedding(input), device=device)
        if len(embeds) == 0: return
        packed = torch.cat([embeds, torch.zeros(MAX_LENGTH - embeds.size()[0], 1, 300).cuda()], dim=0)
        output, hidden = self.gru(packed, hidden)       # output.size() = seq_len * 1 * (2*256), hidden.size() = 4 * 1 * 256

        # output[embeds .size()[0]:, :, :] = 0

        output = output.view(1,-1)
        emo = self.fc(output)
        return hidden, emo  # seq_len, batch, num_directions * hidden_size


class Attention(nn.Module):
    def __init__(self, hidden_size):
        super(Attention, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(word_vec.vector_size, hidden_size, num_layers=2, bidirectional=True, dropout=0.1)
        self.fc1 = nn.Linear((2 *2 * hidden_size + 2 * hidden_size), 8)     # Attention dim = 8
        self.fc2 = nn.Linear(2 * hidden_size, 2)

    def forward(self, input, hidden=None):
        embeds = torch.tensor(embedding(input), device=device)
        if len(embeds) == 0: return
        # packed = torch.cat([embeds, torch.zeros(MAX_LENGTH - embeds.size()[0], 1, 300).cuda()], dim=0)
        output, hidden = self.gru(embeds, hidden)       # output.size() = seq_len * 1 * (2*256), hidden.size() = 4 * 1 * 256
        hidden = hidden.view(1, -1)
        hidden = hidden.repeat(1, output.size()[0], 1)      # 1 * seq_len * 1024
        output = output.permute(1, 0, 2)
        combine = torch.cat((output, hidden), dim=2)        # 1 * seq_len * 1536
        combine = self.fc1(combine)     # 1 * seq_len * 8
        combine = torch.tanh(combine)
        combine = torch.sum(combine, dim=2)     # 1 * seq_len
        attention = F.softmax(combine, dim=1)

        a = attention.unsqueeze(1)
        a_apply = a.bmm(output)
        emo = self.fc2(a_apply.squeeze(1))

        return attention, emo  # seq_len, batch, num_directions * hidden_size

import math
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):  # ninp, dropout
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)  # 5000 * 200
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)  # [[0],[1],...[4999]] 5000 * 1
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(
            10000.0) / d_model))  # e ^([0, 2,...,198] * -ln(10000)(-9.210340371976184) / 200) [1,0.912,...,(1.0965e-04)]
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)  # 5000 * 1 * 200, 最长5000的序列,每个词由1 * 200的矩阵代表着不同的时间
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + self.pe[:x.size()[0], :]  # torch.Size([35, 1, 200])
        return self.dropout(x)

from torch.nn import TransformerEncoder, TransformerEncoderLayer
class TransformerModel(nn.Module):
    def __init__(self, hidden_size, dropout=0.1):
        super(TransformerModel, self).__init__()
        self.hidden_size = hidden_size
        self.pos_encoder = PositionalEncoding(word_vec.vector_size, dropout)
        encoder_layers = TransformerEncoderLayer(word_vec.vector_size, 2, self.hidden_size, dropout)     # head = 2, dim = 256
        self.transformer_encoder = TransformerEncoder(encoder_layers, 2)        # layer = 2
        # self.decoder = nn.Linear(word_vec.vector_size, 2)
        self.decoder = nn.Linear(MAX_LENGTH * word_vec.vector_size, 2)
        self.init_weights()


    def init_weights(self):
        initrange = 0.5
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, input):
        embeds = torch.tensor(embedding(input), device=device)
        if len(embeds) == 0: return
        src = embeds * math.sqrt(word_vec.vector_size)
        src = self.pos_encoder(src)     # seq_len * 1 * 300
        output = self.transformer_encoder(src)      # seq_len * 1 * 300
        # The effect is not good
        # output = torch.sum(output, dim=0)       # 1 * 300
        # output = F.softmax(output, dim=1)
        # output = torch.tanh(output)
        # emo = self.decoder(output)

        # padding zero
        output = torch.cat([output, torch.zeros(MAX_LENGTH - output.size()[0], 1, 300).cuda()], dim=0)
        output = output.view(1, -1)
        emo = self.decoder(output)

        # Attention

        return output, emo



# model = Net(256)
model = Attention(256)
# model = TransformerModel(256)


def test(best_model):
    with torch.no_grad():
        cnt = 0
        right = 0
        for i in range(len(test_dataset)):
            # if True:
            try:
                _, classification = best_model(test_dataset[i])
                if classification.data.topk(1)[1].item() == test_label[i].cuda().item(): right += 1        # .cuda()
                else:
                    print(''.join(test_dataset[i]))
                cnt += 1
            except:
                continue
        return right / cnt

def train2(best_model):
    with torch.no_grad():
        best_model.eval()
        cnt = 0
        right = 0
        for i in range(len(train_dataset)):
            try:
                _, classification = best_model(train_dataset[i])
                if classification.data.topk(1)[1].item() == train_label[i].cuda().item(): right += 1     # .cuda()
                cnt += 1
            except:
                continue
        return right / cnt

def valuate():
    with torch.no_grad():
        losses = 0
        cnt = 0
        right = 0
        for i in range(len(valuate_dataset)):
            try:
                _, classification = model(valuate_dataset[i])
                loss = criterion(classification, valuate_label[i].cuda())      # .cuda()
                if classification.data.topk(1)[1].item() == valuate_label[i].cuda().item(): right += 1     # .cuda()
                losses += loss
                cnt += 1
            except:
                continue
        return losses.item() / cnt, right / cnt
        # return right / cnt


def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


import matplotlib.ticker as ticker
def showAttention(input_sentence, output_words, attentions, title):
    plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels
    plt.rcParams['axes.unicode_minus'] = False

    # set up the figure with a colorbar
    fig = plt.figure()
    ax = fig.add_subplot(111)
    cax = ax.matshow(attentions.numpy())       # colormap, cmap='bone'
    fig.colorbar(cax)
    # plt.title(title,verticalalignment='bottom')
    # set up the axes
    ax.set_xticklabels([''] + input_sentence, rotation=90)
    ax.set_yticklabels([''] + output_words)

    # Show label at every tick
    ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))

    plt.show()




loss_plot = []
def train():
    optimizer_model = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate 1e-3
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer_model, 1, gamma=0.9)
    losses = 0
    cnt = 0
    max_iter = 2
    max_tmp = 99999999
    if torch.cuda.is_available() == True:
        model.cuda()
        model.train()
        start_time = time.time()
        loss_print = []
        optimizer_model.zero_grad()
        for epoch in range(5):
            L = random.sample(range(0, len(train_dataset)), len(train_dataset))
            for i in L:
                try:
                # if True:
                    _, classification = model(train_dataset[i])
                    loss = criterion(classification, train_label[i].cuda())    # .cuda()
                    losses += loss
                    cnt += 1
                except:
                #     print('?')
                    continue

                if cnt % 64 == 0:
                    loss_print.append(losses)
                    losses.backward()
                    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.25)  # cap the gradient norm at max_norm=0.25
                    optimizer_model.step()
                    losses = 0
                    optimizer_model.zero_grad()

                if cnt % 5000 == 0:
                    end_time = time.time()
                    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
                    average_loss = sum(loss_print)/len(loss_print)
                    loss_plot.append(average_loss)
                    print(f'Epoch: {epoch} | {cnt} | Time: {epoch_mins}m {epoch_secs}s | Average Loss:{average_loss}')
                    loss_print.clear()

                if cnt % 10000 == 0:
                    loss_valuate, acc_valuate = valuate()
                    # print('loss_valuate:{}'.format(loss_valuate))
                    print('\tValidation Accuracy:{}'.format(acc_valuate))
                    if loss_valuate <= max_tmp:
                        max_tmp = loss_valuate
                        best_model = model
                        print('\tBest Test Accuracy:{}'.format(test(best_model)))
                    else:
                        scheduler.step()


            loss_valuate, acc_valuate = valuate()
            # print('loss_valuate:{}'.format(loss_valuate))
            print('\tValidation Accuracy:{}'.format(acc_valuate))
            if loss_valuate <= max_tmp:
                max_tmp = loss_valuate
                best_model = model
                print('\tBest Test Accuracy:{}'.format(test(best_model)))
            else:
                scheduler.step()
            cnt = 0

        print('Test Accuracy:{}'.format(test(best_model)))
        print('Train Accuracy:{}'.format(train2(best_model)))
        torch.save(model.state_dict(),'transformer_and_attention.pth')


def filter(sentence_list):
    new_list = []
    for word in sentence_list:
        if word in word_vec.vocab:
            new_list.append(word)
    return new_list

if __name__ == '__main__':
    # criterion = nn.CrossEntropyLoss()
    # criterion = criterion.cuda()       # .cuda()
    # train()
    # plt.plot(loss_plot)
    # plt.show()
    torch.backends.cudnn.enabled = False
    model.load_state_dict(torch.load('transformer_and_attention.pth'))

    print('Test Accuracy:{}'.format(test(model.cuda())))

    # str = '但是通话质量真的很好因为我只用一张的卡所以暂时没发现那个用大卡爆音的问题'
    # test_dataset = [' '.join(jieba.cut(str)).split(' ')]
    # test_label = [torch.tensor([1])]
    model = model.cuda()
    with torch.no_grad():
        for i in range(5):
            index = random.randint(0, len(test_dataset) - 1)
            # index = 0
            attention, classification = model(test_dataset[index])
            if test_label[index]==0:true_classification='NEG'
            elif test_label[index]==1:true_classification='POS'
            if classification.data.topk(1)[1].item() == test_label[index].cuda().item():
                true_classification = true_classification + '√'
            else:
                true_classification = true_classification + '×'

            print(''.join(test_dataset[index]))
            showAttention(filter(test_dataset[index]), [true_classification], attention.cpu(), ''.join(test_dataset[index]))

6. Summary

This ran through the main RNN approaches to text classification: LSTM with hidden state and with output, Attention, and a Transformer Encoder. The Attention model performed best, reaching about 85% test accuracy, and its weights visualize nicely. Next up: BERT.
