Table of Contents
1. Case Introduction
2. Code Implementation
Code 1: Create the inputs for the model
Code 2: Build the model
Code 3: Evaluate model performance
To make the seq2seq model use case easy to follow, we use the text of the annotated corpus from a research paper.
Seq2seq paper: "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation". The annotated corpus comes from the paper "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports".
Word-vector model: we use the open-source skip-gram model provided by NLPLab (http://evexdb.org/pmresources/vec-spacemodels/wikipedia-pubmed-and-PMC-w2v.bin), trained on all PubMed abstracts and PMC full texts (4.08 million distinct words). The output of the skip-gram model is a set of 200-dimensional word vectors.
Corpus: the ADE corpus used in the paper consists of three files, DRUG-AE.rel, DRUG-DOSE.rel, and DRUG-NEG.txt. We will use DRUG-AE.rel, which provides the relations between drugs and adverse effects. The format of DRUG-AE.rel is as follows, with columns separated by the pipe delimiter "|" (a minimal parsing sketch follows the list):
Column 1: PubMed-ID
Column 2: Sentence
Column 3: Adverse effect
Column 4: Start offset of the adverse effect at the "document level"
Column 5: End offset of the adverse effect at the "document level"
Column 6: Drug
Column 7: Start offset of the drug at the "document level"
Column 8: End offset of the drug at the "document level"
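To make the format concrete, here is a minimal parsing sketch for one such pipe-delimited line; the sample values below are illustrative, not quoted from the file:

sample_line = ('10030778|Intravenous azithromycin-induced ototoxicity.'
               '|ototoxicity|43|54|azithromycin|22|34')

tokens = sample_line.split('|')
pubmed_id, sentence = tokens[0], tokens[1]
ae_text, ae_begin, ae_end = tokens[2], int(tokens[3]), int(tokens[4])
drug, drug_begin, drug_end = tokens[5], int(tokens[6]), int(tokens[7])

print(sentence)                   # the raw sentence text
print(ae_text, ae_begin, ae_end)  # offsets are document-level, not sentence-level
print(drug, drug_begin, drug_end)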
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Author: Hou-Hou

"""
"Deep Learning for Natural Language Processing: Creating Neural Networks with Python"
"""

import re
import numpy as np
import nltk
from string import punctuation  # the set of (English) punctuation characters
from gensim.models import KeyedVectors
import keras
import tensorflow
import copy
from keras.preprocessing.sequence import pad_sequences

# Check versions
print(keras.__version__)       # 2.2.4
print(tensorflow.__version__)  # 1.7.0
def create_input():
    # Load the pretrained word vectors
    EMBEDDING_FILE = 'wikipedia-pubmed-and-PMC-w2v.bin'
    word2vec = KeyedVectors.load_word2vec_format(EMBEDDING_FILE, binary=True)
    print('Found %s word vectors of word2vec' % len(word2vec.vocab))

    TEXT_FILE = 'DRUG-AE.rel'

    # Create the inputs for the model
    f = open(TEXT_FILE, 'r')

    input_data_ae = []
    op_labels_ae = []
    sentences = []

    for each_line in f.readlines():
        sent_list = np.zeros([0, 200])
        labels = np.zeros([0, 3])
        tokens = each_line.split('|')
        sent = tokens[1]
        if sent in sentences:  # skip duplicate sentences
            continue
        sentences.append(sent)

        begin_offset = int(tokens[3])
        end_offset = int(tokens[4])
        mid_offset = range(begin_offset + 1, end_offset)
        word_tokens = nltk.word_tokenize(sent)

        offset = 0
        for each_token in word_tokens:
            offset = sent.find(each_token, offset)
            offset1 = copy.deepcopy(offset)  # character offset where this token starts
            offset += len(each_token)
            if each_token in punctuation or re.search(r'\d', each_token):
                continue  # skip tokens that are punctuation marks or contain digits
            each_token = each_token.lower()
            each_token = re.sub(r"[^A-Za-z\-]+", "", each_token)
            if each_token in word2vec.vocab:
                # word2vec.vocab holds every word in the pretrained vocabulary
                new_word = word2vec.word_vec(each_token)  # convert in-vocabulary words to vectors
                if offset1 == begin_offset:
                    # token starts the adverse-effect span
                    sent_list = np.append(sent_list, np.array([new_word]), axis=0)
                    labels = np.append(labels, np.array([[0, 0, 1]]), axis=0)
                elif offset == end_offset or offset in mid_offset:
                    # token is inside or ends the adverse-effect span
                    sent_list = np.append(sent_list, np.array([new_word]), axis=0)
                    labels = np.append(labels, np.array([[0, 1, 0]]), axis=0)
                else:
                    # token is outside the adverse-effect span
                    sent_list = np.append(sent_list, np.array([new_word]), axis=0)
                    labels = np.append(labels, np.array([[1, 0, 0]]), axis=0)

        input_data_ae.append(sent_list)
        op_labels_ae.append(labels)
    f.close()

    input_data_ae = np.array(input_data_ae)
    op_labels_ae = np.array(op_labels_ae)

    # Pad the inputs to a maximum length of 30 (see the padding sketch after this function)
    input_data_ae = pad_sequences(input_data_ae, maxlen=30, dtype='float64', padding='post')
    op_labels_ae = pad_sequences(op_labels_ae, maxlen=30, dtype='float64', padding='post')
    return input_data_ae, op_labels_ae
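Note what pad_sequences does here: each sample is a (num_tokens, 200) array, so padding and truncation happen along the token axis. A tiny sketch with made-up dimensions (5 stands in for the real 200):

import numpy as np
from keras.preprocessing.sequence import pad_sequences

# Two "sentences" of 2 and 4 token vectors, each vector 5-dimensional.
seqs = [np.ones((2, 5)), np.ones((4, 5))]
padded = pad_sequences(seqs, maxlen=3, dtype='float64', padding='post')
print(padded.shape)  # (2, 3, 5): every sample is now exactly 3 "tokens" long
print(padded[0][2])  # all zeros: the short sample was padded at the end
print(padded[1])     # the long sample was truncated (from the front, by default)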
(1) copy.copy() performs a shallow copy: it copies the outermost object itself, while the elements inside are copied only as references. In other words, the object is duplicated, but any objects it references are not.
copy.deepcopy() performs a deep copy: both the outer object and the elements inside are copied as objects rather than references. In other words, the object is duplicated, and every object it references is duplicated as well.
(2) The find() method checks whether a string contains the substring str; if the beg (start) and end (end) arguments are given, the search is restricted to that range. It returns the starting index if the substring is found, and -1 otherwise.
(3) re.sub(pattern, repl, string, count=0, flags=0)
pattern: the regular-expression pattern string;
repl: the replacement (either a string or a function);
string: the string to be processed, in which substitutions are made;
count: the maximum number of substitutions; the default of 0 replaces all matches.
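A short self-contained demo of these three behaviors (the sample strings are made up for illustration):

import copy
import re

# (1) Shallow vs. deep copy: mutating a nested list through a shallow
# copy is visible in the original; through a deep copy it is not.
nested = [[1, 2], [3, 4]]
shallow = copy.copy(nested)
deep = copy.deepcopy(nested)
shallow[0].append(99)
print(nested[0])  # [1, 2, 99] -- the inner list is shared with the shallow copy
print(deep[0])    # [1, 2]     -- the inner list was copied

# (2) str.find() returns the first match index at or after the start
# position, or -1 when the substring is absent.
sent = "headache after aspirin, severe headache"
print(sent.find("headache"))     # 0
print(sent.find("headache", 1))  # 31
print(sent.find("nausea"))       # -1

# (3) re.sub() strips everything except letters and hyphens, mirroring
# the token cleanup step in create_input().
print(re.sub(r"[^A-Za-z\-]+", "", "co-trimoxazole2,"))  # 'co-trimoxazole'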
Next, we define the model architecture. We use a single hidden layer of a bidirectional LSTM network with 300 hidden units. In addition, we use a time-distributed dense layer, which applies a fully connected dense layer at every time step, producing one output per time step.
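As a quick sanity check on the shapes before the full script (a minimal sketch, assuming the same Keras version as above): with merge_mode='concat' the wrapper concatenates the forward and backward outputs, so the 300 LSTM units become 600 features per time step, and the final TimeDistributed(Dense(3)) emits a 3-way distribution per token. return_sequences=True is what makes the LSTM emit all 30 time steps:

from keras.layers import Input, LSTM, Bidirectional, TimeDistributed, Dense
from keras.models import Model

xin = Input(batch_shape=(None, 30, 200))
h = Bidirectional(LSTM(300, return_sequences=True), merge_mode='concat')(xin)  # (None, 30, 600)
out = TimeDistributed(Dense(3, activation='softmax'))(h)                       # (None, 30, 3)
Model(xin, out).summary()  # the summary prints the per-layer output shapes above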
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Author: Hou-Hou

import datetime
import os

import numpy as np
from keras.layers import Dense, Input, LSTM, Dropout, Bidirectional, TimeDistributed
from keras.models import Model

from create_input import *

input_data_ae, op_labels_ae = create_input()

# Create the training and validation sets: train = 4000, validation = 271
x_train = input_data_ae[:4000]
x_test = input_data_ae[4000:]
y_train = op_labels_ae[:4000]  # the labels, not the inputs
y_test = op_labels_ae[4000:]

batch = 1

# Build the network
def build_model():
    xin = Input(batch_shape=(None, 30, 200), dtype='float')
    # Bidirectional RNN wrapper; merge_mode sets how the forward and backward
    # outputs are combined. return_sequences=True makes the LSTM emit an
    # output at every time step, as TimeDistributed requires.
    seq = Bidirectional(LSTM(300, return_sequences=True), merge_mode='concat')(xin)
    mlp1 = Dropout(0.2)(seq)
    mlp2 = TimeDistributed(Dense(60, activation='softmax'))(mlp1)  # applies the dense layer at every time step
    mlp3 = Dropout(0.2)(mlp2)
    mlp4 = TimeDistributed(Dense(3, activation='softmax'))(mlp3)
    model = Model(inputs=xin, outputs=mlp4)
    model.compile(optimizer='Adam', loss='categorical_crossentropy')

    # Train
    model.fit(x_train, y_train,
              batch_size=batch,
              epochs=50,
              validation_data=(x_test, y_test))

    save_fname = os.path.join('./', '%s-e%s-3.h5' % (datetime.datetime.now().strftime('%Y%m%d-%H%M%S'), str(50)))
    model.save(save_fname)
    return model

model = build_model()

# Predict
val_pred = model.predict(x_test, batch_size=batch)
labels = []
for i in range(len(val_pred)):
    b = np.zeros_like(val_pred[i])
    b[np.arange(len(val_pred[i])), val_pred[i].argmax(1)] = 1  # one-hot: keep only the argmax class per time step
    labels.append(b)
print('val_pred=', val_pred.shape)
Training results: (training log screenshot omitted)
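Before reading the scores below, it helps to pin down what average='weighted' returns on one-hot label arrays like ours: the per-class F1 scores are averaged with each class weighted by its support. A toy check with made-up arrays:

import numpy as np
from sklearn.metrics import f1_score

# One "sentence" of 4 tokens, each labeled with one of 3 one-hot classes.
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])

# Per-class F1: class 0 -> 0.5 (support 2), class 1 -> 1.0 (support 1),
# class 2 -> 0.0 (support 1); weighted mean = (2*0.5 + 1*1.0 + 1*0.0) / 4
print(f1_score(y_true, y_pred, average='weighted'))  # 0.5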
# Evaluate model performance
from sklearn.metrics import f1_score, precision_score, recall_score

score = []
precision = []
recall = []
point = []

for i in range(len(y_test)):
    # sklearn expects (y_true, y_pred); average='weighted' weights classes by support
    if f1_score(y_test[i], labels[i], average='weighted') > 0.6:
        point.append(i)
    score.append(f1_score(y_test[i], labels[i], average='weighted'))
    precision.append(precision_score(y_test[i], labels[i], average='weighted'))
    recall.append(recall_score(y_test[i], labels[i], average='weighted'))

print(len(point) / len(labels) * 100)  # % of test sentences with weighted F1 > 0.6
print(np.mean(score))                  # mean per-sentence F1
print(np.mean(precision))              # mean per-sentence precision
print(np.mean(recall))                 # mean per-sentence recall
Output: (results screenshot omitted)
Done!