The idea behind RNNs is to make use of sequential information. In a traditional neural network we assume that all inputs (and outputs) are independent of each other. But for many tasks that is a very bad idea. If you want to predict the next word in a sentence, you had better know which words came before it. RNNs are called recurrent because they perform the same task for every element of a sequence, with the output depending on the previous computations. Another way to think about RNNs is that they have a "memory" which captures information about what has been computed so far. In theory RNNs can make use of information in arbitrarily long sequences, but in practice they are limited to looking back only a few steps (more on this later). Here is what a typical RNN looks like:
The diagram above shows an RNN being unrolled (or unfolded) into a full network. By unrolling we simply mean that we write out the network for the complete sequence. For example, if the sequence we care about is a sentence of 5 words, the network would be unrolled into a 5-layer neural network, one layer for each word. The formulas that govern the computation happening in an RNN are as follows:
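In the standard (vanilla) formulation, with x_t the input at time step t, s_t the hidden state, o_t the output, and U, W, V weight matrices shared across all time steps:

s_t = f(U x_t + W s_{t-1})    (f is usually a nonlinearity such as tanh or ReLU)
o_t = softmax(V s_t)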
RNNs have shown great success in many NLP tasks. At this point I should mention that the most commonly used type of RNN is the LSTM, which is much better at capturing long-term dependencies than a vanilla RNN. But don't worry, LSTMs are essentially the same as the RNN we will develop in this tutorial, they just use a different way of computing the hidden state. We will cover LSTMs in more detail in a later post.
Training an RNN is similar to training a traditional neural network. We also use the backpropagation algorithm, but with a little twist. Because the parameters are shared by all time steps in the network, the gradient at each output depends not only on the calculations of the current time step, but also on the previous time steps. For example, to calculate the gradient at t=4 we would need to backpropagate 3 steps and sum up the gradients. This is called Backpropagation Through Time (BPTT). For now, just be aware of the fact that vanilla RNNs trained with BPTT have difficulty learning long-term dependencies (e.g. dependencies between steps that are far apart) due to the so-called vanishing/exploding gradient problem. There are mechanisms to deal with these problems, and certain types of RNNs (like LSTMs) were specifically designed to get around them.
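To make the gradient summation concrete, here is a minimal NumPy sketch of BPTT for the vanilla RNN formulas above (illustrative only, not this tutorial's code; the names s, U, W, V follow the earlier formulas, and the loss is assumed to be cross-entropy over the softmax outputs):

import numpy as np

def bptt(x, y, U, W, V):
    # x: list of input vectors, y: list of target indices, one per time step
    T = len(x)
    s = [np.zeros(W.shape[0])]                 # hidden states; s[0] is the initial state
    o = []
    for t in range(T):                         # forward pass
        s.append(np.tanh(U @ x[t] + W @ s[-1]))
        z = V @ s[-1]
        e = np.exp(z - z.max())
        o.append(e / e.sum())                  # softmax output at step t
    dU, dW, dV = np.zeros_like(U), np.zeros_like(W), np.zeros_like(V)
    for t in reversed(range(T)):               # backward pass
        dz = o[t].copy()
        dz[y[t]] -= 1                          # gradient of cross-entropy + softmax
        dV += np.outer(dz, s[t + 1])
        ds = V.T @ dz
        for k in range(t, -1, -1):             # backpropagate through time: gradients from
            dtanh = (1 - s[k + 1] ** 2) * ds   # every earlier step are summed into dU, dW
            dU += np.outer(dtanh, x[k])
            dW += np.outer(dtanh, s[k])
            ds = W.T @ dtanh
    return dU, dW, dV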
Over the years, researchers have developed more sophisticated types of RNNs to deal with some of the shortcomings of the vanilla RNN model.
Bidirectional RNNs
are based on the idea that the output at time t may depend not only on the previous elements in the sequence, but also on future elements. For example, to predict a missing word in a sequence you want to look at both the left and the right context. Bidirectional RNNs are quite simple: they are just two RNNs stacked on top of each other, one reading the sequence forwards and one reading it backwards. The output is then computed based on the hidden states of both RNNs.
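As a small illustration (not part of the tutorial's code; the layer size is an assumption), Keras's Bidirectional wrapper builds exactly this structure, running one copy of the wrapped RNN forwards and another backwards and combining their outputs:

from keras.layers import LSTM, Bidirectional

# One copy of the LSTM reads the sequence forwards, another reads it backwards;
# with merge_mode='concat' the output has shape [batch, timesteps, 2 * 64].
bi_layer = Bidirectional(LSTM(64, return_sequences=True), merge_mode='concat')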
Deep (Bidirectional) RNNs
are similar to bidirectional RNNs, only that we now have multiple layers per time step. In practice this gives us a higher learning capacity (but we also need a lot of training data).
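A minimal Keras sketch of such a stack (the vocabulary size, layer sizes, and class count are illustrative assumptions):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

# Two bidirectional LSTM layers stacked on top of each other, so every time step
# passes through multiple recurrent layers; return_sequences=True keeps the full
# sequence so the next recurrent layer can consume it.
model = Sequential([
    Embedding(10000, 100),
    Bidirectional(LSTM(64, return_sequences=True)),
    Bidirectional(LSTM(64)),
    Dense(10, activation='softmax'),
])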
Long Short-Term Memory networks, usually just called "LSTMs", are a special kind of RNN capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997) and were refined and popularized by many people in subsequent work. They work tremendously well on a large variety of problems and are now widely used.
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!
All recurrent neural networks have the form of a chain of repeating modules of neural network. In standard RNNs, this repeating module has a very simple structure, such as a single tanh layer.
LSTMs also have this chain-like structure, but the repeating module has a different structure. Instead of a single neural network layer, there are four, interacting in a very special way.
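The four layers correspond to the forget gate, the input gate, the candidate cell state, and the output gate. In the standard formulation, with x_t the input, h_t the hidden state, C_t the cell state, σ the sigmoid function, and * element-wise multiplication:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)       (forget gate)
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)       (input gate)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (candidate cell state)
C_t = f_t * C_{t-1} + i_t * C̃_t           (new cell state)
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)       (output gate)
h_t = o_t * tanh(C_t)                      (new hidden state)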
A slightly more dramatic variation on the LSTM is the Gated Recurrent Unit (GRU), introduced by Cho et al. (2014). It combines the forget and input gates into a single "update gate." It also merges the cell state and the hidden state, and makes some other changes. The resulting model is simpler than standard LSTM models and has been growing increasingly popular.
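For comparison, the standard GRU update, with z_t the update gate and r_t the reset gate:

z_t = σ(W_z · [h_{t-1}, x_t])
r_t = σ(W_r · [h_{t-1}, x_t])
h̃_t = tanh(W · [r_t * h_{t-1}, x_t])
h_t = (1 - z_t) * h_{t-1} + z_t * h̃_t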
# -*- coding: utf-8 -*-
#TextRNN: 1. embedding layer, 2. Bi-LSTM layer, 3. concat output, 4. FC layer, 5. softmax
import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np
class TextRNN:
def __init__(self,num_classes, learning_rate, batch_size, decay_steps, decay_rate,sequence_length,
vocab_size,embed_size,is_training,initializer=tf.random_normal_initializer(stddev=0.1)):
"""init all hyperparameter here"""
# set hyperparamter
self.num_classes = num_classes
self.batch_size = batch_size
self.sequence_length=sequence_length
self.vocab_size=vocab_size
self.embed_size=embed_size
self.hidden_size=embed_size
self.is_training=is_training
self.learning_rate=learning_rate
self.initializer=initializer
self.num_sampled=20
# add placeholder (X,label)
self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name="input_x") # X
        self.input_y = tf.placeholder(tf.int32, [None], name="input_y")  # y: [None], one class index per example (not one-hot)
self.dropout_keep_prob=tf.placeholder(tf.float32,name="dropout_keep_prob")
self.global_step = tf.Variable(0, trainable=False, name="Global_Step")
self.epoch_step=tf.Variable(0,trainable=False,name="Epoch_Step")
self.epoch_increment=tf.assign(self.epoch_step,tf.add(self.epoch_step,tf.constant(1)))
self.decay_steps, self.decay_rate = decay_steps, decay_rate
self.instantiate_weights()
self.logits = self.inference() #[None, self.label_size]. main computation graph is here.
if not is_training:
return
self.loss_val = self.loss() #-->self.loss_nce()
self.train_op = self.train()
self.predictions = tf.argmax(self.logits, axis=1, name="predictions") # shape:[None,]
correct_prediction = tf.equal(tf.cast(self.predictions,tf.int32), self.input_y) #tf.argmax(self.logits, 1)-->[batch_size]
self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="Accuracy") # shape=()
def instantiate_weights(self):
"""define all weights here"""
with tf.name_scope("embedding"): # embedding matrix
self.Embedding = tf.get_variable("Embedding",shape=[self.vocab_size, self.embed_size],initializer=self.initializer) #[vocab_size,embed_size] tf.random_uniform([self.vocab_size, self.embed_size],-1.0,1.0)
self.W_projection = tf.get_variable("W_projection",shape=[self.hidden_size*2, self.num_classes],initializer=self.initializer) #[embed_size,label_size]
self.b_projection = tf.get_variable("b_projection",shape=[self.num_classes]) #[label_size]
def inference(self):
"""main computation graph here: 1. embeddding layer, 2.Bi-LSTM layer, 3.concat, 4.FC layer 5.softmax """
#1.get emebedding of words in the sentence
self.embedded_words = tf.nn.embedding_lookup(self.Embedding,self.input_x) #shape:[None,sentence_length,embed_size]
#2. Bi-lstm layer
        # define lstm cells: get lstm cell output
lstm_fw_cell=rnn.BasicLSTMCell(self.hidden_size) #forward direction cell
lstm_bw_cell=rnn.BasicLSTMCell(self.hidden_size) #backward direction cell
if self.dropout_keep_prob is not None:
lstm_fw_cell=rnn.DropoutWrapper(lstm_fw_cell,output_keep_prob=self.dropout_keep_prob)
lstm_bw_cell=rnn.DropoutWrapper(lstm_bw_cell,output_keep_prob=self.dropout_keep_prob)
# bidirectional_dynamic_rnn: input: [batch_size, max_time, input_size]
# output: A tuple (outputs, output_states)
# where:outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output `Tensor`.
outputs,_=tf.nn.bidirectional_dynamic_rnn(lstm_fw_cell,lstm_bw_cell,self.embedded_words,dtype=tf.float32) #[batch_size,sequence_length,hidden_size] #creates a dynamic bidirectional recurrent neural network
print("outputs:===>",outputs) #outputs:(, ))
#3. concat output
output_rnn=tf.concat(outputs,axis=2) #[batch_size,sequence_length,hidden_size*2]
#self.output_rnn_last=tf.reduce_mean(output_rnn,axis=1) #[batch_size,hidden_size*2]
        self.output_rnn_last = output_rnn[:, -1, :]  # [batch_size, hidden_size*2]: use the output at the last time step (the mean over time, commented out above, is an alternative)
print("output_rnn_last:", self.output_rnn_last) #
#4. logits(use linear layer)
with tf.name_scope("output"): #inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
logits = tf.matmul(self.output_rnn_last, self.W_projection) + self.b_projection # [batch_size,num_classes]
return logits
def loss(self,l2_lambda=0.0001):
with tf.name_scope("loss"):
#input: `logits` and `labels` must have the same shape `[batch_size, num_classes]`
#output: A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the softmax cross entropy loss.
            losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)  # labels: [batch_size] int class indices; losses: [batch_size]
            # (tf.nn.softmax_cross_entropy_with_logits could be used instead if labels were one-hot of shape [batch_size, num_classes])
            loss = tf.reduce_mean(losses)  # scalar
l2_losses = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables() if 'bias' not in v.name]) * l2_lambda
loss=loss+l2_losses
return loss
def loss_nce(self,l2_lambda=0.0001): #0.0001-->0.001
"""calculate loss using (NCE)cross entropy here"""
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
if self.is_training: #training
#labels=tf.reshape(self.input_y,[-1]) #[batch_size,1]------>[batch_size,]
labels=tf.expand_dims(self.input_y,1) #[batch_size,]----->[batch_size,1]
loss = tf.reduce_mean( #inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
tf.nn.nce_loss(weights=tf.transpose(self.W_projection),#[hidden_size*2, num_classes]--->[num_classes,hidden_size*2]. nce_weights:A `Tensor` of shape `[num_classes, dim].O.K.
biases=self.b_projection, #[label_size]. nce_biases:A `Tensor` of shape `[num_classes]`.
labels=labels, #[batch_size,1]. train_labels, # A `Tensor` of type `int64` and shape `[batch_size,num_true]`. The target classes.
inputs=self.output_rnn_last,# [batch_size,hidden_size*2] #A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network.
num_sampled=self.num_sampled, #scalar. 100
num_classes=self.num_classes,partition_strategy="div")) #scalar. 1999
l2_losses = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables() if 'bias' not in v.name]) * l2_lambda
loss = loss + l2_losses
return loss
def train(self):
"""based on the loss, use SGD to update parameter"""
learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step, self.decay_steps,self.decay_rate, staircase=True)
train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,learning_rate=learning_rate, optimizer="Adam")
return train_op
#test started
def test():
    # below is a functional test; to use this for text classification, first transform sentences into vocabulary indices, then feed the data to the graph.
num_classes=10
learning_rate=0.01
batch_size=8
decay_steps=1000
decay_rate=0.9
sequence_length=5
vocab_size=10000
embed_size=100
is_training=True
    dropout_keep_prob = 1.0  # e.g. 0.5 during real training
textRNN=TextRNN(num_classes, learning_rate, batch_size, decay_steps, decay_rate,sequence_length,vocab_size,embed_size,is_training)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(100):
input_x=np.zeros((batch_size,sequence_length)) #[None, self.sequence_length]
            input_y = np.array([1, 0, 1, 1, 1, 2, 1, 1])  # dummy labels; shape [batch_size]
loss,acc,predict,_=sess.run([textRNN.loss_val,textRNN.accuracy,textRNN.predictions,textRNN.train_op],feed_dict={textRNN.input_x:input_x,textRNN.input_y:input_y,textRNN.dropout_keep_prob:dropout_keep_prob})
print("loss:",loss,"acc:",acc,"label:",input_y,"prediction:",predict)
test()
# coding=utf-8
from keras import Input, Model
from keras.layers import Embedding, Dense, Concatenate, Conv1D, Bidirectional, CuDNNLSTM, GlobalAveragePooling1D, GlobalMaxPooling1D
class RCNNVariant(object):
"""Variant of RCNN.
    Based on the structure of RCNN, we make some improvements:
    1. Ignore the shift for the left/right context.
    2. Use a bidirectional LSTM/GRU to encode the context.
    3. Use multiple CNNs (with different kernel sizes) to represent the semantic vectors.
    4. Use ReLU instead of Tanh.
    5. Use both average pooling and max pooling.
"""
def __init__(self, maxlen, max_features, embedding_dims,
class_num=1,
last_activation='sigmoid'):
self.maxlen = maxlen
self.max_features = max_features
self.embedding_dims = embedding_dims
self.class_num = class_num
self.last_activation = last_activation
def get_model(self):
input = Input((self.maxlen,))
embedding = Embedding(self.max_features, self.embedding_dims, input_length=self.maxlen)(input)
x_context = Bidirectional(CuDNNLSTM(128, return_sequences=True))(embedding)
x = Concatenate()([embedding, x_context])
convs = []
for kernel_size in range(1, 5):
conv = Conv1D(128, kernel_size, activation='relu')(x)
convs.append(conv)
poolings = [GlobalAveragePooling1D()(conv) for conv in convs] + [GlobalMaxPooling1D()(conv) for conv in convs]
x = Concatenate()(poolings)
output = Dense(self.class_num, activation=self.last_activation)(x)
model = Model(inputs=input, outputs=output)
return model
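The snippet above only defines the model. A minimal usage sketch might look like the following (the data shapes and hyperparameters are illustrative assumptions, and CuDNNLSTM requires a CUDA-enabled GPU):

import numpy as np

# Build and compile the model; class_num=1 with a sigmoid output implies binary classification.
model = RCNNVariant(maxlen=100, max_features=5000, embedding_dims=100).get_model()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Dummy integer-encoded, padded sequences and binary labels, just to exercise the graph.
x_train = np.random.randint(0, 5000, size=(32, 100))
y_train = np.random.randint(0, 2, size=(32, 1))
model.fit(x_train, y_train, batch_size=8, epochs=1)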