Reading notes: 《Neural Paraphrase Generation with Stacked Residual LSTM Networks》

Paper link: http://www.sadidhasan.com/sadid-NPG.pdf

Earlier paraphrase generation methods were based on hand-written rules, dictionaries/thesauri, or statistical machine learning. The main contribution of this paper is a stacked residual LSTM network, which adds residual connections between LSTM layers so that deep LSTMs can be trained more efficiently. The model is evaluated on three datasets: PPDB, WikiAnswers, and MSCOCO. The results show that it outperforms sequence-to-sequence, attention-based, and bidirectional LSTM baselines on BLEU, METEOR, TER, and an embedding-based similarity metric.


Introduction:

Research on paraphrasing mainly targets three problems: 1) recognition (deciding whether two texts have the same meaning); 2) extraction (given an input text, extracting its paraphrase instances); 3) generation (given an input, generating a reference paraphrase).

(Huh... I picked up something new from this paper. Quoting the relevant sentence directly:)

While producing the target sequence, the generation of each new word depends on the model and the preceding generated word. Generation of the first word in the target sequence depends on the special "EOS" (end of sentence) token appended to the source sequence.

The best decoded target is the highest-scoring hypothesis found with beam search, where the beam size controls how many partial hypotheses are kept at each decoding step (I don't fully understand beam size yet; I plan to go read the cited papers right after finishing this one).
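To make the decoding procedure concrete, here is a minimal beam-search sketch in Python (my own illustration, not the paper's code). `score_next`, `eos_id`, `beam_size`, and `max_len` are hypothetical names; `score_next` stands in for the trained decoder and returns a log-probability for every vocabulary word given the tokens generated so far. Note that decoding starts from the EOS token, matching the quoted sentence above.

```python
# Minimal beam-search sketch; `score_next` is a hypothetical decoder stub.
from typing import Callable, List, Tuple

def beam_search(score_next: Callable[[List[int]], List[float]],
                eos_id: int,
                beam_size: int = 5,
                max_len: int = 20) -> List[int]:
    # Each hypothesis is (token_ids, cumulative log-probability).
    # Generation of the first word is conditioned on the EOS token.
    beams: List[Tuple[List[int], float]] = [([eos_id], 0.0)]
    finished: List[Tuple[List[int], float]] = []
    for _ in range(max_len):
        candidates = []
        for tokens, logp in beams:
            for word_id, word_logp in enumerate(score_next(tokens)):
                candidates.append((tokens + [word_id], logp + word_logp))
        # Keep only the `beam_size` highest-scoring partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, logp in candidates[:beam_size]:
            if tokens[-1] == eos_id and len(tokens) > 1:
                finished.append((tokens, logp))   # hypothesis ended with EOS
            else:
                beams.append((tokens, logp))
        if not beams:
            break
    # The decoded target is the finished hypothesis with the highest score.
    best = max(finished or beams, key=lambda c: c[1])
    return best[0]
```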

Multi-layer (stacked) LSTM:

(The stacked-LSTM update equations from the paper appeared here as an image; l is the layer index.)

In the stacking technique proposed by Sutskever et al., the hidden states of all LSTM layers are fully connected. In this paper, the input to every layer except the first at time step t is passed from the hidden state of the previous layer, h_t^l, where l denotes the layer.
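As a concrete illustration of plain stacking, here is a minimal sketch, assuming PyTorch (the class `StackedLSTM` and its forward loop are my own, not the authors' code): at each time step, every layer except the first takes the hidden state of the layer below as its input.

```python
# Sketch of a plain stacked LSTM: layer l's input at step t is the
# hidden state produced by layer l-1 at the same step.
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_layers: int):
        super().__init__()
        self.cells = nn.ModuleList(
            [nn.LSTMCell(input_size if l == 0 else hidden_size, hidden_size)
             for l in range(num_layers)]
        )
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, input_size); returns top-layer hidden states.
        seq_len, batch, _ = x.shape
        states = [(x.new_zeros(batch, self.hidden_size),
                   x.new_zeros(batch, self.hidden_size)) for _ in self.cells]
        outputs = []
        for t in range(seq_len):
            inp = x[t]
            for l, cell in enumerate(self.cells):
                h, c = cell(inp, states[l])
                states[l] = (h, c)
                inp = h          # hidden state of layer l feeds layer l+1
            outputs.append(inp)
        return torch.stack(outputs)  # (seq_len, batch, hidden_size)
```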



Stacked Residual LSTM:

When stacking multiple layers of neurons, the network often suffers through a degradation problem. The degradation problem arises due to the low convergence rate of training error and is different from the vanishing gradient problem.

In other words, when many layers of neural units are stacked, a degradation problem often appears: the training error converges slowly, and this is distinct from the vanishing gradient problem. It can be addressed by adding residual connections.

(The residual formulation from the paper appeared here as an image.) x_i denotes the input to layer i+1, i.e., the quantity that the residual connection adds to that layer's output.


To summarize: when using a seq2seq approach for paraphrase generation, stacked multi-layer LSTMs are hard to train to convergence, so residual connections are used to fix this: the input of each layer is added to that layer's hidden-state output before being passed upward (see the sketch below).
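As an illustration of that summary, here is a minimal PyTorch sketch (my own, not the authors' implementation) where the input to each stacked layer is added to that layer's hidden output, in the spirit of y = F(x) + x. The exact placement of the residual additions in the paper may differ; applying one at every layer here is an assumption made for brevity.

```python
# Sketch of a stacked residual LSTM: each layer's input is added to its
# hidden-state output before being passed to the next layer.
import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    def __init__(self, hidden_size: int, num_layers: int):
        super().__init__()
        # Keep all layer widths equal so the input and hidden state can be summed.
        self.cells = nn.ModuleList(
            [nn.LSTMCell(hidden_size, hidden_size) for _ in range(num_layers)]
        )
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, hidden_size); returns top-layer outputs.
        seq_len, batch, _ = x.shape
        states = [(x.new_zeros(batch, self.hidden_size),
                   x.new_zeros(batch, self.hidden_size)) for _ in self.cells]
        outputs = []
        for t in range(seq_len):
            inp = x[t]
            for l, cell in enumerate(self.cells):
                h, c = cell(inp, states[l])
                states[l] = (h, c)
                inp = h + inp    # residual: add the layer's input to its output
            outputs.append(inp)
        return torch.stack(outputs)
```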
