The Magic of LSTM Neural Networks

Introduction

LSTM stands for Long Short-Term Memory. It is a special type of recurrent neural network that has recently attracted a lot of attention in the machine learning community.

In short, LSTM networks contain internal contextual state cells that act as long-term or short-term memory cells.

The output of an LSTM network is modulated by the state of these cells. This is a very important property when we need the network's predictions to depend on the historical context of the inputs, rather than only on the very last input.

As a simple example, imagine we want to predict the next number in the sequence 6 -> 7 -> 8 -> ?. We would expect the next output to be 9 (x+1). But if we provide the sequence 2 -> 4 -> 8 -> ?, we would expect 16 (2x).

Although the last number is 8 in both examples, the prediction differs once we take into account the contextual information of the previous values rather than just the last one.

How LSTMs Work

LSTM networks manage to keep contextual information about the inputs by integrating loops that let information flow from one step to the next. These loops are what make recurrent neural networks seem magical. But consider what happens as you read this article: you understand each word based on your understanding of the words that came before it. You don't throw away everything you know and start understanding each new word from scratch. In the same way, an LSTM takes previous inputs into account when making a prediction.

On the other hand, the more time passes, the less likely it is that the next output depends on a very old input. The time-dependency distance is itself contextual information to be learned, and LSTM networks handle it with forget gates, which learn when to remember and when to forget. Simply put, if a forget gate's multiplicative factor is 0.9, then after 10 time steps the factor becomes 0.9^10 ≈ 0.35 (that is, about 65% of the information is forgotten), and after 30 time steps it becomes about 0.04 (96% of the information is forgotten).
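
To make the decay concrete, here is a tiny sketch that reproduces the numbers above. It assumes a constant forget-gate factor of 0.9, which is purely illustrative; a trained LSTM computes a different factor at every time step:

```python
# Decay of old information under a constant forget-gate factor.
# The constant 0.9 is the illustrative value from the text above;
# a real LSTM learns this factor per time step from the data.
factor = 0.9
for steps in (10, 30):
    retained = factor ** steps
    print(f"after {steps} steps: {retained:.2f} retained "
          f"({1 - retained:.0%} forgotten)")
```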

In this article I won't dwell on the technical details of LSTMs; if you want them, see here. Instead, I'll walk through programming ideas for different LSTM applications.

Examples of LSTM Applications

Text Generation

Generating text can be framed as an LSTM task in which each letter is generated while taking all previously generated letters into account. In fact, learning the alphabet makes a good first programming exercise for getting an LSTM to learn something. Andrej Karpathy showed several examples of LSTM-generated text, demonstrating that what gets generated depends on the training set provided (a minimal character-level sketch follows the samples below). Here are some of the automatically generated texts:

  • Shakespeare:
PANDARUS:
Alas, I think he shall be come approached and the day
When little srain would be attain’d into being never fed,
And who is but a chain and subjects of his death,
I should not sleep.
Second Senator:
They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish
The earth and thoughts of many states.
  • Wikipedia:
Naturalism and decision for the majority of Arab countries’ capitalide was grounded
by the Irish language by [[John Clair]], [[An Imperial Japanese Revolt]], associated
with Guangzham’s sovereignty. His generals were the powerful ruler of the Portugal
in the [[Protestant Immineners]], which could be said to be directly in Cantonese
Communication, which followed a ceremony and set inspired prison, training. The
emperor travelled back to [[Antioch, Perth, October 25|21]] to note, the Kingdom
of Costa Rica, unsuccessful fashioned the [[Thrales]], [[Cynth’s Dajoard]], known
in western [[Scotland]], near Italy to the conquest of India with the conflict.
  • XML:
 
 <revision>
  <id>15900676</id>
  <timestamp>2002-08-03T18:14:12Z</timestamp>
  <contributor>
   <username>Paris</username>
   <id>23</id>
  </contributor>
  <minor />
  <comment>Automated conversion</comment>
 </revision>
  • Obama:
SEED: War on terror
Good everybody. Thank you very much. God bless the United States of America, and has already began with the world’s gathering their health insurance.
It’s about hard-earned for our efforts that are not continued.
We are all the assumptionion to the streets of the Americas that we are still for everybody and destruction.
We are doing a lot of this.
I know that someone would be prefered to their children to take a million insurance company. We’re watching their people and continued to find ourselves with Republicans — to give up on these challenges and despite the challenges of our country. In the last two years, we must recognise that our borders have access from the world. We’re continuing that this day of things that the United States will clean up it´s allies and prosperity to stand up enough to be a sanctions that we made their faith, and for the country of the Internet to Osama bin Laden.
Thank you. God bless you. Good morning, everybody. And May God loss man. Thank you very much. Thank you very much, everybody.
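
As referenced above, here is a minimal character-level sketch of the technique in Keras. The toy corpus, window length, and greedy sampling loop are assumptions for illustration only; this is not Karpathy's char-rnn setup:

```python
# Character-level text generation with an LSTM: predict the next character
# from a fixed-length window of one-hot encoded previous characters.
import numpy as np
from tensorflow import keras

text = "hello world, hello lstm. " * 20   # toy corpus; use a real one
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
seq_len = 10

# Build (window, next character) training pairs.
X = np.zeros((len(text) - seq_len, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - seq_len, len(chars)), dtype=np.float32)
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, char_to_idx[c]] = 1.0
    y[i, char_to_idx[text[i + seq_len]]] = 1.0

model = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(seq_len, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=20, verbose=0)

# Generate: seed with a window, then repeatedly append the predicted character.
generated = text[:seq_len]
for _ in range(50):
    x = np.zeros((1, seq_len, len(chars)), dtype=np.float32)
    for t, c in enumerate(generated[-seq_len:]):
        x[0, t, char_to_idx[c]] = 1.0
    generated += chars[int(model.predict(x, verbose=0).argmax())]
print(generated)
```

Greedy argmax sampling is used here for brevity; sampling from the softmax distribution with a temperature usually produces more varied text.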

Handwriting Generation

The image below shows some handwritten text generated automatically with an LSTM.

For a live demo that generates LSTM handwriting on the fly, visit this page.

Music Generation

Since music, just like text, is a sequence (of notes rather than characters), an LSTM can also generate music by taking the previously played notes (or combinations of notes) into account. Here you can find an interesting explanation of how to train an LSTM on MIDI files (a small next-note sketch follows the samples below). You can also enjoy the following pieces, generated from a classical-music training set:

Music 1

Music 2

Music 3
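
As mentioned above, here is a small sketch of the usual framing for LSTM music generation: pitches extracted from MIDI files become an integer sequence, and the network learns to predict the next pitch. The motif, window size, and layer sizes below are illustrative assumptions, not the linked tutorial's exact setup:

```python
# Next-note prediction on a MIDI-derived pitch sequence.
import numpy as np
from tensorflow import keras

# Stand-in for pitches parsed from real MIDI files (MIDI range is 0-127).
pitches = [60, 62, 64, 65, 67, 65, 64, 62] * 8
seq_len = 8
X = np.array([pitches[i:i + seq_len] for i in range(len(pitches) - seq_len)])
y = np.array(pitches[seq_len:])

model = keras.Sequential([
    keras.layers.Embedding(input_dim=128, output_dim=32),  # 128 MIDI pitches
    keras.layers.LSTM(64),
    keras.layers.Dense(128, activation="softmax"),         # next-pitch distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=30, verbose=0)

# Generate: seed with a motif and append the most likely next pitch each step.
melody = pitches[:seq_len]
for _ in range(16):
    probs = model.predict(np.array([melody[-seq_len:]]), verbose=0)[0]
    melody.append(int(probs.argmax()))
print(melody)  # write back to MIDI with a library such as mido or music21
```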

Language Translation

Translating text from one language to another can be seen as a sequence-to-sequence mapping. A group of researchers, together with Nvidia, published detailed information on how to train an LSTM for this kind of task (Part 1, Part 2, Part 3).

In short, they created a neural network with an encoder that compresses the text into a higher, more abstract vector representation, and a decoder that decodes it back into the target language.
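
To make the encoder-decoder idea concrete, here is a minimal Keras sketch of the pattern. This is not the researchers' actual model; the vocabulary sizes and dimensions are assumptions:

```python
# A minimal LSTM encoder-decoder (seq2seq) for translation-style tasks.
from tensorflow import keras

src_vocab, tgt_vocab, latent = 5000, 6000, 256  # illustrative sizes

# Encoder: compress the source sentence into the LSTM's final states.
enc_in = keras.Input(shape=(None,))
enc_emb = keras.layers.Embedding(src_vocab, latent)(enc_in)
_, state_h, state_c = keras.layers.LSTM(latent, return_state=True)(enc_emb)

# Decoder: generate the target sentence, initialized from the encoder states.
dec_in = keras.Input(shape=(None,))
dec_emb = keras.layers.Embedding(tgt_vocab, latent)(dec_in)
dec_out, _, _ = keras.layers.LSTM(
    latent, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
probs = keras.layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = keras.Model([enc_in, dec_in], probs)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()  # trained with teacher forcing on (source, shifted-target) pairs
```

At inference time the decoder is run one token at a time, starting from the encoder's final states and feeding each predicted word back in as the next input.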

Generating Image Captions

Finally, the most impressive use of LSTM networks is generating text captions that describe the content of an input image. Microsoft's research has made great progress on this task. Here are some sample demos of their results:

You can try their online demo here: https://www.captionbot.ai/.
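
The pattern commonly used for such systems is a CNN encoder feeding an LSTM decoder. Below is a sketch of that pattern, not Microsoft's actual CaptionBot architecture; the feature size, vocabulary size, and the choice to condition via the initial state are all assumptions:

```python
# CNN-encoder + LSTM-decoder captioning pattern: the image feature vector
# conditions the LSTM, which predicts the next caption word.
from tensorflow import keras

vocab, latent = 8000, 256  # illustrative sizes

# Image branch: a pretrained CNN feature vector projected into the LSTM space.
img_feat = keras.Input(shape=(2048,))  # e.g. pooled CNN features
img_emb = keras.layers.Dense(latent, activation="relu")(img_feat)

# Text branch: the partial caption generated so far.
words = keras.Input(shape=(None,))
word_emb = keras.layers.Embedding(vocab, latent)(words)

# Condition the LSTM on the image by using it as the initial hidden state.
hidden = keras.layers.LSTM(latent)(word_emb, initial_state=[img_emb, img_emb])
next_word = keras.layers.Dense(vocab, activation="softmax")(hidden)

model = keras.Model([img_feat, words], next_word)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

Training pairs each image's features with a partial caption and the next ground-truth word (teacher forcing); at inference the caption is rolled out word by word.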

Conclusion

To sum up, the real magic behind LSTM networks is that they achieve almost human-level quality in sequence generation (you can even check out their source code here!).

References

  • Understanding LSTM
  • Combination of research papers
  • RNN effectiveness
  • Deeplearning4j
  • Handwriting recognition
  • Text generation
  • Music generation
