[Repost] Predicting Closing Prices with a CNN+LSTM+Attention Model

Repost: Predicting closing prices with CNN+LSTM+Attention
If you would like to collaborate on a publication, please contact the original author at:
[email protected]
Leave a comment or contact me at: [email protected]
Data provided by JQData (local quantitative financial data)
Experiment 2:
Use the open, close, high, low, volume, and money values of the previous 5 time steps
to predict the closing price at the current time step,
i.e. [None, 5, 6] => [None, 1]  # None is the batch_size
This post continues to extend the model from Experiment 2 by adding an attention mechanism.
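Before adding attention, here is a quick sketch of how the [None, 5, 6] => [None, 1] shapes can be produced. The original post does not show its preprocessing code at this point, so the `make_windows` helper and the assumed column order below are purely illustrative:

```python
import numpy as np

def make_windows(data, window=5):
    """Slide a window of `window` time steps over the feature matrix.

    data: array of shape [T, 6] with columns open, close, high, low, volume, money
          (column order assumed for this example).
    Returns X with shape [T - window, window, 6] and y with shape [T - window, 1],
    where y is the close price of the step right after each window.
    """
    X, y = [], []
    for t in range(len(data) - window):
        X.append(data[t:t + window])     # previous 5 time steps, 6 features each
        y.append(data[t + window, 1])    # column 1 = close at the current step
    return np.array(X), np.array(y).reshape(-1, 1)

# 100 fake time steps with 6 features, just to check the shapes
data = np.random.rand(100, 6).astype("float32")
X, y = make_windows(data, window=5)
print(X.shape, y.shape)   # (95, 5, 6) (95, 1)  ->  [None, 5, 6] => [None, 1]
```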
First, a short introduction to attention.
The essence of attention:
it is really just a weighted sum.
The problem attention deals with usually looks like this:
you have k d-dimensional feature vectors h_i (i = 1, 2, ..., k), and you want to combine the information in these k vectors into a single vector h* (usually also d-dimensional).
Solutions:
1. The simplest, crudest approach is to take the element-wise average of the k vectors and use that as h*. This is clearly not reasonable enough.
2. A more reasonable approach is a weighted average, with weights α_i:
h* = Σ_{i=1}^{k} α_i h_i, where the α_i sum to 1.
What attention does is work out how to compute these weights α_i in a sensible way (see the sketch after this list).
For details, see: https://blog.csdn.net/BVL10101111/article/details/78470716
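Here is a minimal numerical sketch of that weighted sum, assuming a plain dot-product score against a query vector q; this is just one of many ways to produce the α_i, and the `soft_attention` helper is made up for this illustration:

```python
import numpy as np

def soft_attention(h, q):
    """h: [k, d] feature vectors, q: [d] query/context vector.

    Scores each h_i against q, turns the scores into weights alpha_i with a
    softmax (positive, summing to 1), and returns the weighted sum
    h_star = sum_i alpha_i * h_i, which is again d-dimensional.
    """
    scores = h @ q                               # [k] raw scores, one per h_i
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                  # [k] attention weights
    h_star = (alpha[:, None] * h).sum(axis=0)    # [d] weighted sum of the h_i
    return h_star, alpha

k, d = 5, 8
h = np.random.randn(k, d)
q = np.random.randn(d)
h_star, alpha = soft_attention(h, q)
print(alpha.round(3), alpha.sum())   # the weights alpha_i, summing to 1
print(h_star.shape)                  # (8,)
```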
Attention mechanisms have been studied extensively as neural processes in neuroscience and computational neuroscience. Visual attention is a particularly well-studied direction: many animals focus on specific parts of their visual input in order to compute an appropriate response. This principle matters for neural computation because we need to select the most relevant information rather than use all available information, much of which is irrelevant to the neuron's response. Mechanisms analogous to this visual focus on specific parts of the input, i.e. attention mechanisms, have since been applied in deep learning to speech recognition, translation, reasoning, and visual recognition.
Model architecture
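The original post does not reproduce its model code in this section, so the sketch below is my own minimal reconstruction of a Conv1D + LSTM + attention network for the [None, 5, 6] => [None, 1] task, written with the Keras functional API; the layer sizes and the simple additive-style attention are assumptions, not necessarily the author's exact settings:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(window=5, n_features=6):
    inputs = layers.Input(shape=(window, n_features))             # [None, 5, 6]

    # 1D convolution over the time axis to extract local temporal patterns
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)

    # LSTM returns the output at every time step so attention can weight them
    h = layers.LSTM(64, return_sequences=True)(x)                 # [None, 5, 64]

    # Simple additive-style attention: one score per time step -> softmax weights
    scores = layers.Dense(1, activation="tanh")(h)                # [None, 5, 1]
    alpha = layers.Softmax(axis=1)(scores)                        # weights over the 5 steps
    context = layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, alpha]) # [None, 64] weighted sum

    outputs = layers.Dense(1)(context)                            # [None, 1] predicted close
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
# model.fit(X, y, epochs=50, batch_size=32)  # training call; hyperparameters are placeholders
```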

Experimental results, judged by the error: MSE test loss: 0.0005358342003804944
Here’s what you should take away from this section:

In the same way that 2D convnets perform well for processing visual patterns in 2D space, 1D convnets perform well for processing temporal patterns. They offer a faster alternative to RNNs on some problems, in particular NLP tasks.
Typically 1D convnets are structured much like their 2D equivalents from the world of computer vision: they consist of stacks of Conv1D layers and MaxPooling1D layers, eventually ending in a global pooling operation or flattening operation.
Because RNNs are extremely expensive for processing very long sequences, whereas 1D convnets are cheap, it can be a good idea to use a 1D convnet as a preprocessing step before an RNN, shortening the sequence and extracting useful representations for the RNN to process (see the sketch after this list).
One useful and important concept that we will not cover in these pages is that of 1D convolution with dilated kernels.
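As a hedged illustration of the second and third takeaways, here is a small sketch of the typical Conv1D + MaxPooling1D stack ending in global pooling, and of a Conv1D + pooling block used to shorten a sequence before an LSTM; the layer sizes and the 500-step input length are arbitrary choices for this example:

```python
from tensorflow.keras import layers, models

# Typical 1D convnet: stacked Conv1D + MaxPooling1D blocks ending in global pooling
cnn = models.Sequential([
    layers.Input(shape=(500, 6)),                 # a longer sequence, 6 features per step
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(3),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),                  # collapse the remaining time axis
    layers.Dense(1),
])

# Conv1D + pooling as a cheap preprocessing step that shortens the sequence
# before handing it to a (comparatively expensive) RNN
cnn_rnn = models.Sequential([
    layers.Input(shape=(500, 6)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(3),                       # the sequence is ~3x shorter after this
    layers.LSTM(32),
    layers.Dense(1),
])

cnn.summary()
cnn_rnn.summary()
```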
