These are, so far, only draft notes from my reading of the LSTM source code in Keras; they are incomplete and loosely organized, mostly a collection of personal observations.
References:
1. The relevant official Keras documentation
2. The original LSTM paper
3. The Keras RNN source code
Recurrent is the parent class of LSTM (the inheritance is direct, not via SimpleRNN) and defines the unified interface shared by all RNN layers.
implementation: one of {0, 1, or 2}.
If set to 0, the RNN will use an implementation that uses fewer, larger matrix products, thus running faster on CPU but consuming more memory. If set to 1, the RNN will use more matrix products, but smaller ones, thus running slower (may actually be faster on GPU) while consuming less memory. If set to 2 (LSTM/GRU only), the RNN will combine the input gate, the forget gate and the output gate into a single matrix, enabling more time-efficient parallelization on the GPU. Note: RNN dropout must be shared for all gates, resulting in a slightly reduced regularization.
Since I mostly run code on GPUs and am not concerned about memory, I usually set implementation=2, so my reading of the source also focuses on the implementation=2 branch.
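As a minimal usage sketch (assuming the Keras 2.x API quoted above, where implementation takes the values 0, 1, or 2; the layer size here is made up, not from the source):

from keras.layers import LSTM

# prefer GPU throughput over memory: fuse the four gate matrices into one big matmul
layer = LSTM(128, implementation=2)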
weights: list of Numpy arrays to set as initial weights.
The list should have 3 elements, of shapes: [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)].
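Note that these shapes are the generic ones from the Recurrent docstring; for LSTM specifically each weight matrix is four gates wide, as the kernel/recurrent_kernel slicing below shows. A small sketch to inspect this (assuming the Keras 2.x Sequential API; all sizes are made up):

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(32, input_shape=(10, 16)))      # units=32, timesteps=10, input_dim=16
kernel, recurrent_kernel, bias = model.layers[0].get_weights()
print(kernel.shape)             # (16, 128)  == (input_dim, units * 4)
print(recurrent_kernel.shape)   # (32, 128)  == (units, units * 4)
print(bias.shape)               # (128,)     == (units * 4,)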
Activation function to use for the recurrent step.
Note: the default is 'hard_sigmoid', whereas the original paper uses 'sigmoid' (for a comparison of hard_sigmoid and sigmoid, see "What is hard sigmoid in artificial neural networks? Why is it faster than standard sigmoid? Are there any disadvantages over the standard sigmoid?").
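For reference, my understanding of the backend's hard_sigmoid is a piecewise-linear approximation of the logistic sigmoid, roughly as in the following numpy sketch (a sketch of the idea, not the Keras source itself):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    # 0 below -2.5, 1 above 2.5, linear (slope 0.2) in between
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

x = np.linspace(-4.0, 4.0, 9)
print(np.round(sigmoid(x), 3))
print(hard_sigmoid(x))

The linear segment avoids the exponential, which is where the speed advantage comes from.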
1. Initialization
self.kernel = self.add_weight(
    shape=(self.input_dim, self.units * 4),
    name='kernel',
    initializer=self.kernel_initializer,
    regularizer=self.kernel_regularizer,
    constraint=self.kernel_constraint)
2. Meaning of each block
self.kernel_i = self.kernel[:, :self.units]
self.kernel_f = self.kernel[:, self.units: self.units * 2]
self.kernel_c = self.kernel[:, self.units * 2: self.units * 3]
self.kernel_o = self.kernel[:, self.units * 3:]
3. kernel is the matrix that multiplies the input x
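The reason the four gate kernels can live in one fused matrix is that a single matmul against the fused kernel is exactly the concatenation of the four per-gate matmuls. A quick numpy check (all names, sizes and values here are made up for illustration):

import numpy as np

input_dim, units, batch = 16, 32, 4
x = np.random.randn(batch, input_dim)
kernel = np.random.randn(input_dim, units * 4)

# one fused matmul (what implementation=2 relies on) ...
z = x.dot(kernel)

# ... equals the four per-gate matmuls concatenated along the last axis
kernel_i = kernel[:, :units]
kernel_f = kernel[:, units: units * 2]
kernel_c = kernel[:, units * 2: units * 3]
kernel_o = kernel[:, units * 3:]
z_split = np.hstack([x.dot(kernel_i), x.dot(kernel_f),
                     x.dot(kernel_c), x.dot(kernel_o)])

print(np.allclose(z, z_split))  # True

The same argument applies to recurrent_kernel below.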
1. Initialization:
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units * 4),
name='recurrent_kernel',
initializer=self.recurrent_initializer,
regularizer=self.recurrent_regularizer,
constraint=self.recurrent_constraint)
2. Meaning of each block
self.recurrent_kernel_i = self.recurrent_kernel[:, :self.units]
self.recurrent_kernel_f = self.recurrent_kernel[:, self.units: self.units * 2]
self.recurrent_kernel_c = self.recurrent_kernel[:, self.units * 2: self.units * 3]
self.recurrent_kernel_o = self.recurrent_kernel[:, self.units * 3:]
3. recurrent_kernel is the matrix that multiplies the previous hidden-state output h
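Before quoting the step code from the source, here is a plain-numpy sketch of what a single implementation=2 step computes (dropout masks omitted, plain sigmoid in place of hard_sigmoid; the function and argument names are mine, not Keras'):

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_tm1, c_tm1, kernel, recurrent_kernel, bias, units):
    # fused pre-activations for all four gates, as in the implementation=2 branch
    z = x_t.dot(kernel) + h_tm1.dot(recurrent_kernel) + bias
    z0, z1, z2, z3 = (z[:, :units], z[:, units: 2 * units],
                      z[:, 2 * units: 3 * units], z[:, 3 * units:])
    i = sigmoid(z0)                       # input gate
    f = sigmoid(z1)                       # forget gate
    c = f * c_tm1 + i * np.tanh(z2)       # new cell state
    o = sigmoid(z3)                       # output gate
    h = o * np.tanh(c)                    # new hidden state
    return h, c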
The per-step (iteration) code in the source is as follows:
if self.implementation == 2:
    # fused pre-activations for all four gates: one matmul with the input
    # and one with the previous hidden state
    z = K.dot(inputs * dp_mask[0], self.kernel)
    z += K.dot(h_tm1 * rec_dp_mask[0], self.recurrent_kernel)
    if self.use_bias:
        z = K.bias_add(z, self.bias)

    # slice the fused result back into the four per-gate pre-activations
    z0 = z[:, :self.units]
    z1 = z[:, self.units: 2 * self.units]
    z2 = z[:, 2 * self.units: 3 * self.units]
    z3 = z[:, 3 * self.units:]

    i = self.recurrent_activation(z0)        # input gate
    f = self.recurrent_activation(z1)        # forget gate
    c = f * c_tm1 + i * self.activation(z2)  # new cell state
    o = self.recurrent_activation(z3)        # output gate
    h = o * self.activation(c)               # new hidden state
As the code shows, recurrent_activation produces the three gates i, f and o, while activation is applied to the candidate value (the activation(z2) term, g in the usual notation) and again to squash c when computing the output h. To reproduce the original paper, set activation = 'tanh' and recurrent_activation = 'sigmoid'.
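Concretely, something like the following would do (a sketch against the Keras 2.x constructor arguments documented above; since 'tanh' is already the default activation, only recurrent_activation really needs changing):

from keras.layers import LSTM

# tanh for the candidate/cell output, logistic sigmoid for the three gates
layer = LSTM(64, activation='tanh', recurrent_activation='sigmoid')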
The initial state is resolved in Recurrent as follows:
if isinstance(inputs, list):
    # the caller passed [inputs, initial_state_1, initial_state_2, ...]
    initial_state = inputs[1:]
    inputs = inputs[0]
elif initial_state is not None:
    # an explicit initial_state argument was given; use it as-is
    pass
elif self.stateful:
    # stateful layer: reuse the states kept from the previous batch
    initial_state = self.states
else:
    # otherwise fall back to all-zero initial states
    initial_state = self.get_initial_state(inputs)
5. Recurrent's get_initial_state function simply returns an all-zero initial state.
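Conceptually (a numpy sketch of the result, not the backend-op implementation in the source), the zero initial state of an LSTM is one all-zero tensor per state, i.e. one for h and one for c:

import numpy as np

batch, units = 4, 32   # illustrative sizes
initial_state = [np.zeros((batch, units)),   # h_0
                 np.zeros((batch, units))]   # c_0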