GRU
keras.layers.recurrent.GRU(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)
Gated Recurrent Unit - Cho et al. 2014.
Input shape: 3D tensor with shape: (nb_samples, timesteps, input_dim).
Output shape:
- if return_sequences: 3D tensor with shape: (nb_samples, timesteps, output_dim).
- else: 2D tensor with shape: (nb_samples, output_dim).
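Example (a minimal usage sketch, assuming the early-Keras Sequential API these signatures belong to; the dimensions are hypothetical):

from keras.models import Sequential
from keras.layers.recurrent import GRU

model = Sequential()
# Input: (nb_samples, timesteps, 64); output: (nb_samples, 128), since
# return_sequences=False keeps only the output of the last timestep.
model.add(GRU(input_dim=64, output_dim=128, return_sequences=False))
model.compile(loss='mse', optimizer='rmsprop')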
LSTM
keras.layers.recurrent.LSTM(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)
Long Short-Term Memory unit - Hochreiter et al. 1997
Input shape: 3D tensor with shape: (nb_samples, timesteps, input_dim).
Output shape:
- if return_sequences: 3D tensor with shape: (nb_samples, timesteps, output_dim).
- else: 2D tensor with shape: (nb_samples, output_dim).
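Example (a stacked-LSTM sketch with hypothetical dimensions, again assuming the early-Keras Sequential API). The first layer sets return_sequences=True so that its 3D output matches the 3D input the second layer expects:

from keras.models import Sequential
from keras.layers.recurrent import LSTM

model = Sequential()
model.add(LSTM(input_dim=16, output_dim=32, return_sequences=True))   # (nb_samples, timesteps, 32)
model.add(LSTM(input_dim=32, output_dim=32, return_sequences=False))  # (nb_samples, 32)
model.compile(loss='mse', optimizer='rmsprop')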
SimpleRNN
keras.layers.recurrent.SimpleRNN(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)
Fully-connected RNN where the output is fed back into the input. Not a particularly useful model; included for reference.
Input shape: 3D tensor with shape: (nb_samples, timesteps, input_dim).
Output shape:
- if return_sequences: 3D tensor with shape: (nb_samples, timesteps, output_dim).
- else: 2D tensor with shape: (nb_samples, output_dim).
weights: list of numpy arrays with shapes [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)].
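Example (a sketch of passing initial weights matching the shape list above; the sizes are hypothetical and the initialization scheme is only for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import SimpleRNN

input_dim, output_dim = 32, 64
W = np.random.uniform(-0.05, 0.05, (input_dim, output_dim))  # input projection
U = np.identity(output_dim)                                  # recurrent matrix
b = np.zeros((output_dim,))                                  # bias

model = Sequential()
model.add(SimpleRNN(input_dim=input_dim, output_dim=output_dim,
                    weights=[W, U, b], return_sequences=True))
model.compile(loss='mse', optimizer='sgd')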
SimpleDeepRNN
keras.layers.recurrent.SimpleDeepRNN(input_dim, output_dim, depth=3, init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)
Fully-connected RNN where the outputs of several timesteps are fed back into the input (the depth parameter controls how many steps back):
output = activation( W.x_t + b + inner_activation(U_1.h_tm1) + inner_activation(U_2.h_tm2) + ... )
Also not a commonly used model; included for reference.
Input shape: 3D tensor with shape: (nb_samples, timesteps, input_dim).
Output shape:
- if return_sequences: 3D tensor with shape: (nb_samples, timesteps, output_dim).
- else: 2D tensor with shape: (nb_samples, output_dim).
weights: list of numpy arrays with shapes [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)].
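Example (a sketch with hypothetical sizes, assuming the same early-Keras API): with depth=3, the activation at time t combines the input x_t with the recurrent terms from h_tm1, h_tm2 and h_tm3, per the formula above.

from keras.models import Sequential
from keras.layers.recurrent import SimpleDeepRNN

model = Sequential()
model.add(SimpleDeepRNN(input_dim=16, output_dim=32, depth=3,
                        return_sequences=False))  # output: (nb_samples, 32)
model.compile(loss='mse', optimizer='sgd')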