tf.nn.dynamic_rnn
Creates a recurrent neural network from the given RNNCell, performing fully dynamic unrolling over the inputs.
tf.nn.dynamic_rnn(
    cell,
    inputs,
    sequence_length=None,
    initial_state=None,
    dtype=None,
    parallel_iterations=None,
    swap_memory=False,
    time_major=False,
    scope=None
)
Parameters:
- cell: an instance of RNNCell.
- inputs: the RNN inputs. If time_major == False (the default), a Tensor of shape [batch_size, max_time, ...]; if time_major == True, a Tensor of shape [max_time, batch_size, ...].
- sequence_length: (optional) an int32/int64 vector of size [batch_size]. Past each sequence's length, outputs are zeroed out and the state is copied through unchanged.
- initial_state: (optional) an initial state for the RNN.
- dtype: (optional) the data type of the initial state and expected output; required if initial_state is not provided.
- parallel_iterations: (default: 32) the number of iterations to run in parallel. Larger values use more memory but run faster; smaller values use less memory.
- swap_memory: transparently swap tensors produced in the forward pass from GPU to CPU, allowing longer sequences to be trained at some speed cost.
- time_major: the shape format of the inputs and outputs tensors (see above).
- scope: VariableScope for the created subgraph; defaults to "rnn".
Returns:
A pair (outputs, state), where:
- outputs: the RNN output Tensor. If time_major == False, shaped [batch_size, max_time, cell.output_size]; if time_major == True, shaped [max_time, batch_size, cell.output_size].
- state: the final state. If cell.state_size is an int, shaped [batch_size, cell.state_size]; if it is a TensorShape, shaped [batch_size] + cell.state_size.
Note: if cell.output_size is a (possibly nested) tuple of integers or TensorShape objects, then outputs will be a tuple with the same structure as cell.output_size, containing tensors whose shapes match cell.output_size.
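The effect of sequence_length can be illustrated with a small NumPy sketch. The helper dynamic_unroll below is hypothetical (a toy tanh step stands in for the cell), but it mirrors the documented behavior: past each sequence's length, the emitted output is zero and the state is copied through unchanged.

```python
import numpy as np

def dynamic_unroll(inputs, seq_len, step, state):
    # Mimics dynamic_rnn's sequence_length handling (time-major layout):
    # once t >= seq_len[b], batch element b emits zeros and keeps its state.
    max_time = inputs.shape[0]
    outputs = np.zeros((max_time,) + state.shape)
    for t in range(max_time):
        new_state = step(inputs[t], state)
        active = (t < seq_len)[:, None]              # which sequences still run
        state = np.where(active, new_state, state)   # copy-through past the end
        outputs[t] = np.where(active, new_state, 0.0)
    return outputs, state

rng = np.random.default_rng(0)
max_time, batch_size, depth, hidden = 6, 3, 4, 5
W = rng.standard_normal((depth, hidden)) * 0.1
U = rng.standard_normal((hidden, hidden)) * 0.1
step = lambda x, h: np.tanh(x @ W + h @ U)           # toy RNN cell

inputs = rng.standard_normal((max_time, batch_size, depth))
seq_len = np.array([6, 2, 4])
outputs, state = dynamic_unroll(inputs, seq_len, step,
                                np.zeros((batch_size, hidden)))
print(outputs.shape)                # (6, 3, 5)
print(np.abs(outputs[4, 1]).max())  # 0.0 -- sequence 1 ended at t = 2
```

Because the state is copied through, the final state of a short sequence equals its last in-range output, which is what makes sequence_length preferable to manually slicing padded outputs.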
Code example 1:
import tensorflow as tf

batch_size = 10    # batch size
hidden_size = 100  # number of hidden units
max_time = 40      # maximum number of time steps
depth = 400        # input dimensionality, e.g. the word-embedding size
input_data = tf.Variable(tf.random_normal([max_time, batch_size, depth]))
# create a BasicRNNCell
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
# define the initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# with time_major=True, 'outputs' has shape [max_time, batch_size, cell_state_size]
# and 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(rnn_cell, input_data,
                                   initial_state=initial_state,
                                   dtype=tf.float32, time_major=True)
print(outputs.shape)  # (40, 10, 100)
print(state.shape)    # (10, 100)
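As a cross-check on the shapes above, here is a NumPy sketch of the recurrence a BasicRNNCell computes, h_t = tanh(x_t W + h_{t-1} U + b), unrolled in time-major order (the weight names W, U, b are hypothetical):

```python
import numpy as np

max_time, batch_size, depth, hidden_size = 40, 10, 400, 100
rng = np.random.default_rng(0)
inputs = rng.standard_normal((max_time, batch_size, depth))  # time-major input

# Hypothetical BasicRNNCell weights: h_t = tanh(x_t @ W + h_{t-1} @ U + b)
W = rng.standard_normal((depth, hidden_size)) * 0.01
U = rng.standard_normal((hidden_size, hidden_size)) * 0.01
b = np.zeros(hidden_size)

h = np.zeros((batch_size, hidden_size))             # zero_state
outputs = np.empty((max_time, batch_size, hidden_size))
for t in range(max_time):
    h = np.tanh(inputs[t] @ W + h @ U + b)          # one recurrent step
    outputs[t] = h                                  # BasicRNNCell: output == state

print(outputs.shape)  # (40, 10, 100)
print(h.shape)        # (10, 100)
```

The final state is just the last output slice, which matches the (40, 10, 100) / (10, 100) shapes printed in the example.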
Code example 2:
import tensorflow as tf

batch_size = 10    # batch size
hidden_size = 100  # number of hidden units
max_time = 40      # maximum number of time steps
depth = 400        # input dimensionality, e.g. the word-embedding size
input_data = tf.Variable(tf.random_normal([max_time, batch_size, depth]))
# create a BasicLSTMCell
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
# define the initial state
initial_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
# with time_major=True, 'outputs' has shape [max_time, batch_size, cell_state_size];
# 'state' is an LSTMStateTuple (c, h) of tensors shaped [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(lstm_cell, input_data,
                                   initial_state=initial_state,
                                   dtype=tf.float32, time_major=True)
print(outputs.shape)  # (40, 10, 100)
print(state.c)  # Tensor("rnn_4/while/Exit_3:0", shape=(10, 100), dtype=float32)
print(state.h)  # Tensor("rnn_4/while/Exit_4:0", shape=(10, 100), dtype=float32)
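The LSTMStateTuple printed above can be reproduced in outline with NumPy: an LSTM keeps both a cell state c and a hidden state h, which is why state exposes .c and .h. A sketch with hypothetical weight names, using the gate order (input i, candidate g, forget f, output o) and forget_bias=1.0 of TF's BasicLSTMCell:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

max_time, batch_size, depth, hidden_size = 40, 10, 400, 100
rng = np.random.default_rng(0)
inputs = rng.standard_normal((max_time, batch_size, depth))  # time-major input

# One weight matrix produces all four gates: input i, candidate g, forget f, output o
W = rng.standard_normal((depth + hidden_size, 4 * hidden_size)) * 0.01
b = np.zeros(4 * hidden_size)

c = np.zeros((batch_size, hidden_size))  # zero_state: cell state  (state.c)
h = np.zeros((batch_size, hidden_size))  # zero_state: hidden state (state.h)
outputs = np.empty((max_time, batch_size, hidden_size))
for t in range(max_time):
    z = np.concatenate([inputs[t], h], axis=1) @ W + b
    i, g, f, o = np.split(z, 4, axis=1)
    c = c * sigmoid(f + 1.0) + sigmoid(i) * np.tanh(g)   # forget_bias = 1.0
    h = sigmoid(o) * np.tanh(c)
    outputs[t] = h

print(outputs.shape)  # (40, 10, 100)
print(c.shape)        # (10, 100) -- corresponds to state.c
print(h.shape)        # (10, 100) -- corresponds to state.h
```

Note that outputs collects h at every step, while state carries both c and h only for the final step; that is why example 2 prints state.c and state.h rather than state.shape.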