Visualize the network structure and its runtime state by hooking TensorFlow's TensorBoard into training:
from keras.callbacks import TensorBoard

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.1,
          callbacks=[predictEpochCallback, TensorBoard(log_dir='./log')])
Then run TensorBoard to inspect the run information written to the log directory:
tensorboard --logdir=./log
from keras.callbacks import Callback

class PredictEpochCallback(Callback):
    def on_train_begin(self, logs={}):
        print("begin train")

    def on_epoch_end(self, epoch, logs={}):
        # Take the first 10 sequences (part of the training set)
        # and try decoding them, to monitor progress between epochs.
        for seq_index in range(10):
            input_seq = encoder_input_data[seq_index: seq_index + 1]
            print('input_seq.shape =', input_seq.shape)
            decoded_sentence = decode_sequence(input_seq)
            print('epoch=' + str(epoch) + ' input=' + input_texts[seq_index]
                  + ' decoded output=' + decoded_sentence)

        # Save the whole model (architecture + weights) after each epoch.
        print('saving s2s h5')
        model.save('s2s.h5.' + str(epoch))

        # Also save the architecture alone as JSON.
        model_json = model.to_json()
        with open("model.json." + str(epoch), "w") as json_file:
            print('saving s2s json')
            json_file.write(model_json)
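The way fit() drives such a callback can be sketched with a tiny plain-Python training loop. This is a simplified illustration, not Keras's actual internals; run_training and EpochLogger are hypothetical names:

```python
class Callback:
    """Minimal stand-in for keras.callbacks.Callback."""
    def on_train_begin(self, logs=None):
        pass
    def on_epoch_end(self, epoch, logs=None):
        pass

class EpochLogger(Callback):
    """Records which hooks fired, mirroring PredictEpochCallback's shape."""
    def __init__(self):
        self.events = []
    def on_train_begin(self, logs=None):
        self.events.append('train_begin')
    def on_epoch_end(self, epoch, logs=None):
        self.events.append(('epoch_end', epoch, logs['loss']))

def run_training(callbacks, epochs):
    # Hypothetical driver: fit() invokes the hooks in this order,
    # on_train_begin once, then on_epoch_end after every epoch.
    for cb in callbacks:
        cb.on_train_begin()
    for epoch in range(epochs):
        loss = 1.0 / (epoch + 1)  # dummy "training" result for the sketch
        for cb in callbacks:
            cb.on_epoch_end(epoch, logs={'loss': loss})

logger = EpochLogger()
run_training([logger], epochs=3)
# logger.events now holds 'train_begin' followed by one epoch_end per epoch
```

Because the hooks receive the epoch index and the logs dict, a callback like PredictEpochCallback can decode samples, checkpoint the model, or log metrics without touching the training loop itself.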
When learning stagnates, reducing the learning rate by a factor of 2 or 10 often helps. The ReduceLROnPlateau callback monitors a metric; if no improvement is seen for `patience` epochs, it reduces the learning rate. Example usage:

from keras.callbacks import ReduceLROnPlateau

reduce_lr_callback = ReduceLROnPlateau(monitor='loss', factor=0.5,
                                       patience=1, min_lr=0.00001)
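The plateau logic can be sketched in plain Python. This is a simplified reimplementation of the idea, not Keras's actual code; PlateauScheduler is a hypothetical name:

```python
class PlateauScheduler:
    """Reduce lr by `factor` when the monitored value has not improved
    for `patience` epochs (simplified sketch of ReduceLROnPlateau)."""

    def __init__(self, lr, factor=0.5, patience=1, min_lr=1e-5):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_lr = min_lr
        self.best = float('inf')  # assumes a loss-like metric (lower is better)
        self.wait = 0             # epochs since the last improvement

    def on_epoch_end(self, monitored_value):
        if monitored_value < self.best:
            self.best = monitored_value
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # shrink the learning rate, but never below min_lr
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = PlateauScheduler(lr=0.01, factor=0.5, patience=1)
for loss in [1.0, 0.9, 0.9, 0.9, 0.8]:  # loss plateaus twice, then improves
    lr = sched.on_epoch_end(loss)
# lr has been halved twice: 0.01 -> 0.005 -> 0.0025
```

The `min_lr` floor prevents the rate from shrinking indefinitely once the metric stops improving for good.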
Reference: http://keras-cn.readthedocs.io/en/latest/other/callbacks/