Keras callbacks: the `monitor` parameter shared by EarlyStopping, ReduceLROnPlateau and ModelCheckpoint

monitor — the quantity to be monitored

keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)

EarlyStopping stops training once the monitored quantity has stopped improving (decreasing or increasing), which helps prevent overfitting.
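The stopping rule can be sketched in plain Python. This is a simplified illustration of the idea, not the actual Keras source, and the function name `epochs_run` is my own; `patience` counts consecutive epochs without improvement:

```python
def epochs_run(monitored, patience=2, mode='min'):
    """Return how many epochs training would run before early stopping,
    given the per-epoch values of the monitored quantity."""
    better = (lambda a, b: a < b) if mode == 'min' else (lambda a, b: a > b)
    best = float('inf') if mode == 'min' else float('-inf')
    wait = 0  # epochs since the last improvement
    for epoch, value in enumerate(monitored, start=1):
        if better(value, best):
            best, wait = value, 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs: stop
                return epoch
    return len(monitored)

# val_loss improves for two epochs, then plateaus: training stops at epoch 4
print(epochs_run([1.00, 0.90, 0.95, 0.96, 0.97], patience=2, mode='min'))  # → 4
```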

keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)

ReduceLROnPlateau lowers the learning rate once the monitored quantity has stopped improving; when learning stagnates, models often benefit from reducing the learning rate by a factor of 2–10.
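A plain-Python sketch of the schedule this produces. Again a simplified re-implementation for illustration only (it ignores `cooldown` and `min_delta`), with a function name of my choosing:

```python
def lr_schedule(monitored, lr=1e-3, factor=0.5, patience=2, min_lr=0.0, mode='min'):
    """Return the learning rate in effect after each epoch."""
    better = (lambda a, b: a < b) if mode == 'min' else (lambda a, b: a > b)
    best = float('inf') if mode == 'min' else float('-inf')
    wait, lrs = 0, []
    for value in monitored:
        if better(value, best):
            best, wait = value, 0
        else:
            wait += 1
            if wait >= patience:  # plateau detected: cut the learning rate
                lr = max(lr * factor, min_lr)
                wait = 0
        lrs.append(lr)
    return lrs

# lr is halved each time val_loss fails to improve for 2 consecutive epochs
print(lr_schedule([1.00, 0.90, 0.95, 0.96, 0.91, 0.92],
                  lr=1.0, factor=0.5, patience=2))
# → [1.0, 1.0, 1.0, 0.5, 0.5, 0.25]
```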

keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

ModelCheckpoint saves, during training, the model for which the monitored quantity is best (highest or lowest).
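With `save_best_only=True` the checkpoint file is only overwritten when the monitored value improves. A minimal sketch of that bookkeeping (illustrative only; `saved_epochs` is a made-up name, not a Keras API):

```python
def saved_epochs(monitored, mode='min'):
    """Return the (1-based) epochs at which a save_best_only checkpoint
    would write the model file, i.e. every new best value."""
    better = (lambda a, b: a < b) if mode == 'min' else (lambda a, b: a > b)
    best = float('inf') if mode == 'min' else float('-inf')
    saved = []
    for epoch, value in enumerate(monitored, start=1):
        if better(value, best):
            best = value
            saved.append(epoch)
    return saved

# val_loss reaches new bests at epochs 1, 2 and 4; epoch 3 is not saved
print(saved_epochs([1.0, 0.8, 0.9, 0.7], mode='min'))  # → [1, 2, 4]
```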

All three share the `monitor` parameter, i.e. the monitored quantity described above. It is usually set to 'val_acc' (accuracy on the validation set) or 'val_loss' (loss on the validation set). I searched for a long time without finding any other documented options, but reading the Keras source revealed that `monitor` can be set to a custom value tied to the model's `metrics`:

# Assumed imports for this snippet; load_trained_model_from_checkpoint
# comes from the keras_bert package, and config_path / checkpoint_path
# point to a pretrained BERT checkpoint (not shown here).
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras import backend as K
from keras_bert import load_trained_model_from_checkpoint

def score(y_true, y_pred):
    # Custom metric: higher is better, equals 1.0 for a perfect prediction
    return 1.0 / (1 + K.mean(K.abs(y_true - y_pred)))

def get_model():
    bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path)
    for l in bert_model.layers:
        l.trainable = True
    T1 = Input(shape=(None,))  # token ids
    T2 = Input(shape=(None,))  # segment ids
    T = bert_model([T1, T2])
    T = Lambda(lambda x: x[:, 0])(T)  # take the [CLS] vector
    output = Dense(4, activation='softmax')(T)
    model = Model([T1, T2], output)
    model.compile(
        loss='categorical_crossentropy',
        optimizer=Adam(1e-5),
        metrics=[score]
    )
    model.summary()
    return model

Taking the model above as an example: `metrics` contains my custom `score` function, so the `monitor` of EarlyStopping, ReduceLROnPlateau and ModelCheckpoint can be set to 'val_score' (and since a higher score is better, `mode='max'`):

early_stopping = EarlyStopping(monitor='val_score', patience=3, mode='max')
plateau = ReduceLROnPlateau(monitor='val_score', verbose=1, mode='max',
                            factor=0.5, patience=2)
checkpoint = ModelCheckpoint(filepath, monitor='val_score', verbose=2,
                             save_best_only=True, save_weights_only=True,
                             mode='max')
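A minimal end-to-end check that a custom metric really is exposed under the name 'val_' plus the function's name. This uses a toy dense model on random data instead of the BERT model above, purely so it runs standalone; the metric is rewritten with `tf` ops but follows the same idea:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def score(y_true, y_pred):
    # same idea as the metric above, written with tf ops
    return 1.0 / (1.0 + tf.reduce_mean(tf.abs(y_true - y_pred)))

# toy data: 64 samples, 10 features, 4 classes
x = np.random.rand(64, 10).astype('float32')
y = keras.utils.to_categorical(np.random.randint(0, 4, size=64), num_classes=4)

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(4, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[score])

callbacks = [
    keras.callbacks.EarlyStopping(monitor='val_score', patience=3, mode='max'),
    keras.callbacks.ReduceLROnPlateau(monitor='val_score', mode='max',
                                      factor=0.5, patience=2),
]
history = model.fit(x, y, validation_split=0.25, epochs=2,
                    verbose=0, callbacks=callbacks)
# the custom metric is tracked as 'score' and 'val_score'
print(sorted(history.history.keys()))
```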
