Automatic Hyperparameter Tuning for Keras

How to Grid Search Hyperparameters for Deep Learning Models in Python With Keras

  • Hyperparameter tuning is tedious and time-consuming. Grid search can cut down that manual effort; this post shows how to use grid search to find the best hyperparameters for a Keras model.
  • This post is adapted from here; for another reference implementation, see here

How to Use Keras Models in scikit-learn

  • A model written in Keras can be wrapped with KerasClassifier or KerasRegressor for use in scikit-learn and trained via fit(), for example:
from keras.wrappers.scikit_learn import KerasClassifier

def create_model():
    ...
    return model

model = KerasClassifier(build_fn=create_model)
  • When wrapping, you can pass any parameters the model needs. They fall into two groups:

    • Training parameters: any argument of the Keras model's fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0). Note that the parameter names must match exactly. For example:
        def create_model():
            ...
            return model
        model = KerasClassifier(build_fn=create_model, epochs=10)
    • Model parameters, such as dropout, kernel_size, and so on. For example:
        def create_model(dropout_rate=0.0):
            ...
            return model
        model = KerasClassifier(build_fn=create_model, dropout_rate=0.2)
  • For more, see the scikit-learn API wrappers

How to Use Grid Search in scikit-learn

  • Grid search is an algorithm for choosing optimal hyperparameters; in practice it is brute-force search. You first define candidate values for each parameter, then enumerate every combination and, according to a scoring rule, pick the best-performing configuration.
  • In scikit-learn, the GridSearchCV class implements grid search for us.
  • By default, accuracy is GridSearchCV's scoring metric; it can be changed via the scoring parameter (see the variant after the example below).
  • param_grid is a dictionary of {parameter name: candidate values}; GridSearchCV evaluates every combination to find the best one. These parameters include training parameters (epochs, batch_size, etc.) as well as model parameters (kernel_size, pool_size, num_filters, and so on).
  • n_jobs defaults to 1, i.e. a single process; setting it to -1 uses the maximum number of processes. (In my experiments, setting it to -1 just hung indefinitely, so all the code below uses n_jobs=1.)
  • GridSearchCV evaluates each model with cross-validation.
  • For more, see sklearn.model_selection.GridSearchCV
# For example
param_grid = dict(epochs=[10,20,30])
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)
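  • If accuracy is not the metric you care about, scoring and cv can also be set explicitly. A minimal variant of the example above (the choice of 'f1' and 5 folds is purely illustrative for a binary task):
# Same grid as above, scored with F1 over 5 CV folds (values are illustrative)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='f1', cv=5, n_jobs=1)
grid_result = grid.fit(X, Y)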

Problem Description

  • Next, a series of examples shows how to tune hyperparameters with GridSearchCV. They all use a small dataset, the Pima Indians onset of diabetes classification dataset: a binary classification task, predicting whether a patient has diabetes. See here for a description of the dataset.
  • Download the dataset and rename it to 'pima-indians-diabetes.csv'

How to Tune Batch Size and Number of Epochs

  • In Keras, the EarlyStopping callback (see here) can monitor the training process, so the choice of epochs may not matter that much (a minimal early-stopping sketch follows the results below).
  • Some models are quite sensitive to batch_size, though, so tuning it is still well worth doing
import numpy as np
from sklearn.model_selection import GridSearchCV
from keras import models
from keras import layers
from keras import optimizers
from keras.wrappers import scikit_learn
# Model-building function, required by KerasClassifier
def create_model():
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation='relu', input_shape=(8,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, verbose=0)

# Candidate parameter values
batch_size = [8,16]
epochs = [10,50]

# Create GridSearchCV and fit
param_grid = dict(batch_size=batch_size, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7799479166666666 using {'batch_size': 8, 'epochs': 50}
0.763021 (0.041504) with: {'batch_size': 8, 'epochs': 10}
0.779948 (0.034104) with: {'batch_size': 8, 'epochs': 50}
0.744792 (0.030647) with: {'batch_size': 16, 'epochs': 10}
0.769531 (0.039836) with: {'batch_size': 16, 'epochs': 50}
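  • As noted above, with EarlyStopping a precise epochs value matters less. A minimal sketch, assuming the same create_model and data as above (the patience value is an arbitrary illustration):
from keras.callbacks import EarlyStopping

# Stop once validation loss has not improved for 5 consecutive epochs (patience is illustrative)
early_stop = EarlyStopping(monitor='val_loss', patience=5)

model = create_model()
# With early stopping in place, a generous epoch budget is safe: training halts itself
model.fit(X, Y, epochs=100, batch_size=8, validation_split=0.2,
          callbacks=[early_stop], verbose=0)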

How to Tune the Training Optimization Algorithm

  • Keras provides many optimization algorithms, such as adam, sgd, rmsprop, and more; see the optimizers documentation.
  • In practice, though, we usually stick with one of them and rarely compare optimizers against each other, so the example below is mainly for illustration and may have little practical value
# Model-building function, required by KerasClassifier
def create_model(optimizer='adam'):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation='relu', input_shape=(8,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
optimizer = ['sgd', 'rmsprop', 'adam', 'adagrad']

# Create GridSearchCV and fit
param_grid = dict(optimizer=optimizer)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7682291666666666 using {'optimizer': 'rmsprop'}
0.765625 (0.037603) with: {'optimizer': 'sgd'}
0.768229 (0.025582) with: {'optimizer': 'rmsprop'}
0.764323 (0.031466) with: {'optimizer': 'adam'}
0.760417 (0.034104) with: {'optimizer': 'adagrad'}

How to Tune Learning Rate and Momentum

  • Keras's SGD optimizer supports learning-rate decay, while the other optimizers are not very sensitive to the learning rate (they adapt fairly automatically), so tuning it may not be that important (a sketch of SGD with decay follows the results below).
  • Tuning the learning rate and momentum as done here applies only to the SGD optimizer
# Model-building function, required by KerasClassifier
def create_model(learning_rate=0.01, momentum=0):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation='relu', input_shape=(8,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    optimizer = optimizers.SGD(lr=learning_rate, momentum=momentum)
    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
learning_rate = [0.001, 0.01]
momentum = [0.0, 0.2, 0.4]
# Create GridSearchCV and fit
param_grid = dict(learning_rate=learning_rate, momentum=momentum)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7747395833333334 using {'learning_rate': 0.01, 'momentum': 0.0}
0.640625 (0.030425) with: {'learning_rate': 0.001, 'momentum': 0.0}
0.692708 (0.025780) with: {'learning_rate': 0.001, 'momentum': 0.2}
0.686198 (0.017566) with: {'learning_rate': 0.001, 'momentum': 0.4}
0.774740 (0.035132) with: {'learning_rate': 0.01, 'momentum': 0.0}
0.766927 (0.021710) with: {'learning_rate': 0.01, 'momentum': 0.2}
0.769531 (0.033299) with: {'learning_rate': 0.01, 'momentum': 0.4}
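  • As mentioned above, SGD also supports time-based learning-rate decay. A minimal sketch of compiling with a decaying schedule (the decay value is an arbitrary illustration; in this Keras version each update effectively uses lr / (1 + decay * iterations)):
# SGD with time-based learning-rate decay (values are illustrative)
sgd = optimizers.SGD(lr=0.01, momentum=0.9, decay=1e-4)
model = models.Sequential()
model.add(layers.Dense(12, activation='relu', input_shape=(8,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['acc'])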

How to Tune Network Weight Initialization

  • For deeper networks, weight initialization matters a great deal: a good initialization converges quickly and reaches a higher score.
  • Keras provides many initialization methods, the best known being Kaiming He's method and the Glorot method; see the initializers documentation for more
# Model-building function, required by KerasClassifier
def create_model(init_mode='random_uniform'):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation='relu', kernel_initializer=init_mode, input_shape=(8,)))
    model.add(layers.Dense(1, activation='sigmoid', kernel_initializer=init_mode))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
init_mode = ['he_normal', 'he_uniform', 'glorot_normal', 'glorot_uniform', 'lecun_normal']
# Create GridSearchCV and fit
param_grid = dict(init_mode=init_mode)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7760416666666666 using {'init_mode': 'he_normal'}
0.776042 (0.024360) with: {'init_mode': 'he_normal'}
0.764323 (0.025976) with: {'init_mode': 'he_uniform'}
0.769531 (0.025315) with: {'init_mode': 'glorot_normal'}
0.761719 (0.035943) with: {'init_mode': 'glorot_uniform'}
0.763021 (0.038582) with: {'init_mode': 'lecun_normal'}

How to Tune the Neuron Activation Function

  • Keras likewise provides many activation functions: not just the simple sigmoid, relu, tanh, softmax, and so on, but also advanced activations such as LeakyReLU and PReLU; see Advanced Activations and Activations for more (these are layers rather than strings, so a separate sketch follows the results below).
  • In the example below, we tune only the hidden layer's activation function
# Model-building function, required by KerasClassifier
def create_model(activation='relu'):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation=activation, input_shape=(8,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
activation = ['relu', 'tanh', 'softmax', 'linear', 'hard_sigmoid', 'softplus', 'selu']

# Create GridSearchCV and fit
param_grid = dict(activation=activation)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7786458333333334 using {'activation': 'softplus'}
0.773438 (0.035516) with: {'activation': 'relu'}
0.766927 (0.024774) with: {'activation': 'tanh'}
0.760417 (0.017566) with: {'activation': 'softmax'}
0.774740 (0.032106) with: {'activation': 'linear'}
0.760417 (0.033502) with: {'activation': 'hard_sigmoid'}
0.778646 (0.022628) with: {'activation': 'softplus'}
0.770833 (0.025780) with: {'activation': 'selu'}
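  • Advanced activations such as LeakyReLU are layers rather than strings, so they cannot be passed through the activation grid above. A minimal sketch of the usual pattern (the alpha value is an arbitrary illustration):
from keras.layers import LeakyReLU

model = models.Sequential()
# Leave the Dense layer linear; LeakyReLU is applied as its own layer
model.add(layers.Dense(12, input_shape=(8,)))
model.add(LeakyReLU(alpha=0.1))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])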

How to Tune Dropout

  • Dropout is an effective defense against overfitting; for how to use Dropout in Keras, see Dropout Regularization in Deep Learning Models With Keras
# Model-building function, required by KerasClassifier
def create_model(dropout=0.0):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(12, activation='relu', input_shape=(8,)))
    model.add(layers.Dropout(dropout))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
dropout = [0.2, 0.5]
# Create GridSearchCV and fit
param_grid = dict(dropout=dropout)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7708333333333334 using {'dropout': 0.5}
0.769531 (0.029232) with: {'dropout': 0.2}
0.770833 (0.032264) with: {'dropout': 0.5}

How to Tune the Number of Neurons in the Hidden Layer

  • The number of neurons determines the network's expressive capacity: too many invites overfitting, too few underfits, which makes this a tricky hyperparameter to get right
# Model-building function, required by KerasClassifier
def create_model(num_neurons=1):
    # create model
    model = models.Sequential()
    model.add(layers.Dense(num_neurons, activation='relu', input_shape=(8,)))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

# Load the data
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
# Split into inputs X and targets Y
X = dataset[:, :8]
Y = dataset[:, 8]
# Standardize (zero mean, unit variance)
means = np.mean(X, axis=0)
X -= means
stds = np.std(X, axis=0)
X /= stds

# Set a seed for reproducibility (not essential)
seed = 7
np.random.seed(seed)

# Wrap the model for scikit-learn
model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=0)

# Candidate parameter values
num_neurons = [1, 5, 10, 15, 20]

# Create GridSearchCV and fit
param_grid = dict(num_neurons=num_neurons)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)

# Print the results
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']

for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
Best: 0.7708333333333334 using {'num_neurons': 10}
0.651042 (0.024774) with: {'num_neurons': 1}
0.757812 (0.019918) with: {'num_neurons': 5}
0.770833 (0.038450) with: {'num_neurons': 10}
0.769531 (0.027251) with: {'num_neurons': 15}
0.764323 (0.032734) with: {'num_neurons': 20}
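  • In practice these hyperparameters are rarely tuned one at a time; a combined search is just a larger param_grid. A minimal sketch reusing the pieces above (beware that the number of fits grows multiplicatively: here 2 x 2 x 2 combinations, each cross-validated):
# Model-building function exposing two tunable model parameters
def create_model(dropout=0.0, num_neurons=10):
    model = models.Sequential()
    model.add(layers.Dense(num_neurons, activation='relu', input_shape=(8,)))
    model.add(layers.Dropout(dropout))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    return model

model = scikit_learn.KerasClassifier(build_fn=create_model, epochs=20, verbose=0)
# batch_size is a training parameter; dropout and num_neurons are model parameters
param_grid = dict(batch_size=[8, 16], dropout=[0.2, 0.5], num_neurons=[10, 20])
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, Y)
print('Best: {} using {}'.format(grid_result.best_score_, grid_result.best_params_))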
