The Effect of Normalization on an MLP

Using the Boston housing price dataset as an example.

# Boston housing price data
from keras.datasets import boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

Without any normalization, the basic statistics of the data look like this:
[Figure: per-feature summary statistics of the raw data]
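These numbers can be reproduced with pandas (a quick sketch, assuming pandas is installed; x_train comes from the loading code above):

import pandas as pd

# Per-feature summary statistics of the raw training data.
# The 13 features sit on very different scales (some around 0.5, others in
# the hundreds), which is the motivation for normalizing before training.
print(x_train.shape, x_test.shape)       # (404, 13) (102, 13) with the default split
print(pd.DataFrame(x_train).describe())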

Throw together a quick MLP:

# Custom R^2 metric for Keras (written with the backend so it runs on tensors)
from keras import backend as K

def keras_r2(y_true, y_pred):
    y_mean = K.mean(y_true)
    sstotal = K.sum((y_true - y_mean) ** 2)  # total sum of squares (denominator)
    ssres = K.sum((y_true - y_pred) ** 2)    # residual sum of squares (numerator)
    score = 1 - (ssres / sstotal)
    return score
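# Quick sanity check (not from the original post): on constant tensors the
# custom metric should match sklearn's r2_score.
import numpy as np
from sklearn.metrics import r2_score

_y_true = np.array([3.0, 2.5, 4.0, 5.1])
_y_pred = np.array([2.8, 2.7, 4.2, 4.9])
print(K.eval(keras_r2(K.constant(_y_true), K.constant(_y_pred))))  # custom metric
print(r2_score(_y_true, _y_pred))                                  # reference value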
    
# Model: a simple two-hidden-layer MLP
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=13))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='relu'))

optimizer = Adam()
model.compile(loss='mse',
              optimizer=optimizer,
              metrics=[keras_r2])

Train for 5000 epochs:

model.fit(x_train, y_train,
          epochs=5000,
          batch_size=128)
model.evaluate(x_test, y_test, batch_size=128)

Without normalization, the test-set score (the keras_r2 metric, so higher is better) is:

eval_score = 0.588

Apply min-max rescaling (MinMaxScaler maps each feature to [0, 1]):

# rescale: fit the scaler on the training set only, then transform both sets
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(x_train)
x_train2 = scaler.transform(x_train)
x_test2 = scaler.transform(x_test)
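The score reported below comes from retraining on the rescaled data; a minimal sketch of that step, assuming the same architecture and training setup are simply rebuilt and refit from scratch:

# Rebuild the same MLP so the weights start fresh, then train and
# evaluate on the rescaled data.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=13))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='relu'))
model.compile(loss='mse', optimizer=Adam(), metrics=[keras_r2])

model.fit(x_train2, y_train, epochs=5000, batch_size=128)
model.evaluate(x_test2, y_test, batch_size=128)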

After rescaling, the test-set score improves to:

eval_score = 0.799

Apply standardization (z-score normalization):
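The post doesn't show the standardization code; here is a minimal sketch using sklearn's StandardScaler (the x_train3 / x_test3 names are mine), followed by the same retrain-and-evaluate step as above:

# standardization (z-score): zero mean, unit variance per feature,
# again fitting the scaler on the training set only
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train3 = scaler.transform(x_train)
x_test3 = scaler.transform(x_test)
# retrain the same MLP on x_train3 and evaluate on x_test3, as before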

With standardization, it improves further:

eval_score = 0.837
