Solving overfitting with dropout in TensorFlow
Today we will talk about overfitting in machine learning and how to deal with it.
Overfitting is also called over-learning or over-fitting, and it is a common problem in machine learning. Let's start with a classification example.
In the figure, the black curve is a normal model and the green curve is an overfitted one. Although the green curve separates every training sample perfectly, it fails to capture the overall structure of the data, so it adapts poorly to new test data.
Now a regression example:
The third curve suffers from overfitting: although it passes through every training point, it does not reflect the underlying trend of the data, so its predictive power is poor. TensorFlow provides a powerful dropout method to tackle overfitting.
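Before walking through the full example, here is a minimal sketch (not from the original tutorial, just an illustration of the TensorFlow 1.x API) of what tf.nn.dropout does: each element is zeroed with probability 1 - keep_prob, and the surviving elements are scaled up by 1 / keep_prob so that the expected total activation stays the same.
import tensorflow as tf
# Illustration only: tf.nn.dropout in TensorFlow 1.x.
x = tf.ones([1, 10])                    # a row of ten 1.0 values
keep_prob = tf.placeholder(tf.float32)  # probability of keeping each element
dropped = tf.nn.dropout(x, keep_prob)   # kept values are scaled by 1 / keep_prob
with tf.Session() as sess:
    # With keep_prob=0.5, roughly half the entries become 0 and the rest become 2.0.
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))
    # With keep_prob=1.0, dropout is a no-op and every entry stays 1.0.
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))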
"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
import tensorflow as tf
from sklearn.datasets import load_digits
#from sklearn.cross_validation import train_test_split  # removed in newer sklearn versions
from sklearn.model_selection import train_test_split  # use this import instead
from sklearn.preprocessing import LabelBinarizer
# load data
digits = load_digits()
X = digits.data
y = digits.target
y = LabelBinarizer().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)
def add_layer(inputs, in_size, out_size, layer_name, activation_function=None):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    # here to dropout
    # This is the key point!
    # keep_prob is the keep probability, i.e. the fraction of activations we keep.
    # It is a placeholder whose value is fed in at run time; when keep_prob=1,
    # 100% of the activations are kept and dropout has no effect.
    Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs
# define placeholder for inputs to network
keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 64]) # 8x8
ys = tf.placeholder(tf.float32, [None, 10])
# add hidden layer and output layer
l1 = add_layer(xs, 64, 60, 'l1', activation_function=tf.nn.tanh)
prediction = add_layer(l1, 60, 10, 'l2', activation_function=tf.nn.softmax)
# the loss between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))  # loss
tf.summary.scalar('loss', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
merged = tf.summary.merge_all()
# summary writer goes in here
train_writer = tf.summary.FileWriter("logs/train", sess.graph)
test_writer = tf.summary.FileWriter("logs/test", sess.graph)
sess.run(tf.global_variables_initializer())
for i in range(500):
    # here to determine the keeping probability
    sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})
    if i % 50 == 0:
        # record loss
        train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1})
        test_result = sess.run(merged, feed_dict={xs: X_test, ys: y_test, keep_prob: 1})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)
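The summaries above only record the cross-entropy loss. If you also want a single train/test accuracy number to compare the two settings, one possible addition at the end of the script (not part of the original tutorial, just a sketch that reuses the tensors already defined above) is:
# Optional: compare final train/test accuracy (illustrative addition, not in the original code).
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(ys, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('train accuracy:', sess.run(accuracy, feed_dict={xs: X_train, ys: y_train, keep_prob: 1}))
print('test accuracy:', sess.run(accuracy, feed_dict={xs: X_test, ys: y_test, keep_prob: 1}))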
Training with keep_prob=1 exposes the overfitting problem; with keep_prob=0.5, dropout takes effect. We can run the program with each setting and compare the results.
With keep_prob=1, the model fits the training data noticeably better than the test data, which means it is overfitting. The output looks like this: the red line is the train error and the blue line is the test error.
With keep_prob=0.5 the result is much better:
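To view the red/blue error curves yourself, launch TensorBoard on the log directory written by the FileWriters above (a minimal sketch, assuming you run it from the directory containing logs/; the page is usually served at http://localhost:6006):
tensorboard --logdir=logs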
The program uses TensorBoard to visualize the results; you can refer to the earlier tutorials: