WARNING: Logging before flag parsing goes to stderr.
calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
The source code of tf.nn.dropout() looks like this:
def dropout(x, keep_prob=None, noise_shape=None, seed=None, name=None, rate=None):
  '''Excerpt from the official docstring:

  Args:
    x: A floating point tensor.
    keep_prob: (deprecated) A deprecated alias for `(1-rate)`.
    rate: A scalar `Tensor` with the same type as `x`. The probability that each
      element of `x` is discarded.
  '''
  try:
    keep = 1. - keep_prob if keep_prob is not None else None
  except TypeError:
    raise ValueError("keep_prob must be a floating point number or Tensor "
                     "(got %r)" % keep_prob)
  # whichever of rate / keep_prob was actually passed is turned into a drop rate
  rate = deprecation.deprecated_argument_lookup(
      "rate", rate,
      "keep_prob", keep)
  if rate is None:
    raise ValueError("You must provide a rate to dropout.")
  # everything is forwarded to the new v2 implementation
  return dropout_v2(x, rate, noise_shape=noise_shape, seed=seed, name=name)
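In other words, the old entry point only converts keep_prob into a drop rate and forwards everything to dropout_v2. A rough pure-Python sketch of that conversion (a simplified stand-in for deprecation.deprecated_argument_lookup, not TensorFlow's actual helper; _resolve_rate is a made-up name):

def _resolve_rate(rate=None, keep_prob=None):
    # keep_prob is translated into 1 - keep_prob; whichever of the two
    # arguments was actually supplied ends up as the drop rate
    if keep_prob is not None:
        if rate is not None:
            raise ValueError("Cannot set both `rate` and `keep_prob`.")
        return 1. - keep_prob
    if rate is None:
        raise ValueError("You must provide a rate to dropout.")
    return rate

print(_resolve_rate(keep_prob=0.2))   # 0.8 -- keeping 20% means dropping 80%
print(_resolve_rate(rate=0.8))        # 0.8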
Reading that docstring: `rate` is the probability that each element of `x` is discarded, and numerically rate = 1. - keep_prob. In other words, keep_prob used to be a scalar Tensor (same type as x) giving the probability that each element of x is kept, while rate gives the probability that it is dropped. A few examples:
import tensorflow as tf
sess = tf.InteractiveSession()

# set things up
prob_keep = tf.placeholder(tf.float32)   # probability of keeping an element
prob_drop = tf.placeholder(tf.float32)   # probability of dropping an element
x = tf.Variable(tf.ones([10]))

# dropout
y1 = tf.nn.dropout(x, prob_keep)         # equivalent to y1 = tf.nn.dropout(x, keep_prob=prob_keep)
y2 = tf.nn.dropout(x, rate=prob_drop)

init = tf.global_variables_initializer()
sess.run(init)

print(sess.run({'x': x, 'y1': y1},
               feed_dict={prob_keep: 0.2}))
print(sess.run({'x': x, 'y2': y2},
               feed_dict={prob_drop: 0.2}))
The output looks like this:

x  [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
y1 [0., 5.0000005, 0., 5.0000005, 0., 5.0000005, 0., 0., 5.0000005, 5.0000005]
y2 [1.25, 1.25, 1.25, 1.25, 1.25, 1.25, 0., 1.25, 0., 0.]

y1 applies dropout to x with keep_prob = 0.2: 5 elements are zeroed out, and the 5 survivors become the original value 1 * 1/0.2 = 5.
y2 applies dropout to x with rate = 0.2: 3 elements are zeroed out, and the 7 survivors become the original value 1 * 1/(1 - 0.2) = 1.25.
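The scaling is the usual "inverted dropout": survivors are divided by the keep probability so that the expected value of each element stays the same. A minimal NumPy sketch of just that arithmetic (not TensorFlow's implementation, only the same math; the function name is made up here):

import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, rate):
    # keep each element with probability (1 - rate), then rescale the
    # survivors by 1 / (1 - rate) so the expected value is unchanged
    mask = rng.random(x.shape) >= rate
    return np.where(mask, x / (1. - rate), 0.)

x = np.ones(10)
print(inverted_dropout(x, rate=0.8))   # survivors scaled by 1/(1 - 0.8) ≈ 5, like y1 above
print(inverted_dropout(x, rate=0.2))   # survivors scaled by 1/(1 - 0.2) = 1.25, like y2 above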
Running y1 is exactly what triggers the WARNING at the top.

So the next time you see that WARNING, take this line from your tutorial:

h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

and change it to:

h_fc1_drop = tf.nn.dropout(h_fc1, rate=1 - keep_prob)

and the warning is gone. With this fix, the keep_prob values fed in later don't need to change at all.
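Put together as a runnable snippet (h_fc1 below is just a stand-in constant, not the tutorial's real fully connected layer, and 0.5 / 1.0 are the tutorial's usual training / evaluation feed values):

import tensorflow as tf

sess = tf.InteractiveSession()
h_fc1 = tf.ones([8])                     # stand-in for the tutorial's layer output
keep_prob = tf.placeholder(tf.float32)

# same placeholder, same feed values as before -- only the dropout call changes
h_fc1_drop = tf.nn.dropout(h_fc1, rate=1 - keep_prob)

print(sess.run(h_fc1_drop, feed_dict={keep_prob: 0.5}))  # training: about half zeroed, survivors doubled
print(sess.run(h_fc1_drop, feed_dict={keep_prob: 1.0}))  # evaluation: nothing dropped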
Or, if you're a perfectionist, change the line above it as well:

keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

into:

drop_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, rate=drop_prob)
But careful! If you rename the placeholder, the values fed in later during training have to change as well, in these three places:

drop_prob: 0.0})
drop_prob: 0.5})
drop_prob: 0.0})
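A stripped-down, runnable sketch of this second fix (h_fc1 is again a stand-in constant, and the two feed values correspond to the tutorial's evaluation and training steps):

import tensorflow as tf

sess = tf.InteractiveSession()
h_fc1 = tf.ones([8])                     # stand-in for the tutorial's layer output
drop_prob = tf.placeholder(tf.float32)   # renamed placeholder
h_fc1_drop = tf.nn.dropout(h_fc1, rate=drop_prob)

# evaluation: was keep_prob: 1.0, now drop_prob: 0.0 (drop nothing)
print(sess.run(h_fc1_drop, feed_dict={drop_prob: 0.0}))
# training step: was keep_prob: 0.5, now drop_prob: 0.5 (drop about half)
print(sess.run(h_fc1_drop, feed_dict={drop_prob: 0.5}))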