TensorFlow: loss, dropout, AdamOptimizer

Cost functions: quadratic (MSE) and softmax cross-entropy

loss = tf.reduce_mean(tf.square(y - prediction))  # quadratic (MSE) cost
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))  # cross-entropy cost; prediction must be raw logits, not softmax output
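The math these two TF ops implement can be sketched in plain NumPy (a minimal sketch, assuming one-hot `labels` and raw `logits`; the function names here are illustrative, not TF API):

```python
import numpy as np

def mse(y, prediction):
    # Quadratic cost: mean of squared differences, like tf.reduce_mean(tf.square(y - prediction))
    return np.mean((y - prediction) ** 2)

def softmax_cross_entropy(labels, logits):
    # Numerically stable log-softmax: subtract the row max before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross-entropy per example, averaged over the batch
    return np.mean(-(labels * log_softmax).sum(axis=1))
```

For a two-class example with uniform logits `[0, 0]` and true label `[1, 0]`, the cross-entropy is `ln 2 ≈ 0.693`, the loss of a maximally uncertain classifier.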

Optimizers: gradient descent and Adam

train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)  # plain gradient descent, learning rate 0.2
train_step = tf.train.AdamOptimizer(1e-2).minimize(loss)  # Adam typically needs a smaller learning rate
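What these optimizers do per step can be sketched in NumPy (a sketch under stated assumptions: a scalar weight, Adam's default hyperparameters, and the toy loss `(w - 3)^2`; the helper names are illustrative):

```python
import numpy as np

def sgd_step(w, g, lr=0.2):
    # Plain gradient descent: w <- w - lr * g
    return w - lr * g

def adam_step(w, g, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (m) and its square (v),
    # with bias correction for the first steps (t starts at 1)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Because Adam divides by the running gradient magnitude, each step is roughly `lr` in size regardless of the gradient's scale, which is why it is usually run with a smaller learning rate than plain gradient descent.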

Dropout

drop = tf.nn.dropout(a, keep_prob)  # TF 1.x API: keep_prob is the probability each activation is kept
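The behavior of this op can be sketched in NumPy as inverted dropout (a minimal sketch: surviving activations are scaled by `1 / keep_prob` so the expected value is unchanged, matching what `tf.nn.dropout` does; the function name is illustrative):

```python
import numpy as np

def dropout(a, keep_prob, rng):
    # Keep each element with probability keep_prob, zero it otherwise,
    # and scale survivors by 1 / keep_prob so E[output] == input
    mask = rng.random(a.shape) < keep_prob
    return np.where(mask, a / keep_prob, 0.0)
```

With `keep_prob = 1.0` the input passes through unchanged, which is why dropout is typically disabled (keep_prob set to 1) at test time.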
