TensorFlow tf.GradientTape(): viewing the parameter update at each epoch

This is a very simple example.

We fit y = x * 3 + 2 (where x is one-dimensional and y is a real number).

import tensorflow as tf

# TensorFlow 1.x: eager execution must be enabled explicitly.
tf.enable_eager_execution()
assert tf.executing_eagerly()

# Synthetic data: y = 3x + 2 plus Gaussian noise.
NUM_EXAMPLES = 1000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

def prediction(inputs, weight, bias):
    # Linear model: y_hat = x * w + b
    return inputs * weight + bias

def loss(weights, biases):
    # Mean squared error over the whole training set.
    error = prediction(training_inputs, weights, biases) - training_outputs
    return tf.reduce_mean(tf.square(error))

def grad(weights, biases):
    # Record the forward pass on the tape, then differentiate
    # the loss with respect to both parameters.
    with tf.GradientTape() as tape:
        loss_value = loss(weights, biases)
    return tape.gradient(loss_value, [weights, biases])

training_steps = 200
learning_rate = 0.01
W = tf.Variable(0.)
B = tf.Variable(0.)

print("Initial loss: {:.3f}".format(loss(W, B)))
for i in range(training_steps):
    dW, dB = grad(W, B)
    # Plain gradient descent: parameter -= learning_rate * gradient
    W.assign_sub(dW * learning_rate)
    B.assign_sub(dB * learning_rate)
    if i % 20 == 0:
        print("Loss at step {:03d}: {:.3f}".format(i, loss(W, B)))
print("Final loss: {:.3f}".format(loss(W, B)))
print("W = {}, B = {}".format(W.numpy(), B.numpy()))

Program output:

Initial loss: 14.057
Loss at step 000: 13.534
Loss at step 020: 6.526
Loss at step 040: 3.425
Loss at step 060: 2.053
Loss at step 080: 1.446
Loss at step 100: 1.177
Loss at step 120: 1.058
Loss at step 140: 1.005
Loss at step 160: 0.982
Loss at step 180: 0.972
Final loss: 0.967
W = 2.950300931930542, B = 1.959231972694397
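
One detail about the grad() helper above: by default a GradientTape releases its resources as soon as tape.gradient() has been called once, so a second call on the same tape raises an error. If you need several gradient calls from one recording, pass persistent=True. A minimal sketch, reusing the loss, W, and B defined above:

with tf.GradientTape(persistent=True) as tape:
    loss_value = loss(W, B)
dW = tape.gradient(loss_value, W)  # first gradient call
dB = tape.gradient(loss_value, B)  # second call only works because persistent=True
del tape  # drop the reference so the tape's resources are released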

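The listing above targets TensorFlow 1.x, where eager execution has to be enabled by hand. In TensorFlow 2.x eager execution is the default and tf.GradientTape is the standard way to take gradients; a minimal sketch of the same fit under TF 2.x (note tf.random.normal replaces tf.random_normal):

import tensorflow as tf

NUM_EXAMPLES = 1000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

W = tf.Variable(0.)
B = tf.Variable(0.)

for i in range(200):
    with tf.GradientTape() as tape:
        error = training_inputs * W + B - training_outputs
        loss_value = tf.reduce_mean(tf.square(error))
    dW, dB = tape.gradient(loss_value, [W, B])
    W.assign_sub(0.01 * dW)  # learning_rate = 0.01, as above
    B.assign_sub(0.01 * dB)

print("W = {}, B = {}".format(W.numpy(), B.numpy()))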