Getting Started with TensorFlow (3): Backpropagation

Environment: Ubuntu 16.04, Python 3, TensorFlow-gpu v1.9.0, CUDA 9.0

Backpropagation

Backpropagation means training the model's parameters: gradient descent is applied to all parameters so as to minimize the model's loss function on the training data.

The first step in implementing a neural network is to prepare the dataset. Besides the feature values, the dataset must also contain the label (the known answer) corresponding to each group of features. These known ⟨feature, label⟩ data pairs are what we use as the basis for tuning the parameters.

  • Loss function (loss): the gap between the predicted value $y$ and the known answer $y\_$
  • Mean squared error (MSE): $MSE(y\_, y) = \frac{\sum_{i=1}^{n} (y - y\_)^2}{n}$
    In code:
loss = tf.reduce_mean(tf.square(y_ - y))
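
To sanity-check the formula, the TF expression computes exactly the mean of squared differences; a minimal NumPy sketch with made-up values (not from the original):

import numpy as np

# Hypothetical labels and predictions, just to illustrate the MSE formula
y_true = np.array([[1.0], [0.0], [1.0]])   # known answers (y_)
y_pred = np.array([[0.9], [0.2], [0.8]])   # predicted values (y)

# Mean of squared differences, the same quantity as tf.reduce_mean(tf.square(y_ - y))
mse = np.mean(np.square(y_true - y_pred))
print(mse)   # (0.01 + 0.04 + 0.04) / 3 = 0.03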

The goal of backpropagation training is to reduce the loss value.
TensorFlow provides optimizers such as:

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)      # gradient descent
train_step = tf.train.MomentumOptimizer(learning_rate, momentum).minimize(loss)   # gradient descent with momentum
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)                 # Adam (adaptive moment estimation)

where learning_rate is the learning rate.
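
As an aside, in the TF 1.x API minimize(loss) bundles gradient computation and the parameter update; the same training op can be written in two explicit steps (a minimal sketch):

# Spelled-out equivalent of optimizer.minimize(loss) in TF 1.x
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)      # list of (gradient, variable) pairs
train_step = optimizer.apply_gradients(grads_and_vars)  # applies w <- w - learning_rate * gradient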

  • Learning rate: the magnitude by which the parameters are updated at each step (see the sketch below)
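
To make the role of the learning rate concrete, here is a toy hand-written gradient descent on a one-variable function (a hypothetical example, not part of the TensorFlow code):

# Minimize f(w) = (w - 3)^2 by hand; the gradient is f'(w) = 2 * (w - 3)
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)          # gradient of the loss at the current w
    w -= learning_rate * grad   # update magnitude = learning_rate * gradient
print(w)   # close to 3; a larger learning_rate takes bigger steps but may overshoot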

Backpropagation Example

#coding:utf-8
import tensorflow as tf
import numpy as np


BATCH_SIZE = 8          # number of samples fed to the NN per step
seed = 23455            # random seed
learning_rate = 0.001   # learning rate

# 0
# Generate random numbers based on seed
rng = np.random.RandomState(seed)
# The random numbers form a 32×2 matrix (32 groups of volume and weight) used as the input data
X = rng.rand(32, 2)
# For each row of X (one group of volume and weight): if the sum is less than 1, then Y=1; otherwise Y=0
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print("X:\n", X)
print("Y:\n", Y)

# 1
# Define the network's inputs, parameters, and outputs, and the forward propagation process
x = tf.placeholder(tf.float32, shape=(None, 2))
y_= tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

# 2
# Define the loss function and the backpropagation algorithm
loss = tf.reduce_mean(tf.square(y-y_))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) 
# train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)

# 3
# Create a session and train for STEPS rounds
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # Print the parameter values before training
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")

    # Train the model
    STEPS = 3000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})  # each step trains on the slice from start to end
        if i % 500 == 0:    # print the loss every 500 steps
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d training step(s), loss on all data is %g" % (i, total_loss))


    # Print the parameter values after training
    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))

The output is:
X:
[[0.83494319 0.11482951]
[0.66899751 0.46594987]
[0.60181666 0.58838408]
[0.31836656 0.20502072]
[0.87043944 0.02679395]
[0.41539811 0.43938369]
[0.68635684 0.24833404]
[0.97315228 0.68541849]
[0.03081617 0.89479913]
[0.24665715 0.28584862]
[0.31375667 0.47718349]
[0.56689254 0.77079148]
[0.7321604 0.35828963]
[0.15724842 0.94294584]
[0.34933722 0.84634483]
[0.50304053 0.81299619]
[0.23869886 0.9895604 ]
[0.4636501 0.32531094]
[0.36510487 0.97365522]
[0.73350238 0.83833013]
[0.61810158 0.12580353]
[0.59274817 0.18779828]
[0.87150299 0.34679501]
[0.25883219 0.50002932]
[0.75690948 0.83429824]
[0.29316649 0.05646578]
[0.10409134 0.88235166]
[0.06727785 0.57784761]
[0.38492705 0.48384792]
[0.69234428 0.19687348]
[0.42783492 0.73416985]
[0.09696069 0.04883936]]
Y:
[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
w1:
[[-0.8113182 1.4845988 0.06532937]
[-2.4427042 0.0992484 0.5912243 ]]
w2:
[[-0.8113182 ]
[ 1.4845988 ]
[ 0.06532937]]

After 0 training step(s), loss on all data is 5.13118
After 500 training step(s), loss on all data is 0.429111
After 1000 training step(s), loss on all data is 0.409789
After 1500 training step(s), loss on all data is 0.399923
After 2000 training step(s), loss on all data is 0.394146
After 2500 training step(s), loss on all data is 0.390597

w1:
[[-0.7000663 0.91363174 0.0895357 ]
[-2.3402493 -0.14641264 0.58823055]]
w2:
[[-0.06024268]
[ 0.91956186]
[-0.06820709]]

Steps for Building a Neural Network

To summarize, the general steps for building a neural network are as follows (a minimal skeleton follows the list):
0. Prepare: define constants and generate the dataset
1. Forward propagation: define the inputs, parameters, and outputs (x, y_, w, a, y, etc.)
2. Backpropagation: define the loss function and the backpropagation method (loss, train_step, etc.)
3. Create a session: train for STEPS rounds
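
As a compact reference, here is a minimal end-to-end skeleton following these four steps, with a tiny made-up dataset (a sketch, not the full example above):

import numpy as np
import tensorflow as tf

# 0. Prepare: constants and a tiny hypothetical dataset
X = np.array([[0.2, 0.7], [0.9, 0.4]], dtype=np.float32)
Y = np.array([[1.0], [0.0]], dtype=np.float32)

# 1. Forward propagation: define inputs, parameters, and outputs
x  = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w  = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y  = tf.matmul(x, w)

# 2. Backpropagation: define the loss function and the training op
loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

# 3. Session: initialize variables and train for STEPS rounds
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train_step, feed_dict={x: X, y_: Y})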
