TensorFlow Learning Notes (Mofan Python)
This article walks through a simple TensorFlow example; the accompanying video is on Bilibili: Tensorflow 搭建自己的神经网络 (莫烦 Python 教程) (Build Your Own Neural Network with TensorFlow, a Mofan Python tutorial).
The environment is Win10 + TensorFlow 1.13 + PyCharm, and the goal is to train a small network to fit the function y = x*x.
def add_layer(inputs, in_size, out_size, activation_function=None):
    # weights: an in_size x out_size matrix, initialized from a normal distribution
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # biases: initialized to a small positive constant (0.1) rather than zero
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
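As a quick sanity check (a minimal sketch, not part of the tutorial; x_in is a hypothetical placeholder, and the import matches the full listing below), the layer maps a [None, 1] input to a [None, 10] output:

import tensorflow as tf

# hypothetical shape check for add_layer
x_in = tf.placeholder(tf.float32, [None, 1])
h = add_layer(x_in, 1, 10, activation_function=tf.nn.relu)
print(h.shape)  # (?, 10): the batch dimension stays flexible, the width is out_size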
# np.newaxis turns the 1-D array into a column vector (matrix form)
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
# add noise from a normal distribution with mean 0 and standard deviation 0.05
noise = np.random.normal(0, 0.05, x_data.shape)
# generate the corresponding y values
y_data = np.square(x_data) + noise
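To see what np.newaxis does here, a standalone sketch:

import numpy as np

a = np.linspace(-1, 1, 300)
print(a.shape)                 # (300,)  -- a 1-D array
print(a[:, np.newaxis].shape)  # (300, 1) -- a column vector, matching the [None, 1] placeholder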
# placeholders for the inputs and targets; None leaves the batch size flexible
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# hidden layer: 1 input feature -> 10 units with ReLU activation
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# output layer: 10 hidden units -> 1 linear output
prediction = add_layer(l1, 10, 1, activation_function=None)
# loss: mean over the batch of the per-example sum of squared errors
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
tf.reduce_mean computes the mean of a tensor along the given axis; it is mainly used to reduce dimensions or to average over a tensor (e.g. an image). tf.reduce_sum sums all elements along the given axis. Here reduction_indices=[1] (the older name for axis=1) sums the squared error within each example, and tf.reduce_mean then averages over the batch.
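A small standalone example of the two ops (values chosen purely for illustration):

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
row_sums = tf.reduce_sum(t, axis=1)  # [3.0, 7.0]: sum within each row
mean_all = tf.reduce_mean(t)         # 2.5: mean over all elements
with tf.Session() as sess:
    print(sess.run([row_sums, mean_all]))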
# gradient descent with a learning rate of 0.1
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init)
There are two ways to write the session. One is used above: sess = tf.Session(). The other is like opening a file, using a context manager: with tf.Session() as sess:. A Session is TensorFlow's execution environment for running the graph and producing output; calling sess.run() evaluates the requested ops and returns the results you want.
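The context-manager form (a minimal sketch using the names defined above) closes the session automatically:

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
# the session is closed automatically when the with-block exits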
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)  # scatter plot of the noisy training data
plt.ion()   # interactive mode: the figure keeps updating after plt.show()
plt.show()
This visualizes the fitting process. The function is plt.ion() (not plt.icon()): it turns on matplotlib's interactive mode, so the script keeps running and every frame is drawn, rather than blocking at plt.show() and showing only the figure built before it.
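A minimal standalone illustration of interactive mode (independent of the tutorial code):

import numpy as np
import matplotlib.pyplot as plt

plt.ion()  # turn on interactive mode so drawing does not block
x = np.linspace(0, 2 * np.pi, 100)
for phase in np.linspace(0, np.pi, 5):
    plt.cla()                      # clear the axes before redrawing
    plt.plot(x, np.sin(x + phase))
    plt.pause(0.2)                 # give the GUI time to redraw
plt.ioff()
plt.show()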
for i in range(1000):
    # one gradient-descent step on the full dataset
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        # remove the previous fitted line before drawing a new one
        # (the first pass raises NameError because lines does not exist yet)
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        # compute the network's predictions
        prediction_value = sess.run(prediction, feed_dict={xs: x_data, ys: y_data})
        # draw the predictions as a thick red line
        lines = ax.plot(x_data, prediction_value, c='r', lw=5)
        plt.pause(0.1)
The printed loss values show that, as the optimizer (train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)) runs, the loss steadily decreases: the network is learning the fit.
In the figure, the blue dots are the scatter plot of the (x, y) training data from ax.scatter(x_data, y_data), while the red line shows the predictions, drawn by lines = ax.plot(x_data, prediction_value, c='r', lw=5). The complete script follows.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

"""define the layer"""
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

"""prepare the data"""
# np.newaxis turns the 1-D array into a column vector (matrix form)
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
# add noise from a normal distribution with mean 0 and standard deviation 0.05
noise = np.random.normal(0, 0.05, x_data.shape)
# generate the corresponding y values
y_data = np.square(x_data) + noise

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# hidden layer: 1 input feature -> 10 units with ReLU activation
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# output layer: 10 hidden units -> 1 linear output
prediction = add_layer(l1, 10, 1, activation_function=None)
# loss: mean over the batch of the per-example sum of squared errors
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))

"""build the tensorflow network"""
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init)

"""visualization"""
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show()

"""training"""
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        # remove the previous fitted line before drawing a new one
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        # compute the network's predictions
        prediction_value = sess.run(prediction, feed_dict={xs: x_data, ys: y_data})
        # draw the predictions as a thick red line
        lines = ax.plot(x_data, prediction_value, c='r', lw=5)
        plt.pause(0.1)
plt.pause(100)  # keep the final figure open