TensorFlow Installation and Simple Examples

Installation

TensorFlow depends on NumPy, and both should be installed with the same package manager, otherwise you may run into errors. TensorFlow can be installed in either of the two ways below, but the two must not be mixed: a NumPy installed via pip is not guaranteed to be compatible with a TensorFlow installed via conda, and vice versa.

Option 1: install with pip
pip install numpy
CPU version:
pip install tensorflow
GPU version:
pip install tensorflow-gpu

Option 2: install with conda
conda install numpy
CPU version:
conda install tensorflow
GPU version:
conda install tensorflow-gpu
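
After installing, a quick sanity check (a minimal sketch, assuming the TensorFlow 1.x API used throughout this post) confirms that NumPy and TensorFlow import from the same environment and that a session can run a trivial graph:

import numpy as np
import tensorflow as tf

# Print the installed versions to confirm both packages are importable
print(np.__version__)
print(tf.__version__)

# Run a trivial graph end to end
a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as sess:
    print(sess.run(a + b))  # expected output: 5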

Simple Example

The following example uses TensorFlow for a simple regression task from machine learning: fitting the weight and bias of a straight line (here y = 0.1 * x + 0.3) from data.

import numpy as np
import tensorflow as tf

# Synthetic training data on the line y = 0.1 * x + 0.3
x = np.random.rand(100)
y = 0.1 * x + 0.3

# Weight, initialized uniformly in [-1, 1)
Weights = tf.Variable(tf.random_uniform([1], -1, 1))
# Bias, initialized to zero
biases = tf.Variable(tf.zeros([1]))

y_pre = Weights * x + biases
loss = tf.reduce_mean(tf.square(y - y_pre))

optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training op: one gradient-descent step on the loss
train = optimizer.minimize(loss)
# Op that initializes all variables (the weight and bias)
init = tf.global_variables_initializer()

sess = tf.Session()
# Run the initializer before any training step
sess.run(init)

for step in range(201):
    # Run one training step (gradient descent on the loss)
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))
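
As training proceeds, the printed values should approach the true parameters, weight ≈ 0.1 and bias ≈ 0.3. Once the loop finishes, you can fetch the final fit and release the session explicitly (a small sketch; the names final_w and final_b are introduced here just for illustration):

# Fetch the fitted parameters after training and close the session
final_w, final_b = sess.run([Weights, biases])
print("fitted weight:", final_w, "fitted bias:", final_b)
sess.close()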

Fun Example

The following example uses a small neural network to solve a quadratic-function regression problem and visualizes the fitted curve as training proceeds. It relies on plt.pause() for the animation; to see the dynamic effect, run the script directly from the command line rather than inside Spyder.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function = None):
    # Fully connected layer: Weights has shape [in_size, out_size];
    # biases is a row vector that broadcasts over the batch dimension
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_biases = tf.matmul(inputs, Weights) + biases
    # Apply the activation function if one is given; otherwise stay linear
    if activation_function is None:
        outputs = Wx_plus_biases
    else:
        outputs = activation_function(Wx_plus_biases)
    return outputs

# Training data: 300 points in [-1, 1], y = x^2 - 0.5 plus Gaussian noise
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# Placeholders for inputs and targets, fed in at run time
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# Hidden layer with 10 ReLU units, followed by a linear output layer
l1 = add_layer(xs, 1, 10, activation_function = tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function = None)
# Mean squared error over the batch
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                     reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()

# Scatter plot of the noisy training data; the fitted curve is drawn on top later
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)

with tf.Session() as sess:
    sess.run(init)
    for step in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if step % 50 == 0:
            # Remove the previously drawn fitted line (if any) so curves do not
            # pile up; on the first pass `lines` is undefined, hence the try/except
            try:
                lines[0].remove()
            except Exception:
                pass
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
            prediction_value = sess.run(prediction, feed_dict={xs: x_data})
            # Draw the current fit in red and pause briefly so the figure refreshes
            lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
            plt.pause(0.1)
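
Removing and re-plotting the prediction line works, but an alternative sketch (assuming matplotlib's interactive mode via plt.ion(); the name line is introduced here only for illustration) creates the line once before training and then updates its y-data in place each step, which avoids the try/except on the first iteration:

# Sketch: create the fitted line once, then update it in place during training
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
line, = ax.plot(x_data, np.zeros_like(x_data), 'r-', lw=5)

with tf.Session() as sess:
    sess.run(init)
    for step in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if step % 50 == 0:
            prediction_value = sess.run(prediction, feed_dict={xs: x_data})
            line.set_ydata(prediction_value)  # redraw only the curve's y-values
            plt.pause(0.1)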
