Visualizing a Neural Network with TensorBoard

This post shows how to visualize a neural network's computation graph with TensorBoard. First, the code that builds the graph and writes it to an event file:

# encoding:utf-8
import tensorflow as tf


# Add a fully-connected layer; each op is grouped under a name scope
# so it appears as a collapsible node in the TensorBoard graph.
def add_layer(inputs, in_size, out_size, activation_function=None):
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            W = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        with tf.name_scope('bias'):
            b = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.matmul(inputs, W) + b
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs


# Input placeholders
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# Hidden layer and output layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

# Loss: mean squared error
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))

# Minimize the loss with gradient descent
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Initialize all variables and write the graph to the logs/ directory
init = tf.global_variables_initializer()
sess = tf.Session()
writer = tf.summary.FileWriter("logs/", sess.graph)
sess.run(init)
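The code above only builds the graph and writes it out; it never feeds any data. For completeness, here is a minimal training-loop sketch. The synthetic x_data/y_data arrays and the 1000-step loop are my own illustration and are not part of the original post:

# Illustrative only: synthetic quadratic data, not from the original post.
import numpy as np

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]   # 300 samples, 1 feature
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise          # noisy quadratic target

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))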


Note: this procedure may only apply to Windows 10.

Running this code generates an event file in the logs folder, e.g. events.out.tfevents.1530933559.CC.
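Before launching TensorBoard you can confirm the event file was actually written. A quick check from a Windows command prompt (my own suggestion, not from the original post):

dir logs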

Next, open a command prompt and cd to the directory one level above logs. For example, my logs folder is at

D:\Python27\Lib\site-packages\django\bin\pylearn\deeplearning\tensorflow\logs

so I only need to cd to

D:\Python27\Lib\site-packages\django\bin\pylearn\deeplearning\tensorflow

Then run

tensorboard --logdir=logs
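Alternatively, you can skip the cd step by pointing --logdir at the absolute path of the logs folder (reusing the example path above):

tensorboard --logdir=D:\Python27\Lib\site-packages\django\bin\pylearn\deeplearning\tensorflow\logs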

Finally, open the address printed in the terminal in a browser (mine was http://cc:6006/#graphs); the graph you built appears under the Graphs tab.

(Screenshot: the network graph rendered in TensorBoard's Graphs tab)
