Visualizing TensorFlow Graphs with TensorBoard

Author: chen_h
WeChat & QQ: 862251340
WeChat official account: coderpai


1. Introduction

TensorBoard does not automatically display a program's computation graph or model metrics. To see them, we need to add a little extra code so that TensorFlow writes graph-related events to a dedicated log folder; then, from the terminal, we point the tensorboard command at that folder.

The tabs along the top of TensorBoard, such as SCALARS, IMAGES, GRAPHS, DISTRIBUTIONS, and HISTOGRAMS, indicate which types of data have been collected from the computation graph.

2. Launching TensorBoard

Let's demonstrate how TensorBoard is used with a concrete example: softmax regression for MNIST handwritten-digit classification. The example code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-


from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./MNIST_data", one_hot=True)

import tensorflow as tf


learning_rate = 0.01     # gradient descent step size
training_iteration = 30  # number of passes over the training set
batch_size = 100         # examples per mini-batch
display_step = 2         # print the cost every N iterations

x = tf.placeholder("float", [None, 784])  # flattened 28x28 input images
y = tf.placeholder("float", [None, 10])   # one-hot labels for digits 0-9

W = tf.Variable(tf.zeros([784, 10]))      # model weights
b = tf.Variable(tf.zeros([10]))           # model biases

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b) 

# w_h = tf.summary.histogram("weights", W)
# b_h = tf.summary.histogram("biases", b)

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
    # tf.summary.scalar("cost_function", cost_function)

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

init = tf.global_variables_initializer()

# merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)

    # summary_writer = tf.summary.FileWriter('./work/logs', graph=sess.graph)

    for iteration in range(training_iteration):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch

            # summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            # summary_writer.add_summary(summary_str, iteration*total_batch + i)

        if iteration % display_step == 0:
            print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Tuning completed!")

    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

To collect data about specific nodes in the TensorFlow computation graph, we attach summary operations to them. For example, to visualize the distributions of the weights and biases, use the tf.summary.histogram operation, as follows:

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b)

# Add summary ops to collect data
w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)

In TensorBoard, these histograms are rendered as charts under the DISTRIBUTIONS tab.
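Histogram summaries are not limited to variables; any tensor can be logged. As a small sketch (the tag name "activations" is our own choice, not from the original code), we could also record the distribution of the softmax outputs:

# Hypothetical extra summary: distribution of the model's softmax outputs
a_h = tf.summary.histogram("activations", model)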

To visualize the loss function, we can use the tf.summary.scalar operation. The code is as follows:

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
    tf.summary.scalar("cost_function", cost_function)

In TensorBoard, this loss appears as a curve under the SCALARS tab.
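One caveat worth noting: tf.log(model) yields -inf whenever a softmax output underflows to zero, which would also poison the logged scalar. A common hardening, not part of the original code, is to clip the probabilities before taking the log:

# Hypothetical numerically-stable variant of the cost above
cost_function = -tf.reduce_sum(y * tf.log(tf.clip_by_value(model, 1e-10, 1.0)))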

To view the computation graph itself, just click the GRAPHS tab in TensorBoard. If your graph is very large, with thousands of nodes, visualizing it in a single view is difficult. To make the visualization manageable, we can use tf.name_scope with descriptive names to group logically related operations into modules, such as the Wx_b module and the cost_function module:

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

Or:

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
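Scopes can also be nested, and TensorBoard renders the nesting as collapsible groups inside groups. A minimal sketch (the layer size and scope names here are hypothetical, not from the article):

# Hypothetical nested scopes: "hidden" appears as a sub-module of "network"
with tf.name_scope("network"):
    with tf.name_scope("hidden"):
        W1 = tf.Variable(tf.zeros([784, 32]))
        b1 = tf.Variable(tf.zeros([32]))
        h1 = tf.nn.relu(tf.matmul(x, W1) + b1)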

By default, the graph view shows only the top level of the node hierarchy. Clicking the plus sign on a node expands that module. For example, expanding the Wx_b group reveals the operations inside it.

Now we merge all of the summary operations into a single op, using tf.summary.merge_all():

merged_summary_op = tf.summary.merge_all()
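tf.summary.merge_all() picks up every summary registered in the graph. If you only want a subset, tf.summary.merge takes an explicit list; a sketch reusing the histogram handles defined earlier:

# Merge only the two histogram summaries, ignoring other summaries in the graph
merged_summary_op = tf.summary.merge([w_h, b_h])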

Then we create a FileWriter that stores the event files in a log folder:

summary_writer = tf.summary.FileWriter('./work/logs', graph=sess.graph)
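The graph_def argument seen in older tutorials is deprecated in TensorFlow 1.x; graph=sess.graph is the preferred form. It is also common to give every run its own subdirectory so that TensorBoard can overlay and compare runs; a minimal sketch (the directory naming is our own choice, not from the original article):

import time

# Hypothetical layout: one subdirectory per run, e.g. ./work/logs/run-1700000000
run_dir = './work/logs/run-{}'.format(int(time.time()))
summary_writer = tf.summary.FileWriter(run_dir, graph=sess.graph)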

Inside the training loop, we run the merged op and write the result after each batch:

summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
summary_writer.add_summary(summary_str, iteration*total_batch + i)
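Running the merged summary op on every batch adds an extra forward pass per batch. A common variation (ours, not in the source) is to log only every N batches:

# Hypothetical: write summaries every 50 batches instead of every batch
if i % 50 == 0:
    summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
    summary_writer.add_summary(summary_str, iteration * total_batch + i)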

After that, we launch TensorBoard from the terminal:

tensorboard --logdir=./work/logs
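TensorBoard serves on port 6006 by default, so once it starts you can open http://localhost:6006 in a browser. The port can be changed with the --port flag:

tensorboard --logdir=./work/logs --port=6007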

In short, TensorBoard uses TensorFlow's event files to visualize the computation graph and the metrics of a model. The complete code is shown below:

#!/usr/bin/env python
# -*- coding: utf-8 -*-


from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./MNIST_data", one_hot=True)

import tensorflow as tf


learning_rate = 0.01     # gradient descent step size
training_iteration = 30  # number of passes over the training set
batch_size = 100         # examples per mini-batch
display_step = 2         # print the cost every N iterations

x = tf.placeholder("float", [None, 784])  # flattened 28x28 input images
y = tf.placeholder("float", [None, 10])   # one-hot labels for digits 0-9

W = tf.Variable(tf.zeros([784, 10]))      # model weights
b = tf.Variable(tf.zeros([10]))           # model biases

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b) 

w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
    tf.summary.scalar("cost_function", cost_function)

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

init = tf.global_variables_initializer()

merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)

    summary_writer = tf.summary.FileWriter('./work/logs', graph=sess.graph)

    for iteration in range(training_iteration):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch

            summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration*total_batch + i)

        if iteration % display_step == 0:
            print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Tuning completed!")

    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

Source: altoros
