TensorBoard Usage Summary


Introduction to TensorBoard

TensorBoard is TensorFlow's visualization tool. Because the values you want to observe can be hooked seamlessly into a TensorFlow graph, it is very convenient to use.

For details, see the official documentation.

TensorBoard is generally used with two display forms, line charts (scalars) and histograms; it also provides image and audio display.
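
For example, the helper function below attaches several of these summaries to a single tensor: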

import tensorflow as tf

def variable_summaries(var):
  """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
  with tf.name_scope('summaries'):
    mean = tf.reduce_mean(var)
    tf.summary.scalar('mean', mean)
    with tf.name_scope('stddev'):
      stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
    # Shown as line charts (scalars)
    tf.summary.scalar('stddev', stddev)
    tf.summary.scalar('max', tf.reduce_max(var))
    tf.summary.scalar('min', tf.reduce_min(var))
    # Shown as a histogram
    tf.summary.histogram('histogram', var)
    # Image tensors can be displayed as well; this shows the first three images
    # (x_image is assumed to be the input batch, defined elsewhere as [batch, height, width, channels])
    tf.summary.image('input', x_image, 3)
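
As a minimal usage sketch (the scope name and the weight variable W below are illustrative assumptions, not part of the original code), the helper can be attached to a weight tensor like this:

with tf.name_scope('layer1'):
    # Hypothetical weight matrix for a 784 -> 10 fully connected layer
    W = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1), name='weights')
    variable_summaries(W)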

import os

session = tf.Session()

# Create the directory where the log files will be written
tensorboard_dir = 'tensorboard/mnist3'
if not os.path.exists(tensorboard_dir):
    os.makedirs(tensorboard_dir)

# Merge all summaries so a single run() call collects them, instead of running each one separately
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter(tensorboard_dir)
writer.add_graph(session.graph)

session.run(tf.global_variables_initializer())

train_batch_size = 100

for i in range(2001):
    x_batch, y_batch = data.train.next_batch(train_batch_size)

    feed_dict = {x: x_batch, y: y_batch}

    if i % 500 == 0:
        train_accuracy = session.run(accuracy, feed_dict=feed_dict)
        print("迭代轮次: {0:>6}, 训练准确率: {1:>6.4%}".format(i, train_accuracy))

    session.run(optimizer, feed_dict=feed_dict)
    if i % 5 == 0:
        # Evaluate all merged summaries and write them to the log
        s = session.run(merged_summary, feed_dict=feed_dict)
        writer.add_summary(s, i)
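
Once training finishes, it is also worth flushing pending events to disk and releasing resources (a small addition not in the original snippet):

writer.flush()
writer.close()
session.close()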

To view the results, open a command prompt (cmd) and run:

tensorboard --logdir=path/to/log-directory

The command prints a URL; copy and paste it into your browser's address bar to see the dashboards.
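
For the log directory used in the code above, the command would be, for example:

tensorboard --logdir=tensorboard/mnist3

By default TensorBoard serves at http://localhost:6006; a different port can be chosen with the --port flag.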
