TensorBoard visualization: a detailed example

# -*- coding: utf-8 -*-
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Define global variables: the total number of training steps
    # and the running sum of the output value
    with tf.name_scope('variables'):
        global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')
        total_output = tf.Variable(0.0, dtype=tf.float32, trainable=False, name='total_output')

    # Define the core transformation of the computational graph
    with tf.name_scope('transformation'):
        # Input placeholder; shape [None] accepts a vector of any length
        with tf.name_scope('input'):
            a = tf.placeholder(tf.float32, shape=[None], name='input_placeholder_a')
        # Intermediate ops: multiply the elements of a to get b,
        # and sum the elements of a to get c
        with tf.name_scope('intermediate_layer'):
            b = tf.reduce_prod(a, name='product_b')
            c = tf.reduce_sum(a, name='sum_c')
        # Final output: b + c
        with tf.name_scope('output'):
            output = tf.add(b, c, name='output')

    # Update the global variables: accumulate the output value
    # and increment the training-step counter
    with tf.name_scope('update'):
        update_total = total_output.assign_add(output)
        increment_step = global_step.assign_add(1)

    # Scalar summaries for three tensors, shown as line charts in TensorBoard
    with tf.name_scope('summaries'):
        avg = tf.div(update_total, tf.cast(increment_step, tf.float32), name='average')
        tf.summary.scalar('output_summary', output)
        tf.summary.scalar('total_summary', update_total)
        tf.summary.scalar('average_summary', avg)

    # Initialize all variables and merge the three summaries into one op
    with tf.name_scope('global_ops'):
        init = tf.global_variables_initializer()
        merged_summaries = tf.summary.merge_all()

# Open a session for the actual computation, bound explicitly to the graph
# built above. The FileWriter records the graph to './improved_graph'; the
# directory is created automatically if it does not exist, and the data is
# saved in a file whose name starts with events.out
sess = tf.Session(graph=graph)
sess.run(init)
writer = tf.summary.FileWriter('./improved_graph', graph)

# Helper that runs the graph once. The placeholder must be fed through
# feed_dict: the key is the placeholder tensor, the value is data of the
# matching type. After the three tensors are evaluated, add_summary writes
# the merged summary to the events file, with the step counter as the
# x-axis of the line charts.
def run_graph(input_tensor):
    feed_dict = {a: input_tensor}
    _, step, summary = sess.run([output, increment_step, merged_summaries],
                                feed_dict=feed_dict)
    writer.add_summary(summary, global_step=step)

# Start training
run_graph([1, 2, 3])
run_graph([11, 4])
run_graph([4, 1])

# Flush the summary data to disk, then close the writer and the session
writer.flush()
writer.close()
sess.close()

The above is the complete code, which can be run directly.
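As a sanity check, the arithmetic this graph performs can be reproduced in plain Python, with no TensorFlow needed. The sketch below (an illustration only, not part of the tutorial code; `run_step` is a hypothetical helper) mimics one `run_graph` call: the output is product + sum of the input, and the state tracks the running total, step count, and average that the three summaries plot.

```python
from functools import reduce
import operator

def run_step(values, state):
    """Mimic one run_graph call: output = prod(values) + sum(values);
    state accumulates the running total, step count, and average."""
    output = reduce(operator.mul, values, 1) + sum(values)
    state['total'] += output
    state['step'] += 1
    state['average'] = state['total'] / state['step']
    return output, state

state = {'total': 0.0, 'step': 0}
for batch in ([1, 2, 3], [11, 4], [4, 1]):
    out, state = run_step(batch, state)
    print(batch, '->', out)
# outputs per step: 12 (6+6), 59 (44+15), 9 (4+5)
```

These are exactly the values the `output_summary` chart should show for the three training steps, while `total_summary` climbs to 80 and `average_summary` ends near 26.67.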

However, while learning I ran into many functions that have been renamed or deprecated across versions (for example, `tf.initialize_all_variables()` was replaced by `tf.global_variables_initializer()`). I recommend checking the official TensorFlow API documentation:

https://tensorflow.google.cn/versions/r1.7/api_docs/java/reference/org/tensorflow/package-summary

There is also a Chinese TensorFlow reference, though it is somewhat slow to update:

https://www.w3cschool.cn/tensorflow_python/tensorflow_python-jsl62hyf.html

=================

On Ubuntu, change to the directory containing your Python file; the improved_graph directory will have been generated there automatically. Then run:

tensorboard --logdir='./improved_graph' --port=6008

Then, in a browser on the remote computer, open the Ubuntu machine's IP address followed by :6008.

You will then see the TensorBoard interface.

The first view is the line charts of the tensors, generated from the scalar summaries written in the code.


The second view is the computational graph, which is quite interesting: it is laid out according to the name scopes and tensor names defined in your code. Double-click a node to expand or collapse it; after zooming in you can scroll the mouse wheel to keep zooming, and the details inside can also be opened by double-clicking.
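The grouping TensorBoard draws comes from the op names: each `tf.name_scope` prefixes the ops defined inside it, so `product_b` above is actually named `transformation/intermediate_layer/product_b`. A toy scope tracker in plain Python (just to illustrate the naming scheme; `name_scope` and `full_name` here are hypothetical stand-ins, not TensorFlow APIs) behaves the same way:

```python
from contextlib import contextmanager

_scopes = []

@contextmanager
def name_scope(name):
    # Push a scope name, as tf.name_scope does while the graph is built
    _scopes.append(name)
    try:
        yield
    finally:
        _scopes.pop()

def full_name(op_name):
    # An op's display name is its scope path joined with '/'
    return '/'.join(_scopes + [op_name])

with name_scope('transformation'):
    with name_scope('intermediate_layer'):
        print(full_name('product_b'))  # transformation/intermediate_layer/product_b
```

TensorBoard collapses everything sharing a prefix into one expandable box, which is why choosing tidy scope names makes the graph view readable.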
