TensorFlow common interfaces

1. tf.get_variable

Official website link: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/get_variable

Function: Gets an existing variable with these parameters, or creates a new one.
Note: this TensorFlow 1.x interface is available in TensorFlow 2.x as tf.compat.v1.get_variable.

# Official interface
tf.get_variable(
    name, shape=None, dtype=None, initializer=None, regularizer=None,
    trainable=None, collections=None, caching_device=None, partitioner=None,
    validate_shape=True, use_resource=None, custom_getter=None, constraint=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.VariableAggregation.NONE
)

This function prepends the current variable scope to the name and performs a reuse check.

Code example:

import tensorflow as tf

def foo():
  with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
    v = tf.get_variable("v", [1])
  return v
v1 = foo()  # Creates v.
v2 = foo()  # Gets the same, existing v.
print(v1 == v2) 
# True
print(v2.name)
# "foo/v:0"

2. tf.variable_scope

Official website link: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/variable_scope

Role: a context manager for defining ops that create variables.
Note: this TensorFlow 1.x interface is available in TensorFlow 2.x as tf.compat.v1.variable_scope.

# Official interface
tf.variable_scope(
    name_or_scope, 
    default_name=None,
    values=None, 
    initializer=None,
    regularizer=None, 
    caching_device=None, 
    partitioner=None, 
    custom_getter=None,
    reuse=None, 
    dtype=None, 
    use_resource=None, 
    constraint=None,
    auxiliary_name_scope=True
)

This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope.
The key argument is the first one, name_or_scope: a string or a VariableScope object, naming the scope to define or open.

Code example:

with tf.variable_scope("foo"):
    with tf.variable_scope("bar"):
        v = tf.get_variable("v", [1])
        print(v.name)  # "foo/bar/v:0"
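
Because name_or_scope also accepts a VariableScope object, a captured scope can be re-entered later to reuse its variables; a minimal sketch:

import tensorflow as tf

with tf.variable_scope("foo") as foo_scope:
    v = tf.get_variable("v", [1])
# Re-enter the captured scope elsewhere and reuse its variables
with tf.variable_scope(foo_scope, reuse=True):
    v1 = tf.get_variable("v", [1])
print(v1 is v)  # True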

3. tf.global_variables

Official link: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/global_variables

Concept: a global variable is one shared across the environment. The Variable() constructor and tf.get_variable() automatically add new variables to the graph collection GraphKeys.GLOBAL_VARIABLES.
Function: Returns the global variables.
Note: this TensorFlow 1.x interface is available in TensorFlow 2.x as tf.compat.v1.global_variables.

# Official interface
tf.global_variables(scope=None)

If scope is None, returns the list of all global Variable objects;
if scope is not None, returns only the Variable objects in the specified scope.

Code example:

# Create two variables v1 and v2
v1 = tf.Variable(tf.constant(0.0, shape=[1], dtype=tf.float32), name='v')
with tf.variable_scope("foo"):
    v2 = tf.get_variable("v2", [1])
# Retrieve them via tf.global_variables
print(tf.global_variables())     
# [<tf.Variable 'v:0' shape=(1,) dtype=float32_ref>, <tf.Variable 'foo/v2:0' shape=(1,) dtype=float32_ref>]
print(tf.global_variables("foo"))  
# [<tf.Variable 'foo/v2:0' shape=(1,) dtype=float32_ref>]
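
The returned list can be passed to other APIs. Continuing the example above, a sketch using tf.variables_initializer (which initializes only the variables it is given) to initialize just the variables under the "foo" scope:

# Initialize only the variables created under the "foo" scope
init_foo = tf.variables_initializer(tf.global_variables("foo"))
with tf.Session() as sess:
    sess.run(init_foo)
    print(sess.run(v2))  # foo/v2 is initialized; v is still not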

4. tf.global_variables_initializer

Function: Initializes the values of all global variables.
Note: TensorFlow 1.x only declares a variable when it is defined; no value is actually assigned at that point. The assignment happens when tf.global_variables_initializer() is run in a session, which gives every global variable its initial value.

Code example:

# Create two variables a and b
# Variables created in TF are not yet initialized: each is a Variable object, not a concrete tensor value
a = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name="a")
b = tf.Variable(tf.constant(1), name="b")

# 1. Reading the variables without running the initializer raises an error
with tf.Session() as sess:
    print("a.name: {}   value: {}".format(a.name, sess.run(a)))
    print("b.name: {}   value: {}".format(b.name, sess.run(b)))
# FailedPreconditionError (value not initialized): Attempting to use uninitialized value a [[{{node _retval_a_0_0}}]]

# 2. After running the initializer, the variables can be read
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("a.name: {}   value: {}".format(a.name, sess.run(a))) # a.name: a:0   value: [0.69489884]
    print("b.name: {}   value: {}".format(b.name, sess.run(b))) # b.name: b:0   value: 1

5. tf.train.Saver

Official link: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/train/Saver#args

Function: Saves and restores variables; commonly used for saving models.

# Official interface
tf.train.Saver(
    var_list=None, reshape=False, sharded=False, max_to_keep=5,
    keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False,
    saver_def=None, builder=None, defer_build=False, allow_empty=False,
    write_version=tf.train.SaverDef.V2, pad_step_number=False,
    save_relative_paths=False, filename=None
)
  • var_list : a list of Variable/SaveableObject, or a dictionary mapping names to SaveableObjects (see the sketch below). If None, defaults to the list of all saveable objects.
  • max_to_keep : the maximum number of recent checkpoints to keep. The default value is 5.
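
For instance, passing var_list restricts the Saver to a subset of variables; a minimal sketch, where w and b are assumed to be variables defined elsewhere:

# Save/restore only w and b, stored in the checkpoint under custom names
saver = tf.train.Saver(var_list={"weight": w, "bias": b})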

(1) save method

Role: saves variables to a checkpoint.

# Official interface
save(
    sess, save_path, global_step=None, latest_filename=None,
    meta_graph_suffix='meta', write_meta_graph=True, write_state=True,
    strip_default_attrs=False, save_debug_info=False
)
  • sess : the session in which to save the variables
  • save_path : string; path prefix for the saved model files
  • global_step : if not None, appended to save_path to distinguish checkpoints by step (e.g. my-model-20)

(2) restore

Function: Restore previously saved variables

# Official interface
restore(
    sess, save_path
)
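
save_path is typically the path returned by an earlier save() call; it can also be discovered with tf.train.latest_checkpoint. A sketch, reusing a saver and session from context and the model directory used below:

# Restore from the most recent checkpoint in a directory
ckpt = tf.train.latest_checkpoint(r'C:\Users\ASUS\Desktop\model')
if ckpt is not None:
    saver.restore(sess, ckpt)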

Train a simple model to fit y = 2*x + 1, demonstrating model saving and loading; code example:

import numpy as np
import tensorflow as tf

# 1. Prepare a set of training data x and y
x = np.random.rand(100).astype(np.float32)
y = x * 2 + 1

# 2. Build the model  y = w*x + b
# Create two variables to fit
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y_pre = w * x + b
# Build the loss function
loss = tf.reduce_mean(tf.square(y_pre - y))
# Create the optimizer -- gradient descent
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Build the training op once, outside the training loop
train_op = optimizer.minimize(loss)
# Instantiate the model-saving object
saver = tf.train.Saver(max_to_keep=10)

# 3. Train the model
with tf.Session() as sess:
    # Initialize global variables
    sess.run(tf.global_variables_initializer())
    for step in range(61):
        sess.run(train_op)
        if step % 20 == 0:
            print("step: {}  w: {}  b: {}".format(step, sess.run(w), sess.run(b)))
            # Save the model
            saver.save(sess=sess, save_path=r'C:\Users\ASUS\Desktop\model\my-model', global_step=step)
'''
Printed output:
step: 0  w: [1.0791051]  b: [2.2803402]
step: 20  w: [1.7042794]  b: [1.1717249]
step: 40  w: [1.92974]  b: [1.0408]
step: 60  w: [1.983307]  b: [1.0096936]
'''

After training, the save directory contains a checkpoint file plus per-step .data, .index, and .meta files.
Load a specific checkpoint and print the restored parameters:

# Load the model and print the restored variable parameters
with tf.Session() as sess:
    saver.restore(sess=sess, save_path=r'C:\Users\ASUS\Desktop\model\my-model-20')
    print("w: {}  b: {}".format(sess.run(w), sess.run(b)))
# w: [1.7042794]  b: [1.1717249]

6. tf.train.exponential_decay

Official website link: https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/train/exponential_decay

Role: apply exponential decay to the learning rate

# Official interface
tf.train.exponential_decay(
    learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None
)
'''
learning_rate:  initial learning rate
global_step:    current training step
decay_steps:    decay period; when global_step == decay_steps, the rate has decayed to learning_rate * decay_rate
decay_rate:     decay factor, usually between 0 and 1
staircase:      if True, global_step / decay_steps is truncated to an integer, so the rate decays once every decay_steps iterations and the curve is a staircase
name:           optional name for the operation
'''
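
The schedule computes decayed_learning_rate = learning_rate * decay_rate ** (global_step / decay_steps). A pure-Python sketch of both modes (not the TensorFlow op itself):

# Pure-Python equivalent of the exponential decay schedule
def decayed_lr(lr, global_step, decay_steps, decay_rate, staircase=False):
    p = global_step / decay_steps
    if staircase:
        p = global_step // decay_steps  # integer division -> step-shaped curve
    return lr * decay_rate ** p

print(decayed_lr(0.5, 15, 10, 0.9))                  # ~0.427 (smooth decay)
print(decayed_lr(0.5, 15, 10, 0.9, staircase=True))  # 0.45  (staircase decay)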

Code example:

import numpy as np
import tensorflow as tf

# 1. Prepare a set of training data for y = w*x + b
x = np.random.rand(100).astype(np.float32)
y = x * 2 + 1

# 2. Build the model  y = w*x + b
# Create two variables to fit
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y_pre = w * x + b
# Build the loss function
loss = tf.reduce_mean(tf.square(y_pre - y))
# Create the optimizer -- gradient descent with a decaying learning rate
global_steps = tf.train.create_global_step()
starter_learning_rate = 0.5
decay_steps = 10  # decay period (every 10 iterations the rate decays by a factor of 0.9)
decay_rate = 0.9
learning_rate = tf.train.exponential_decay(learning_rate=starter_learning_rate, global_step=global_steps,
                                           decay_steps=decay_steps, decay_rate=decay_rate,
                                           staircase=False, name='learning_rate')
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# Build the training op once; minimize() increments global_steps on every run
train_op = optimizer.minimize(loss, global_step=global_steps)
# Instantiate the model-saving object
saver = tf.train.Saver(max_to_keep=10)

# 3. Train the model
with tf.Session() as sess:
    # Initialize global variables
    sess.run(tf.global_variables_initializer())
    # Iterate 60 times
    for step in range(1, 61):
        sess.run(train_op)
        if step % 20 == 0:
            print("step: {}  global_step: {}  w: {}  b: {}".format(step, sess.run(global_steps), sess.run(w), sess.run(b)))
            # Save the model
            saver.save(sess=sess, save_path=r'C:\Users\ASUS\Desktop\model\my-model', global_step=global_steps)

With staircase=True the learning rate drops in discrete steps, once every decay_steps iterations; with staircase=False it decays smoothly along the exponential curve.

7. tf.placeholder

Function: Inserts a placeholder for a tensor that will be fed with data when the graph is executed.

  • Key point: evaluating a placeholder directly raises an error. Its value must be fed via the feed_dict argument to Session.run(), Tensor.eval(), or Operation.run().

# Official interface
tf.placeholder(
    dtype, shape=None, name=None
)
'''
dtype: type of the elements to be fed into the tensor
shape: shape of the data to be fed; if shape=None, data of any shape can be fed
name:  optional name for the operation
'''

Code example:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(3, 3))
y = tf.matmul(x, x)  # matrix multiplication
with tf.Session() as sess:
    # print(sess.run(y))  # ERROR: will fail because x was not fed.
    rand_array = np.random.rand(3, 3)
    print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
'''
[[0.86674637 0.97794515 1.1810058 ]
 [0.8465452  0.99383634 1.4519857 ]
 [0.1171783  0.14964394 0.45982745]]
'''
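
As noted above, shape=None lets the same placeholder accept data of any shape; a small sketch:

# With shape=None, the placeholder accepts data of any shape
p = tf.placeholder(tf.float32, shape=None)
double = p * 2
with tf.Session() as sess:
    print(sess.run(double, feed_dict={p: 3.0}))           # scalar -> 6.0
    print(sess.run(double, feed_dict={p: [[1.0, 2.0]]}))  # 1x2 -> [[2. 4.]]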

Origin blog.csdn.net/yewumeng123/article/details/131342886