TensorFlow variable initialization functions
| Initialization function | Behavior | Main parameters |
| --- | --- | --- |
| tf.constant_initializer | Initializes the variable to a given constant | Constant value |
| tf.random_normal_initializer | Initializes the variable to random values drawn from a normal distribution | Mean and standard deviation of the normal distribution |
| tf.truncated_normal_initializer | Initializes the variable to random values drawn from a normal distribution, but any value more than two standard deviations from the mean is discarded and re-drawn | Mean and standard deviation of the normal distribution |
| tf.random_uniform_initializer | Initializes the variable to random values drawn from a uniform distribution | Maximum and minimum values |
| tf.uniform_unit_scaling_initializer | Initializes the variable to random values drawn from a uniform distribution, scaled so that the magnitude of the output does not depend on the number of inputs | factor (scaling coefficient applied to the random values) |
| tf.zeros_initializer | Sets the variable to all zeros | Variable shape |
| tf.ones_initializer | Sets the variable to all ones | Variable shape |
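As a quick illustration of the table above, the following sketch creates variables with a few of these initializers and inspects their initial values. It is written through `tf.compat.v1` (an assumption on my part, so that the TF 1.x-style API shown in this section also runs under TensorFlow 2); the variable names `a`, `b`, `c` are hypothetical.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use graph mode for the TF 1.x-style API
tf.compat.v1.reset_default_graph()

# A constant initializer fills the variable with a fixed value
a = tf.compat.v1.get_variable(
    "a", shape=[2, 3], initializer=tf.compat.v1.constant_initializer(1.0))

# A zeros initializer needs only the variable's shape
b = tf.compat.v1.get_variable(
    "b", shape=[4], initializer=tf.compat.v1.zeros_initializer())

# A truncated-normal initializer re-draws any value more than
# two standard deviations away from the mean
c = tf.compat.v1.get_variable(
    "c", shape=[1000],
    initializer=tf.compat.v1.truncated_normal_initializer(mean=0.0, stddev=1.0))

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    a_val, b_val, c_val = sess.run([a, b, c])
    print(a_val)                        # every entry is 1.0
    print(b_val)                        # every entry is 0.0
    print(np.abs(c_val).max() <= 2.0)   # True: values are truncated at 2 stddev
```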
```python
# The following two definitions are equivalent
v = tf.get_variable("v", shape=[1], initializer=tf.constant_initializer(1.0))
v = tf.Variable(tf.constant(1.0, shape=[1]), name="v")
```
The biggest difference between tf.get_variable and tf.Variable is how the variable name is specified. For tf.Variable, the name is an optional parameter, given in the form name="v". For tf.get_variable, the name is a required parameter: tf.get_variable uses it to create or retrieve the variable. In the sample above, tf.get_variable first tries to create a variable named v; if creation fails (for example, because a variable with the same name already exists), the program raises an error. This prevents variables from being reused unintentionally.
To retrieve a variable that has already been created with tf.get_variable, you need to generate a context manager with the tf.variable_scope function and explicitly specify, in that context manager, that tf.get_variable should directly fetch variables that have already been created.
```python
# Create a variable named v inside the namespace foo
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1], initializer=tf.constant_initializer(1.0))

# Because a variable named v already exists in the namespace foo,
# the following code raises an error
with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])

# When the context manager is generated with reuse set to True,
# tf.get_variable directly fetches the variable already declared
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])
    print(v == v1)  # With reuse=True, v and v1 refer to the same
                    # TensorFlow variable

# The namespace bar has not yet created a variable v,
# so the following code raises an error
with tf.variable_scope("bar", reuse=True):
    v = tf.get_variable("v", [1])
```
The example above shows that the tf.variable_scope function controls the semantics of tf.get_variable. When tf.variable_scope generates a context manager with reuse=True, every tf.get_variable call inside it directly fetches a variable that has already been created; if the variable does not exist, tf.get_variable raises an error. Conversely, when tf.variable_scope creates a context manager with reuse=None or reuse=False, tf.get_variable creates a new variable; if a variable with the same name already exists, tf.get_variable raises an error.
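The reuse semantics described above can be verified end to end with the following sketch. As before, it is written through `tf.compat.v1` (an assumption, so the TF 1.x-style API runs under TensorFlow 2); the scope names `foo` and `bar` follow the earlier example.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode for the TF 1.x-style API
tf.compat.v1.reset_default_graph()

# reuse is None/False by default: get_variable creates a new variable,
# and would fail if one with the same name already existed in this scope
with tf.compat.v1.variable_scope("foo"):
    v = tf.compat.v1.get_variable(
        "v", [1], initializer=tf.compat.v1.constant_initializer(1.0))

# reuse=True: get_variable fetches the existing variable,
# and would fail if it did not exist
with tf.compat.v1.variable_scope("foo", reuse=True):
    v1 = tf.compat.v1.get_variable("v", [1])

print(v is v1)       # True: both names refer to the same variable object
print(v.name)        # foo/v:0 -- the scope name becomes a name prefix

# Fetching a variable that was never created raises a ValueError
try:
    with tf.compat.v1.variable_scope("bar", reuse=True):
        tf.compat.v1.get_variable("v", [1])
except ValueError as err:
    print("error:", err)
```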