If a variable is defined with a name that has already been used for another variable, then TensorFlow throws an exception. This makes the tf.get_variable() function a convenient and safe alternative to the tf.Variable() function for creating variables. The tf.get_variable() function returns the variable that has been defined with the given name; if no variable with that name exists, it creates one with the specified initializer and shape.
Consider the following example:
w = tf.get_variable(name='w', shape=[1], dtype=tf.float32, initializer=[.3])
b = tf.get_variable(name='b', shape=[1], dtype=tf.float32, initializer=[-.3])
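As a minimal sketch (assuming TensorFlow 1.x; tf.constant_initializer() is used below as an equivalent alternative to the list form above), the variables can be initialized and evaluated in a session, and requesting an already-defined name raises an error unless reuse is enabled:

import tensorflow as tf

w = tf.get_variable(name='w', shape=[1], dtype=tf.float32,
                    initializer=tf.constant_initializer(0.3))
b = tf.get_variable(name='b', shape=[1], dtype=tf.float32,
                    initializer=tf.constant_initializer(-0.3))

# Requesting 'w' again without setting the reuse flag raises a ValueError:
# tf.get_variable(name='w', shape=[1], dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([w, b]))    # [array([0.3], dtype=float32), array([-0.3], dtype=float32)]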
The initializer can either be a list of values or another tensor. An initializer can also be one of the built-in initializers, some of which are listed below (a short usage sketch follows the list):
tf.ones_initializer
tf.constant_initializer
tf.zeros_initializer
tf.truncated_normal_initializer
tf.random_normal_initializer
tf.random_uniform_initializer
tf.uniform_unit_scaling_initializer
tf.orthogonal_initializer
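As a brief sketch (the variable names and shapes here are illustrative, assuming TensorFlow 1.x), a built-in initializer is passed to tf.get_variable() together with the desired shape:

import tensorflow as tf

# Weights drawn from a truncated normal distribution, biases initialized to zero.
weights = tf.get_variable(name='weights', shape=[10, 5], dtype=tf.float32,
                          initializer=tf.truncated_normal_initializer(stddev=0.1))
biases = tf.get_variable(name='biases', shape=[5], dtype=tf.float32,
                         initializer=tf.zeros_initializer())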
The tf.get_variable() function returns only the global variables when the code is run across multiple machines in distributed TensorFlow. The local variables can be retrieved by using the tf.get_local_variable() function.
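A minimal sketch, assuming the TensorFlow 1.x API in which tf.get_local_variable() adds the variable to the LOCAL_VARIABLES collection rather than GLOBAL_VARIABLES:

import tensorflow as tf

counter = tf.get_local_variable(name='counter', shape=[],
                                initializer=tf.zeros_initializer())

print(tf.global_variables())   # does not include 'counter'
print(tf.local_variables())    # includes 'counter'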
Sharing or reusing variables: Getting variables that have already been defined promotes reuse. However, an exception will be thrown unless the reuse flag is set, either by calling tf.get_variable_scope().reuse_variables() or by opening the scope with tf.variable_scope(reuse=True).
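A minimal reuse sketch, assuming TensorFlow 1.x variable scopes (the scope name 'model' is illustrative):

import tensorflow as tf

with tf.variable_scope('model') as scope:
    w = tf.get_variable('w', shape=[1], initializer=tf.zeros_initializer())
    scope.reuse_variables()           # equivalent to tf.get_variable_scope().reuse_variables()
    w_again = tf.get_variable('w')    # returns the existing variable 'model/w'

# Alternatively, set the reuse flag when re-entering the scope:
with tf.variable_scope('model', reuse=True):
    w_shared = tf.get_variable('w')

print(w is w_again, w is w_shared)    # True True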
Now that we have learned how to define tensors, constants, operations, placeholders, and variables, let's learn about the next level of abstraction in TensorFlow that combines these basic elements to form a basic unit of computation: the computation graph.