TensorFlow Variable Initialization

The (abbreviated) call signature of tf.get_variable is:
tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None)
where initializer specifies how the variable's value is generated. The available initializers include:
initializer = tf.constant_initializer(const): constant initializer
initializer = tf.random_normal_initializer(): normal-distribution initializer
initializer = tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32): truncated-normal initializer
initializer = tf.random_uniform_initializer(minval=0, maxval=None, seed=None, dtype=tf.float32): uniform-distribution initializer
initializer = tf.zeros_initializer(): all-zeros initializer
initializer = tf.ones_initializer(): all-ones initializer
initializer = tf.uniform_unit_scaling_initializer(factor=1.0, seed=None, dtype=tf.float32): uniform initializer with no explicit minval/maxval; the range is scaled from the size of the input
initializer = tf.variance_scaling_initializer(scale=1.0, mode="fan_in", distribution="normal", seed=None, dtype=tf.float32): scales the variance according to mode (fan_in, fan_out, or fan_avg); distribution selects a truncated normal or a uniform distribution
initializer = tf.orthogonal_initializer(): orthogonal-matrix initializer
initializer = tf.glorot_uniform_initializer(): Glorot (Xavier) uniform initializer, whose range is determined by the number of input and output units
initializer = tf.glorot_normal_initializer(): Glorot (Xavier) truncated-normal initializer, whose standard deviation is determined by the number of input and output units
Note: the initializer passed to tf.get_variable does not take a shape argument; the shape is already specified in the call to tf.get_variable itself, as the sketch below shows.
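A minimal sketch (TensorFlow 1.x graph mode; the variable names and shapes here are made-up examples) of passing these initializers to tf.get_variable:

import tensorflow as tf

with tf.variable_scope("demo"):
    # shape goes to get_variable, not to the initializer
    w = tf.get_variable("w", shape=[784, 256],
                        initializer=tf.glorot_uniform_initializer())
    b = tf.get_variable("b", shape=[256],
                        initializer=tf.zeros_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # assigns the initial values
    print(sess.run(b))                           # all zeros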

The basic value-creation ops used for initialization are:
tf.ones(shape, dtype=tf.float32, name=None): tensor of all ones
tf.zeros(shape, dtype=tf.float32, name=None): tensor of all zeros
tf.ones_like(tensor, dtype=None, name=None): tensor of ones with the same shape as tensor
tf.zeros_like(tensor, dtype=None, name=None): tensor of zeros with the same shape as tensor
tf.fill(dims, value, name=None): tensor of shape dims filled with value
tf.constant(value, dtype=None, shape=None, name=None): constant tensor
tf.linspace(start, stop, num, name=None): num evenly spaced values from start to stop
tf.range(start, limit=None, delta=1, name=None): sequence from start up to (but not including) limit, with step delta
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): samples from a normal distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): samples from a truncated normal distribution (values more than two standard deviations from the mean are re-drawn)
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None): samples from a uniform distribution over [minval, maxval)
tf.random_shuffle(value, seed=None, name=None): randomly shuffles value along its first dimension
tf.set_random_seed(seed): sets the graph-level seed for random number generation
For example (note that the shape must be a list of integers, not floats):
tf.set_random_seed(123456789)
varA = tf.random_normal([1])
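
A short sketch (TensorFlow 1.x; the values below are arbitrary examples) that evaluates a few of these ops in a session:

import tensorflow as tf

tf.set_random_seed(123456789)                   # graph-level seed for reproducibility
a = tf.zeros([2, 3])                            # 2x3 tensor of zeros
b = tf.fill([2, 3], 7.0)                        # 2x3 tensor filled with 7.0
c = tf.linspace(0.0, 1.0, 5)                    # [0.0, 0.25, 0.5, 0.75, 1.0]
d = tf.range(0, 10, 2)                          # [0, 2, 4, 6, 8]
e = tf.random_uniform([2], minval=0, maxval=1, seed=42)  # op-level seed

with tf.Session() as sess:
    print(sess.run([a, b, c, d, e]))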
