Duplicate variable names

Background: in the code for an Adversarial Autoencoder (AAE), the encoder's weights receive gradients from two sources during adversarial training, the reconstruction loss and the discriminator, so several optimizers have to be built over overlapping variable lists. An error appears when the optimizers for the different loss functions are initialized.

Problem code:

# initialize optimizers
self.loss_encoder_decoder, self.opt_encoder_decoder = self.optimizer_encoder_decoder()
self.loss_discriminator, self.opt_discriminator = self.optimizer_discriminator()
self.loss_encoder, self.opt_encoder = self.optimizer_encoder()

TensorFlow error output:
ValueError: Variable AAE/Encoder/layer_0/W/Adam/ already exists, disallowed. Did you mean to set reuse=True in VarScope?

The individual functions are listed below; the error is raised by the third line of the code above, the call to self.optimizer_encoder().

def optimizer_encoder_decoder(self):
    # Reconstruction path: updates both the encoder and the decoder.
    # Concatenate rather than extend() so self.encoder.vars is not mutated in place.
    var_list = self.encoder.vars + self.decoder.vars
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=var_list)  # loss is computed earlier (omitted here)

def optimizer_discriminator(self):
    # Adversarial path: updates only the discriminator's variables.
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=self.disor.vars)  # loss is computed earlier (omitted here)

def optimizer_encoder(self):
    # Adversarial path for the generator side: updates only the encoder's variables,
    # which are also covered by the first optimizer above.
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=self.encoder.vars)  # loss is computed earlier (omitted here)

Hypothesis:
The failing call creates extra "shadow variables" that the optimizer uses to update the weights. For Adam these are the slot variables holding the first and second moment accumulators, and they are created under <variable name>/<optimizer name>, which is exactly the path AAE/Encoder/layer_0/W/Adam in the error. Both optimizer_encoder_decoder() and optimizer_encoder() include the encoder's weights in their var_list, and both AdamOptimizers use the default name 'Adam', so the second minimize() tries to create slot variables that already exist. Changing the scope/name under which these shadow variables are created should therefore fix the problem.
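
These shadow variables can be inspected directly. Below is a minimal sketch, independent of the AAE class above and assuming TensorFlow 1.x graph mode (the scope and variable names merely imitate the ones in the error message): it builds a single default-named Adam optimizer and prints the slot variables it creates for one weight.

import tensorflow as tf

# An illustrative weight under the same scope path as in the error message.
with tf.variable_scope('AAE'):
    with tf.variable_scope('Encoder'):
        with tf.variable_scope('layer_0'):
            W = tf.get_variable('W', shape=[4, 4])

loss = tf.reduce_sum(tf.square(W))
opt = tf.train.AdamOptimizer(learning_rate=1e-3)   # default name is 'Adam'
train_op = opt.minimize(loss, var_list=[W])

# Adam keeps per-variable moment accumulators in slots 'm' and 'v'; each slot is
# stored as an extra variable under <variable name>/<optimizer name>.
print(opt.get_slot(W, 'm').name)   # e.g. AAE/Encoder/layer_0/W/Adam:0
print(opt.get_slot(W, 'v').name)   # e.g. AAE/Encoder/layer_0/W/Adam_1:0

A second AdamOptimizer that also calls minimize() on W while keeping the default name would try to create its slots under the same AAE/Encoder/layer_0/W/Adam path, which is exactly what the ValueError above complains about.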

Solution:
Pass a name argument when constructing the AdamOptimizer instead of relying on the default name, so that each optimizer's slot variables are created under a distinct path. For example, for the third function, optimizer_encoder:

def optimizer_encoder(self):
    # With a non-default name, Adam's slot variables are created under
    # <variable name>/Adam_en instead of <variable name>/Adam, so they no
    # longer clash with the slots created by optimizer_encoder_decoder().
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate, name='Adam_en')
    return loss, optimizer.minimize(loss, var_list=self.encoder.vars)
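
The same idea can be checked in isolation. The sketch below, again assuming TensorFlow 1.x and using placeholder losses that merely stand in for the real reconstruction and adversarial losses, lets two Adam optimizers update the same weight; because the second one has its own name, its slot variables land under a different path and no collision occurs.

import tensorflow as tf

with tf.variable_scope('AAE'):
    with tf.variable_scope('Encoder'):
        with tf.variable_scope('layer_0'):
            W = tf.get_variable('W', shape=[4, 4])

# Stand-ins for the real reconstruction and adversarial losses over the encoder weights.
recon_loss = tf.reduce_sum(tf.square(W))
adv_loss = tf.reduce_mean(tf.abs(W))

opt_ae = tf.train.AdamOptimizer(learning_rate=1e-3)                  # default name 'Adam'
train_ae = opt_ae.minimize(recon_loss, var_list=[W])

opt_en = tf.train.AdamOptimizer(learning_rate=1e-3, name='Adam_en')  # distinct name
train_en = opt_en.minimize(adv_loss, var_list=[W])

print(opt_ae.get_slot(W, 'm').name)  # e.g. AAE/Encoder/layer_0/W/Adam:0
print(opt_en.get_slot(W, 'm').name)  # e.g. AAE/Encoder/layer_0/W/Adam_en:0

Strictly speaking, only optimizers whose var_lists overlap need distinct names (here optimizer_encoder_decoder and optimizer_encoder both cover the encoder's weights, while the discriminator optimizer does not), but giving each of the three AdamOptimizers its own name, for example 'Adam_ae', 'Adam_dis' and 'Adam_en', is the simplest habit.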
