My idea is to add the corresponding digit as an extra input to G;
in D's output, split things into a real/fake discrimination output and a num value output;
use the discrimination and the digit estimate on real inputs as D's loss;
use the discrimination and the digit estimate on fake inputs as G's loss.
None of this strays far from the original design, so the amount of change is small.
A further idea is to use the discrimination loss and the digit-estimation loss to update the parameters of different layers separately; that is a next step (a rough sketch follows below).
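For the record, that per-layer idea would roughly look like this in TF1: gather the discriminator's variables by scope and hand each optimizer a different var_list. This is only a sketch under assumptions: it presumes the losses d_loss_real, d_loss_fake, d_loss_num_real defined later in this post, the scope name "discriminator" from the code below, a guessed learning rate of 0.001, and auto-generated layer names ('dense/', 'dense_1/', ...) whose exact values depend on declaration order.

# all trainable variables living under the discriminator scope
d_vars = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
# hypothetical split: the shared first dense layer vs. the two output heads
# (the '/dense/' filter matches only the first auto-named dense layer)
d_body_vars = [v for v in d_vars if '/dense/' in v.name]
d_head_vars = [v for v in d_vars if v not in d_body_vars]
# update each parameter group with its own loss
d_train_body = tf.train.AdamOptimizer(0.001).minimize(d_loss_real + d_loss_fake, var_list=d_body_vars)
d_train_head = tf.train.AdamOptimizer(0.001).minimize(d_loss_num_real, var_list=d_head_vars)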
I added noise_num and real_num outputs to the input function. Since I don't yet know how to feed two tensors into a network as separate inputs, I just concatenate the noise img and num inside the input function and feed the combined tensor to G.
import tensorflow as tf

def get_inputs(real_size, noise_size, noise_n):
    # placeholders for the real image and its digit label
    real_img = tf.placeholder(tf.float32, [None, real_size], name='real_img')
    real_num = tf.placeholder(tf.float32, [None, 1], name='real_num')
    # placeholders for the noise vector and the digit G should draw
    noise_img_o = tf.placeholder(tf.float32, [None, noise_size], name='noise_img_o')
    noise_num = tf.placeholder(tf.float32, [None, 1], name='noise_num')
    #real_img = tf.concat([real_img_o, real_num], 1, name='real_img')
    # concatenate noise and digit so G sees a single tensor of width noise_size + 1
    noise_img = tf.concat([noise_img_o, noise_num], 1, name='noise_img')
    return real_img, noise_img, real_num, noise_num
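A quick shape check of what this returns (the sizes 784 and 100 are my assumption for MNIST; the post doesn't pin them down):

real_img, noise_img, real_num, noise_num = get_inputs(real_size=784, noise_size=100, noise_n=1)
print(noise_img.shape)  # (?, 101): noise_size plus the concatenated digit
print(real_img.shape)   # (?, 784)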
The G network is basically unchanged; its input width just grows by 1, and nothing else changes.
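The generator itself isn't shown here; for reference, a minimal sketch of what the unchanged G presumably looks like in the usual single-hidden-layer MNIST GAN (the function and argument names are my assumption, not code from this post):

def get_generator(noise_img, n_units, out_dim, reuse=False, alpha=0.01):
    with tf.variable_scope("generator", reuse=reuse):
        # noise_img already carries the digit, so its width is noise_size + 1
        hidden1 = tf.layers.dense(noise_img, n_units)
        hidden1 = tf.maximum(alpha * hidden1, hidden1)  # leaky ReLU
        logits = tf.layers.dense(hidden1, out_dim)
        outputs = tf.tanh(logits)  # pixel values in [-1, 1]
        return logits, outputs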
The D network is modified at its tail end:
def get_discriminator(img, n_units, reuse=False, alpha=0.01):
    with tf.variable_scope("discriminator", reuse=reuse):
        hidden1 = tf.layers.dense(img, n_units)
        hidden1 = tf.maximum(alpha * hidden1, hidden1)  # leaky ReLU
        # head 1: 10-way logits for the digit value
        logits_max = tf.layers.dense(hidden1, 10)
        outputs_num = tf.nn.softmax(logits_max)
        # head 2: real/fake logit, stacked on top of the digit logits
        logits_sig = tf.layers.dense(logits_max, 1)
        outputs = tf.sigmoid(logits_sig)
        return logits_sig, logits_max, outputs_num, outputs
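One design choice worth flagging: the real/fake logit here is computed from the 10-way digit logits rather than from hidden1, so the real/fake decision is forced through the class prediction. Branching both heads directly off hidden1 would keep the two outputs independent; for now I keep the stacked version as written.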
Here the outputs follow the plan above. But, my god!!! I don't know why, but the line
d_loss = tf.add(d_loss_real, d_loss_fake, d_loss_num_real)
keeps throwing an error!!!
All three values should be tensors; each comes out of tf.reduce_mean. I really don't get it. I've been at this for a whole day!!!
At this point I'm honestly ready to give up on TensorFlow. I had just started to appreciate its graph building and sessions, but when something like this comes up during graph construction I have no idea how to debug it. My current guess is that the three tensors have different dimensions at the summation. But since we haven't reached the session yet, tfdbg can't even show me the tensor values. I don't know what to do; an old hand would probably solve this in seconds. **I give up!** I'm off to look at PyTorch.
File "", line 1, in
runfile('E:/nuts/20180502_personal/learn/DeepLearning/mnist_GAN/try_Mnist_GAN.py', wdir='E:/nuts/20180502_personal/learn/DeepLearning/mnist_GAN')
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "E:/nuts/20180502_personal/learn/DeepLearning/mnist_GAN/try_Mnist_GAN.py", line 94, in
d_loss = tf.add(d_loss_real, d_loss_fake, d_loss_num_real)
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 374, in add
"Add", x=x, y=y, name=name)
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 394, in _apply_op_helper
with g.as_default(), ops.name_scope(name) as scope:
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\tensorflow\python\framework\ops.py", line 6088, in __enter__
return self._name_scope.__enter__()
File "d:\programdata\anaconda3\envs\l_tf_36\lib\contextlib.py", line 81, in __enter__
return next(self.gen)
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\tensorflow\python\framework\ops.py", line 3980, in name_scope
if name:
File "d:\programdata\anaconda3\envs\l_tf_36\lib\site-packages\tensorflow\python\framework\ops.py", line 653, in __bool__
raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
# discriminator
d_logits_sig_real, d_logits_max_real, d_num_real, d_outputs_real = get_discriminator(real_img, d_units)
d_logits_sig_fake, d_logits_max_fake, d_num_fake, d_outputs_fake = get_discriminator(g_outputs, d_units, reuse=True)

# discriminator loss
# recognizing real images (with label smoothing on the real labels)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_sig_real, labels=tf.ones_like(d_logits_sig_real) * (1 - smooth)))
# recognizing generated images
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_sig_fake, labels=tf.zeros_like(d_logits_sig_fake)))
# recognizing the digit value of real images
d_loss_num_real = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=d_logits_max_real, labels=tf.cast(tf.reduce_sum(real_num, 1), dtype=tf.int32)))
# total loss -- this is the line that blows up: tf.add takes exactly two tensors,
# so d_loss_num_real is swallowed as the name argument (see the fix sketch below)
d_loss = tf.add(d_loss_real, d_loss_fake, d_loss_num_real)

# generator loss
g_loss_realfake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_sig_fake, labels=tf.ones_like(d_logits_sig_fake) * (1 - smooth)))
# the digit value of generated images
g_loss_num = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=d_logits_max_fake, labels=tf.cast(tf.reduce_sum(noise_num, 1), dtype=tf.int32)))
# total loss
g_loss = tf.add(g_loss_realfake, g_loss_num)
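In hindsight, the traceback gives the cause away, and it isn't a shape mismatch at all: the signature is tf.add(x, y, name=None), so it takes exactly two tensors and the third positional argument is the op name. d_loss_num_real therefore lands in name, ops.name_scope(name) evaluates `if name:` on a tensor, and that is precisely the TypeError above. A minimal fix sketch, keeping everything else as-is:

# tf.add_n sums a list of same-shaped tensors in one op
d_loss = tf.add_n([d_loss_real, d_loss_fake, d_loss_num_real])
# equivalent via operator overloading:
# d_loss = d_loss_real + d_loss_fake + d_loss_num_real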