The "unsupported operand type(s) for *: 'float' and 'NoneType'" error in TensorFlow 2 forward propagation

While learning forward propagation with TensorFlow 2, I ran into the error unsupported operand type(s) for *: 'float' and 'NoneType'.
My original (faulty) code is as follows:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets

# Load MNIST: x has shape [60000, 28, 28], y has shape [60000]
(x, y), _ = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32)/255.  # scale pixels to [0, 1]
y = tf.convert_to_tensor(y, dtype=tf.int32)

train_db = tf.data.Dataset.from_tensor_slices((x, y)).batch(128)  # batches of 128



# Network shape: [b, 784] => [b, 256] => [b, 128] => [b, 10]
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
w2 = tf.Variable(tf.random.truncated_normal([256, 128], stddev=0.1))
b2 = tf.Variable(tf.zeros([128]))
w3 = tf.Variable(tf.random.truncated_normal([128, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]))

lr = 1e-3  # learning rate

for epoch in range(5):
    for i, (x, y) in enumerate(train_db):
        x = tf.reshape(x, [-1, 28*28])  # flatten images: [b, 28, 28] => [b, 784]
        with tf.GradientTape() as tape:
            h1 = tf.nn.relu(x@w1 + b1)   # [b, 784] => [b, 256]
            h2 = tf.nn.relu(h1@w2 + b2)  # [b, 256] => [b, 128]
            out = h2@w3 + b3             # [b, 128] => [b, 10]

            y_onehot = tf.one_hot(y, depth=10)
            loss = tf.square(y_onehot - out)
            loss = tf.reduce_mean(loss)  # mean-squared-error loss

        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        # Faulty updates: each assignment re-binds the name to a new Tensor
        w1 = w1 - lr * grads[0]
        b1 = b1 - lr * grads[1]
        w2 = w2 - lr * grads[2]
        b2 = b2 - lr * grads[3]
        w3 = w3 - lr * grads[4]
        b3 = b3 - lr * grads[5]

        if i % 200 == 0:
            print(epoch, i, 'loss:', float(loss))

Output: (screenshot of the run: the first loss value prints, followed by the traceback ending in unsupported operand type(s) for *: 'float' and 'NoneType')

Problem Analysis

The output shows that the first loss prints successfully, and the error is raised on the next iteration.
Take w1 as an example. It is initially created as a tf.Variable, but updating it with w1 = w1 - lr * grads[0] re-binds the name w1 to an ordinary Tensor rather than a Variable. By default, tf.GradientTape only watches trainable Variables, so on the second iteration tape.gradient returns None for w1, and lr * grads[0] becomes float * None, which raises the error above. Let's verify this.
To verify, add the following right below the w1 update statement:

        print(isinstance(w1, tf.Variable))
        print(isinstance(w1, tf.Tensor))

to check w1's type after one update. The result:
(screenshot: isinstance(w1, tf.Variable) prints False and isinstance(w1, tf.Tensor) prints True)
So the updated w1 is indeed a Tensor, not a Variable.
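The root cause can also be reproduced in isolation. Here is a minimal standalone sketch (my own illustration, not part of the original code) showing the chain: the re-bound name holds a plain Tensor, the tape does not watch it, and the gradient comes back as None:

import tensorflow as tf

w = tf.Variable(2.0)
w = w - 0.001 * 1.0                 # re-binds w to a plain EagerTensor

with tf.GradientTape() as tape:
    loss = w * w                    # the tape auto-watches only tf.Variable

print(isinstance(w, tf.Variable))   # False
print(tape.gradient(loss, w))       # None; 0.001 * None then raises the TypeError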

Solution

The fix is the tf.Variable.assign_sub method.
Calling variable.assign_sub(delta) computes variable = variable - delta in place: the value changes, but the object remains a tf.Variable, so the tape keeps tracking it on every iteration.
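A quick standalone check (again my own sketch, not from the original post) confirms that assign_sub keeps the Variable type:

import tensorflow as tf

w = tf.Variable(tf.ones([2]))
w.assign_sub(tf.constant([0.1, 0.1]))  # in place: w <- w - delta
print(isinstance(w, tf.Variable))      # True, still a Variable
print(w.numpy())                       # [0.9 0.9]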
With that, simply replace the parameter-update code above with

        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])
        w3.assign_sub(lr * grads[4])
        b3.assign_sub(lr * grads[5])

and the problem is solved.
The results:

0 0 loss: 0.38831597566604614
0 200 loss: 0.200173020362854
0 400 loss: 0.17485864460468292
1 0 loss: 0.1700868308544159
1 200 loss: 0.15430499613285065
1 400 loss: 0.14667990803718567
2 0 loss: 0.14267532527446747
2 200 loss: 0.133304625749588
2 400 loss: 0.12999853491783142
3 0 loss: 0.12561069428920746
3 200 loss: 0.11988484859466553
3 400 loss: 0.1189001053571701
4 0 loss: 0.11386539041996002
4 200 loss: 0.11029555648565292
4 400 loss: 0.1108940988779068

Process finished with exit code 0
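As a final note, this kind of in-place update is exactly what Keras optimizers do internally. An equivalent alternative (a sketch I have not wired into the full script above, assuming the same grads and variable list) would be:

optimizer = tf.keras.optimizers.SGD(learning_rate=lr)

# inside the training loop, replacing the six assign_sub calls:
grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
optimizer.apply_gradients(zip(grads, [w1, b1, w2, b2, w3, b3]))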
