Sentiment classification -- It is possible to have Graph tensors leak out of the function building context by including a

When the __init__ method uses dropout, calling fit raises the error below. Adding experimental_run_tf_function=False makes the error disappear, but I did not understand what this argument does:

        self.rnn_cell0 = layers.SimpleRNNCell(units, dropout=0.2)

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: my_rnn/simple_rnn_cell/cond/Identity:0

tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found []

The fix is applied in the full code below.
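What experimental_run_tf_function=False does: in TF 2.0/2.1, Model.compile accepts an experimental_run_tf_function argument that controls whether Keras wraps its training, evaluation, and prediction steps in tf.function. Setting it to False falls back to the older execution path, which tolerates the symbolic dropout-mask tensors that SimpleRNNCell creates, so the error above disappears. A minimal sketch of the workaround (note that the flag only exists in TF 2.0/2.1; later releases ignore or reject it):

model.compile(optimizer=keras.optimizers.Adam(0.001),
              loss=tf.losses.BinaryCrossentropy(),
              metrics=['accuracy'],
              # fall back to the non-tf.function execution path (TF 2.0/2.1 only)
              experimental_run_tf_function=False)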

SIMPLE RNN LAYER Implementation

Tags: deep learning


1. SIMPLE RNN LAYER flowchart

(flowchart image from the original post not reproduced here)

2. Runnable code using the IMDB dataset

import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Seed the global random number generator
tf.random.set_seed(22)
# With the same seed, the same random numbers are generated on every run
np.random.seed(22)
# Print only error messages; suppress warnings and other logs
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
"""
startswith('2.') checks whether tf.__version__ begins with '2.', returning True or False.
assert evaluates the expression that follows it: if True, execution continues;
if False, an AssertionError is raised.
"""
assert tf.__version__.startswith('2.')


batchsz = 128

# Vocabulary size: keep only the 10000 most frequent words
total_words = 10000
# Maximum number of words per sentence: longer reviews are truncated to 80,
# shorter ones are padded with 0
max_review_len = 80
# Embedding dimension: each word is represented by a 100-dimensional vector
embedding_len = 100


"""载入数据集, imdb 是一个关于电影评论的数据集,参数num_words=total_words 表示单词数量为total_words
把超出这个范围的生僻单词视为同一个单词"""
(train_data,train_labels),(test_data,test_labels) = keras.datasets.imdb.load_data(num_words=total_words)

print(train_data.shape, train_labels.shape, test_data.shape, test_labels.shape)

# train_data: [b, 80]  pad every review in train_data to the same length:
# shorter reviews are padded with 0, longer ones are truncated
train_data = keras.preprocessing.sequence.pad_sequences(train_data, maxlen=max_review_len)
# test_data: [b, 80]  pad every review in test_data the same way
test_data = keras.preprocessing.sequence.pad_sequences(test_data, maxlen=max_review_len)
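# Note (not in the original post): pad_sequences pads and truncates at the front
# by default (padding='pre', truncating='pre'), e.g.
#   pad_sequences([[1, 2, 3]], maxlen=5)  ->  [[0, 0, 1, 2, 3]]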

# Build tf.data pipelines from the arrays
db_train = tf.data.Dataset.from_tensor_slices((train_data, train_labels))
# drop_remainder=True discards the final batch if it has fewer than batchsz samples
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((test_data, test_labels))
db_test = db_test.batch(batchsz, drop_remainder=True)
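# Note (not in the original post): drop_remainder=True is required here because
# MyRnn below builds its initial hidden states with the fixed shape [batchsz, units];
# a smaller final batch would not match those state tensors.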




# Define the MyRnn model
class MyRnn(keras.Model):


    def __init__(self, units):
        # Build the initial hidden states
        super(MyRnn, self).__init__()


        # [b, 64]: one zero tensor per cell, wrapped in a list because
        # SimpleRNNCell expects its state as a list of tensors
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]

        """
        embedding 层,用于数据类型的编码(嵌入表示),第一个参数表示数据中单词数量的总数,
         第二个参数表示每个单词的编码维度,
         第三个单词表示每个句子的长度(全部padding为80了)
         [b, 80] => [b, 80, 100]
         transform text to embedding representation
        """
        self.embedding = layers.Embedding(total_words,embedding_len,input_length=max_review_len)

        """
        定义RNN单元
        # [b, 80, 100] , h_dim: 64
        # units 参数表示Cell的隐含层的维度 h_dim
        # dropout=0.5表示随机丢弃节点,提高效率并且降低过拟合(一般只在training时起作用)
        # [b, 80, 100] => [b, 64]
        """
        self.rnn_cell0 = layers.SimpleRNNCell(units,dropout=0.5)
        self.rnn_cell1 = layers.SimpleRNNCell(units,dropout=0.5)
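        # Note (not in the original post): in TF 2.0/2.1 the dropout masks created
        # by these cells are the symbolic "Graph" tensors that leak into the eager
        # training function and trigger the error quoted at the top of this post;
        # hence experimental_run_tf_function=False in compile() below.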

        # Fully connected classification layer with a single output node
        # [b, 64] => [b, 1]
        self.outlayer = layers.Dense(1)


    # Define the forward pass
    def call(self, inputs, training=None):
        """
        net(x) or net(x, training=True): train mode
        net(x, training=False): test mode
        :param inputs: [b, 80]
        :param training: True during training, False/None at inference
        :return: probability that the review is positive
        """
        # [b, 80]
        x = inputs
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # RNN cell computation, manually unrolled over the 80 time steps
        # [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1

        # Iterate over the time axis; each word has shape [b, 100]
        for word in tf.unstack(x, axis=1):
            # h1 = x @ w_xh + h0 @ w_hh
            # out0: [b, 64]
            out0, state0 = self.rnn_cell0(word, state0, training=training)
            out1, state1 = self.rnn_cell1(out0, state1, training=training)
        # Take the last time step's output: [b, 64] => [b, 1]
        x = self.outlayer(out1)
        # p(y is pos|x)
        prob = tf.sigmoid(x)
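        # Note (not in the original post): sigmoid squashes the logit into (0, 1);
        # BinaryCrossentropy() below is left at its default from_logits=False,
        # so it expects exactly this probability.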

        return prob


def main():
    units = 64
    epochs = 4

    model = MyRnn(units)
    # experimental_run_tf_function=False works around the dropout error
    # described at the top of this post
    model.compile(optimizer=keras.optimizers.Adam(0.001),
                  loss=tf.losses.BinaryCrossentropy(),
                  metrics=['accuracy'],
                  experimental_run_tf_function=False)
    # Train the RNN
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Evaluate on the test set
    model.evaluate(db_test)


if __name__=='__main__':
    main()
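
For comparison, a minimal alternative sketch (not from the original post, assuming a recent TF 2.x release): the same two-layer architecture built with the high-level layers.SimpleRNN wrapper, which handles the time-step loop and per-batch dropout masks itself, so it needs neither fixed-size state tensors nor the experimental_run_tf_function workaround.

class MyRnnV2(keras.Model):

    def __init__(self, units):
        super(MyRnnV2, self).__init__()
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # return_sequences=True feeds every time step's output to the next layer
        self.rnn = keras.Sequential([
            layers.SimpleRNN(units, dropout=0.5, return_sequences=True),
            layers.SimpleRNN(units, dropout=0.5),
        ])
        self.outlayer = layers.Dense(1)

    def call(self, inputs, training=None):
        x = self.embedding(inputs)           # [b, 80] => [b, 80, 100]
        x = self.rnn(x, training=training)   # [b, 80, 100] => [b, 64]
        return tf.sigmoid(self.outlayer(x))  # [b, 64] => [b, 1]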
