We will use the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into a training set and a test set, each containing an equal number of positive and negative reviews.
This walkthrough uses Colab for the demonstration.
Import the required packages:
import tensorflow as tf
from tensorflow import keras
import numpy as np
The IMDB dataset ships with TensorFlow. It comes preprocessed: each review (a sequence of words) has been converted to a sequence of integers, where each integer represents a specific word in a dictionary.
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
The argument num_words=10000 keeps the 10,000 most frequently occurring words in the training data; rarer words are discarded to keep the size of the data manageable.
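As a quick sanity check (not part of the original tutorial), we can confirm that no word index in the loaded data reaches 10,000:
# Every index should be below 10,000; less frequent words were replaced
# by an out-of-vocabulary marker when the data was loaded.
print(max(max(sequence) for sequence in train_data))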
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of a review, and each label is an integer value of 0 or 1, where 0 is a negative review and 1 is a positive review.
print("Training entries: {}, labels: {}".format(len(train_data),len(train_labels)))
Training entries: 25000, labels: 25000
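The labels can be inspected the same way; a minimal check (not in the original tutorial) confirms they only take the values 0 and 1:
# Each label is an integer: 0 (negative) or 1 (positive).
print(np.unique(train_labels))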
The review texts have been converted to integers, each of which represents a specific word in a dictionary. Here is what the first review looks like:
print(train_data[0])
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
Reviews may vary in length. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must have the same length, we will need to resolve this later.
len(train_data[0]), len(train_data[1])
(218, 189)
It may be useful to know how to convert the integers back to text. Here we create a helper function to query a dictionary object that maps integers to strings:
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value,key) for (key,value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1646592/1641221 [==============================] - 0s 0us/step
Now we can use the decode_review function to display the text of the first review:
decode_review(train_data[0])
" this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert is an amazing actor and now the same being director father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also to the two little boy's that played the of norman and paul they were just brilliant children are often left out of the list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"
The reviews (arrays of integers) must be converted to tensors before they can be fed into the neural network. This conversion can be done in two ways:
- One-hot encode the arrays to turn them into vectors of 0s and 1s. This approach is memory-intensive, however, since it requires a num_words * num_reviews matrix (a sketch of this option appears after the padded example below).
- Alternatively, pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. An embedding layer capable of handling this shape can serve as the first layer of the network.
We will use the second approach. Since the reviews must all have the same length, we use the pad_sequences function to standardize the lengths:
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
Now let's look at the lengths of the examples:
len(train_data[0]),len(train_data[1])
(256, 256)
And inspect the (now padded) first review:
print(train_data[0])
[ 1 14 22 16 43 530 973 1622 1385 65 458 4468 66 3941
4 173 36 256 5 25 100 43 838 112 50 670 2 9
35 480 284 5 150 4 172 112 167 2 336 385 39 4
172 4536 1111 17 546 38 13 447 4 192 50 16 6 147
2025 19 14 22 4 1920 4613 469 4 22 71 87 12 16
43 530 38 76 15 13 1247 4 22 17 515 17 12 16
626 18 2 5 62 386 12 8 316 8 106 5 4 2223
5244 16 480 66 3785 33 4 130 12 16 38 619 5 25
124 51 36 135 48 25 1415 33 6 22 12 215 28 77
52 5 14 407 16 82 2 8 4 107 117 5952 15 256
4 2 7 3766 5 723 36 71 43 530 476 26 400 317
46 7 4 2 1029 13 104 88 4 381 15 297 98 32
2071 56 26 141 6 194 7486 18 4 226 22 21 134 476
26 480 5 144 30 5535 18 51 36 28 224 92 25 104
4 226 65 16 38 1334 88 12 16 283 5 16 4472 113
103 32 15 16 5345 19 178 32 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0]
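For completeness, the first conversion option mentioned above (one-hot / multi-hot encoding) could be sketched as follows. This is only an illustration of the memory-hungry alternative; it is not used anywhere in this tutorial, and it would be applied to the raw integer sequences before padding (raw_train_data below is a hypothetical name for those sequences):
def multi_hot_encode(sequences, dimension=10000):
    # Build a num_reviews x num_words matrix of 0s and 1s.
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0  # set the indices of the words that appear to 1
    return results
# Example (hypothetical): multi_hot_train = multi_hot_encode(raw_train_data)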
The input data consists of arrays of word indices, and the labels to predict are either 0 or 1. Let's build a model for this problem:
# The input shape is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size,16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16,activation=tf.nn.relu))
model.add(keras.layers.Dense(1,activation=tf.nn.sigmoid))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 16)          160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0
_________________________________________________________________
dense (Dense)                (None, 16)                272
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
The layers are stacked sequentially to build the classifier:
- The first layer is an Embedding layer. It looks up the embedding vector for each word index in the integer-encoded vocabulary. These vectors are learned as the model trains and add a dimension to the output array, which becomes (batch, sequence, embedding).
- Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle inputs of variable length in the simplest possible way.
- This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
- The last layer is densely connected to a single output node. With the sigmoid activation function, the result is a floating-point value between 0 and 1, representing a probability or confidence level.
Hidden units
The model above has two intermediate, or "hidden", layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimensionality of the layer's representational space; in other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representational space) and/or more layers, the network can learn more complex representations. However, this makes the network more computationally expensive and may lead it to learn unwanted patterns, patterns that improve performance on the training data but not on the test data. This is called overfitting.
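To make the idea concrete, here is a hypothetical higher-capacity variant of the model above (not part of this tutorial). With more units and an extra layer it can learn richer representations, but it is also more likely to overfit the training data:
# Higher-capacity variant, for illustration only.
bigger_model = keras.Sequential()
bigger_model.add(keras.layers.Embedding(vocab_size, 64))
bigger_model.add(keras.layers.GlobalAveragePooling1D())
bigger_model.add(keras.layers.Dense(64, activation=tf.nn.relu))
bigger_model.add(keras.layers.Dense(64, activation=tf.nn.relu))
bigger_model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))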
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the binary_crossentropy loss function.
This is not the only possible choice of loss function; we could, for example, use mean_squared_error. But generally speaking, binary_crossentropy is better suited for dealing with probabilities: it measures the "distance" between probability distributions, or, in our case, between the ground-truth distribution and the predictions.
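To see what binary_crossentropy actually computes, here is a minimal sketch for a single prediction (the helper below is hypothetical, written only for illustration):
def binary_crossentropy_example(y_true, y_pred, eps=1e-7):
    # Clip the prediction away from 0 and 1 to avoid log(0).
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(binary_crossentropy_example(1, 0.9))  # confident and correct -> small loss
print(binary_crossentropy_example(1, 0.1))  # confident but wrong -> large loss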
Now, configure the model to use an optimizer and a loss function:
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
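Note that tf.train.AdamOptimizer comes from the TensorFlow 1.x API that was current when this walkthrough was written. On TensorFlow 2.x, an equivalent compile call would use the Keras optimizer API instead, for example:
# Equivalent compilation with the Keras optimizer API (TF 2.x).
model.compile(optimizer=keras.optimizers.Adam(),
              loss='binary_crossentropy',
              metrics=['accuracy'])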
When training, we want to check the accuracy of the model on data it has not seen before. We create a validation set by setting apart 10,000 examples from the original training data.
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
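A quick check of the split sizes (not in the original tutorial) confirms we have 15,000 training samples and 10,000 validation samples:
print(len(partial_x_train), len(x_val))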
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all the samples in the partial_x_train and partial_y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples of the validation set:
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val,y_val),
verbose=1)
Train on 15000 samples, validate on 10000 samples
Epoch 1/40
15000/15000 [==============================] - 1s 88us/step - loss: 0.6918 - acc: 0.5989 - val_loss: 0.6892 - val_acc: 0.7238
Epoch 2/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.6847 - acc: 0.7355 - val_loss: 0.6799 - val_acc: 0.6934
Epoch 3/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.6710 - acc: 0.7478 - val_loss: 0.6629 - val_acc: 0.7382
Epoch 4/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.6475 - acc: 0.7506 - val_loss: 0.6368 - val_acc: 0.7743
Epoch 5/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.6136 - acc: 0.7957 - val_loss: 0.6007 - val_acc: 0.7859
Epoch 6/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.5710 - acc: 0.8147 - val_loss: 0.5597 - val_acc: 0.7971
Epoch 7/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.5238 - acc: 0.8349 - val_loss: 0.5163 - val_acc: 0.8233
Epoch 8/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.4765 - acc: 0.8525 - val_loss: 0.4751 - val_acc: 0.8363
Epoch 9/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.4332 - acc: 0.8634 - val_loss: 0.4390 - val_acc: 0.8480
Epoch 10/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.3946 - acc: 0.8770 - val_loss: 0.4086 - val_acc: 0.8548
Epoch 11/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.3623 - acc: 0.8851 - val_loss: 0.3860 - val_acc: 0.8582
Epoch 12/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.3353 - acc: 0.8915 - val_loss: 0.3639 - val_acc: 0.8656
Epoch 13/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.3109 - acc: 0.8979 - val_loss: 0.3483 - val_acc: 0.8695
Epoch 14/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.2907 - acc: 0.9025 - val_loss: 0.3346 - val_acc: 0.8729
Epoch 15/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.2731 - acc: 0.9075 - val_loss: 0.3240 - val_acc: 0.8745
Epoch 16/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.2581 - acc: 0.9119 - val_loss: 0.3154 - val_acc: 0.8772
Epoch 17/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.2437 - acc: 0.9169 - val_loss: 0.3081 - val_acc: 0.8782
Epoch 18/40
15000/15000 [==============================] - 1s 72us/step - loss: 0.2312 - acc: 0.9221 - val_loss: 0.3022 - val_acc: 0.8809
Epoch 19/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.2196 - acc: 0.9259 - val_loss: 0.2977 - val_acc: 0.8819
Epoch 20/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.2093 - acc: 0.9291 - val_loss: 0.2936 - val_acc: 0.8822
Epoch 21/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.1995 - acc: 0.9329 - val_loss: 0.2904 - val_acc: 0.8829
Epoch 22/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.1904 - acc: 0.9369 - val_loss: 0.2884 - val_acc: 0.8827
Epoch 23/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1822 - acc: 0.9396 - val_loss: 0.2868 - val_acc: 0.8831
Epoch 24/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1739 - acc: 0.9444 - val_loss: 0.2850 - val_acc: 0.8851
Epoch 25/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1668 - acc: 0.9477 - val_loss: 0.2842 - val_acc: 0.8849
Epoch 26/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1596 - acc: 0.9495 - val_loss: 0.2839 - val_acc: 0.8846
Epoch 27/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1535 - acc: 0.9521 - val_loss: 0.2847 - val_acc: 0.8840
Epoch 28/40
15000/15000 [==============================] - 1s 72us/step - loss: 0.1473 - acc: 0.9552 - val_loss: 0.2840 - val_acc: 0.8861
Epoch 29/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1415 - acc: 0.9569 - val_loss: 0.2848 - val_acc: 0.8860
Epoch 30/40
15000/15000 [==============================] - 1s 72us/step - loss: 0.1365 - acc: 0.9588 - val_loss: 0.2862 - val_acc: 0.8863
Epoch 31/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1305 - acc: 0.9615 - val_loss: 0.2874 - val_acc: 0.8863
Epoch 32/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1258 - acc: 0.9638 - val_loss: 0.2892 - val_acc: 0.8857
Epoch 33/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1205 - acc: 0.9663 - val_loss: 0.2910 - val_acc: 0.8851
Epoch 34/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.1161 - acc: 0.9683 - val_loss: 0.2938 - val_acc: 0.8846
Epoch 35/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1122 - acc: 0.9683 - val_loss: 0.2952 - val_acc: 0.8854
Epoch 36/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1074 - acc: 0.9715 - val_loss: 0.2981 - val_acc: 0.8842
Epoch 37/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.1036 - acc: 0.9725 - val_loss: 0.3010 - val_acc: 0.8841
Epoch 38/40
15000/15000 [==============================] - 1s 71us/step - loss: 0.1003 - acc: 0.9736 - val_loss: 0.3040 - val_acc: 0.8825
Epoch 39/40
15000/15000 [==============================] - 1s 70us/step - loss: 0.0961 - acc: 0.9754 - val_loss: 0.3061 - val_acc: 0.8832
Epoch 40/40
15000/15000 [==============================] - 1s 69us/step - loss: 0.0926 - acc: 0.9775 - val_loss: 0.3096 - val_acc: 0.8834
Evaluate the model
results = model.evaluate(test_data,test_labels)
print(results)
25000/25000 [==============================] - 1s 40us/step
[0.3304368497276306, 0.87272]
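This fairly naive approach achieves an accuracy of about 87%. To look at the raw outputs of the trained model on individual reviews, we could also call model.predict (a small sketch, not part of the original tutorial):
# Sigmoid outputs close to 1 indicate a positive prediction, close to 0 a negative one.
predictions = model.predict(test_data[:3])
print(predictions)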
model.fit() returns a History object, which contains a dictionary with everything that happened during training:
history_dict = history.history
history_dict.keys()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
There are four entries: one for each metric monitored during training and validation. We can use them to plot the training loss against the validation loss, and the training accuracy against the validation accuracy:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1,len(acc)+1)
plt.plot(epochs,loss,'bo',label='Training loss')
plt.plot(epochs,val_loss,'r',label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Notice that the training loss decreases and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the target quantity on every iteration.
This is not the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than on data it has never seen before. Beyond this point, the model over-optimizes and learns representations specific to the training data that do not generalize to the test data.
For this particular case, we could prevent overfitting by simply stopping training after twenty or so epochs.
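Rather than picking the cut-off by hand, the same effect could be achieved with an EarlyStopping callback, sketched below under the assumption of a reasonably recent Keras version (restore_best_weights is not available in very old releases). The exact numbers would of course differ from the run shown above:
# Stop training once the validation loss stops improving for 3 epochs in a row,
# and roll back to the weights from the best epoch.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                           restore_best_weights=True)
history = model.fit(partial_x_train, partial_y_train,
                    epochs=40, batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[early_stop], verbose=1)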