This is an example of binary (two-class) classification, an important and widely applicable kind of machine-learning problem.
https://tensorflow.google.cn/tutorials/keras/basic_text_classification
TensorFlow ships with the IMDB dataset, already preprocessed: each review (a sequence of words) has been converted into a sequence of integers, where each integer stands for a specific word in a dictionary.
Each example is therefore an array of integers representing the words of a review, and each label is the integer 0 or 1, where 0 marks a negative review and 1 a positive one.
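To make the encoding concrete, here is a minimal toy sketch of the idea (the vocabulary below is made up for illustration; it is not the real IMDB word index):
vocab = {"the": 1, "movie": 2, "was": 3, "great": 4}
review = "the movie was great"
encoded = [vocab[word] for word in review.split()]
print(encoded)  # [1, 2, 3, 4] -- one sample; its label would be 1 (positive)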
Create text_classify.py:
# encoding=utf-8
# https://tensorflow.google.cn/tutorials/keras/basic_text_classification
# TensorFlow ships with the IMDB dataset, preprocessed so that each review
# (a sequence of words) is a sequence of integers, each standing for a
# specific word in a dictionary.
# Each example is an array of integers; each label is 0 (negative review)
# or 1 (positive review).
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__) # 2.0.0-beta1
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
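# num_words=10000 keeps only the 10,000 most frequent words in the training
# data and discards rarer ones, keeping the vocabulary (and the Embedding
# layer below) at a manageable size.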
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
# print(train_data[0])
print(train_labels[0]) # 1
print(len(train_data[0]), len(train_data[1]))
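# The two reviews above have different lengths, yet inputs to the network
# must all be the same length; the padding step below resolves this.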
# Convert the integers back to words
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}  # shift every index up by 3 to make room for the reserved tokens below
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_review(train_data[0]))
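# This should print the first review as readable text; it begins with the
# inserted "<START>" token followed by the review's words.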
# 准备数据
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
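# Both datasets now consist of length-256 sequences: shorter reviews are
# padded at the end (padding='post') with the <PAD> index, longer ones are
# truncated.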
print(len(train_data[0]), len(train_data[1]))
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
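# Layer by layer: Embedding maps each word index to a 16-dimensional vector
# (output: batch x sequence x 16); GlobalAveragePooling1D averages over the
# sequence axis, giving one fixed-length vector per review regardless of its
# length; Dense(16, relu) adds learned nonlinearity; Dense(1, sigmoid)
# outputs a probability between 0 and 1 that the review is positive.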
model.summary()
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
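# binary_crossentropy matches the single sigmoid output: it measures the
# distance between the predicted probability and the true 0/1 label.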
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
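# Hold out the first 10,000 training examples as a validation set and train
# on the remaining 15,000, so tuning decisions never touch the test set.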
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=10,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
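# Trains for 10 epochs in mini-batches of 512 samples and reports loss and
# accuracy on the held-out validation set after every epoch.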
results = model.evaluate(test_data, test_labels)
print(results)
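# evaluate() returns [loss, accuracy], in the order set up in compile();
# the linked tutorial reports test accuracy in the region of 0.87 for this model.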
# history_dict = history.history
# history_dict.keys()
import matplotlib.pyplot as plt
# In TF 2.x the history keys are 'accuracy'/'val_accuracy'; older standalone
# Keras used 'acc'/'val_acc', so fall back to those just in case.
acc = history.history.get('accuracy', history.history.get('acc'))
val_acc = history.history.get('val_accuracy', history.history.get('val_acc'))
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)  # one point per training epoch
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
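# The accuracy curves fetched above can be plotted the same way; this mirrors
# the second plot in the linked tutorial:
plt.clf()  # clear the loss figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()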
print(model.predict(train_data[:1]))  # predict the first review, e.g. [[0.7266425]]
Run output: