Text Classification with CNN

1. CNN Structure

(1) Input Layer

The figure shows a CNN model for image recognition. The ship image on the far left is our input layer; the computer treats it as a set of input matrices, which is essentially the same as in a DNN.

(2) Convolution Layer

The convolution layer is specific to CNNs. Each node in a convolution layer takes as input only a small patch of the previous layer, commonly 3x3 or 5x5. In general, passing through a convolution layer makes the resulting matrix deeper. The activation function used in the convolution layer is ReLU, which we already introduced for DNNs; it is very simple: ReLU(x) = max(0, x). We will cover convolution in detail later.
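As a quick illustration, ReLU can be written in a couple of lines of NumPy (a minimal sketch, separate from the training script below):

import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied element-wise
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # -> [0.  0.  0.  1.5 3. ]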

(3) Pooling Layer

A pooling layer follows the convolution layer. It is also specific to CNNs, and we will cover it in detail later. Note that a pooling layer has no activation function. It does not change the depth of the 3-D tensor, but it shrinks the matrix's width and height, which reduces the number of parameters in the whole network.
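For intuition, here is a minimal NumPy sketch of 2x2 max pooling: it halves the width and height of a single-channel feature map without adding any parameters. This is an illustration only, not part of the model code below.

import numpy as np

def max_pool_2x2(feature_map):
    # feature_map: (height, width) with even dimensions; take the max of each 2x2 block
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.arange(16, dtype=np.float32).reshape(4, 4)
print(max_pool_2x2(fm).shape)  # (2, 2) -- the 4x4 map shrinks to 2x2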

The convolution + pooling combination can appear many times in the hidden layers; in the figure above it appears twice. In practice, how many times it repeats depends on the needs of the model. We can also freely use combinations such as convolution + convolution, or convolution + convolution + pooling; there is no restriction when building a model. That said, the most common CNNs stack several convolution + pooling blocks, as in the CNN structure shown above.

(4) Fully Connected Layer & Softmax Layer

After the convolution and pooling layers comes the fully connected layer (FC). The fully connected part is just the DNN structure we discussed earlier, except that the output layer uses a Softmax activation to produce the classification for image recognition; this is the same as in a DNN.
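A minimal NumPy sketch of that last step, a fully connected layer followed by Softmax (the sizes are made up for illustration; the real model does this inside the TextCNN graph used below):

import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

features = np.array([[0.2, 1.3, -0.7]])                # hypothetical pooled features for one sample
W = np.array([[0.5, -0.5], [0.1, 0.2], [-0.3, 0.4]])   # FC weights: 3 features -> 2 classes
b = np.zeros(2)
probs = softmax(features.dot(W) + b)                   # class probabilities, rows sum to 1
print(probs, probs.sum())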

2. TextCNN Structure

The TextCNN architecture is fairly simple. The input first passes through an embedding layer to obtain an embedding representation of the sentence, then through a convolution layer that extracts sentence features, and finally through a fully connected layer that produces the output. The overall structure is shown in the figure below:

[Figure 1: TextCNN model structure]
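That shape flow can be sketched with TF 1.x ops as below. This is a hedged sketch with toy sizes and a single filter width, not the actual cnn_graph.TextCNN implementation used by the training script; the real class builds several filter widths and concatenates their pooled outputs.

import tensorflow as tf

# Assumed toy sizes, for illustration only
vocab_size, embed_dim, seq_len, num_filters, num_classes = 5000, 200, 60, 40, 2

input_x = tf.placeholder(tf.int32, [None, seq_len])
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
# Embedding layer: word ids -> (batch, seq_len, embed_dim, 1); the trailing 1 is the "channel"
embedded = tf.expand_dims(tf.nn.embedding_lookup(embedding, input_x), -1)

# Convolution over 3-word windows, followed by max-over-time pooling
filter_w = tf.Variable(tf.truncated_normal([3, embed_dim, 1, num_filters], stddev=0.1))
conv = tf.nn.relu(tf.nn.conv2d(embedded, filter_w, strides=[1, 1, 1, 1], padding="VALID"))
pooled = tf.nn.max_pool(conv, ksize=[1, seq_len - 3 + 1, 1, 1],
                        strides=[1, 1, 1, 1], padding="VALID")   # (batch, 1, 1, num_filters)
flat = tf.reshape(pooled, [-1, num_filters])

# Fully connected layer + softmax over the two classes
W_fc = tf.Variable(tf.truncated_normal([num_filters, num_classes], stddev=0.1))
b_fc = tf.Variable(tf.constant(0.1, shape=[num_classes]))
probs = tf.nn.softmax(tf.nn.xw_plus_b(flat, W_fc, b_fc))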

3. Model Evaluation and Tuning

For classification problems, the usual metrics are accuracy, recall, the F1 score, and the confusion matrix; multi-label text classification often also weights labels by position. The main hyperparameters of the classification model are the word-vector dimensionality, the number of convolution filters, the filter window sizes, the L2 regularization strength, the dropout rate, and the learning rate; these deserve the most attention while tuning. In addition, class imbalance in the dataset usually has a noticeable effect on the model, so it is worth trying several remedies and comparing how each affects model quality.
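A minimal scikit-learn sketch of computing these metrics from predicted labels (illustrative only; the training script below reports accuracy directly):

from sklearn.metrics import accuracy_score, recall_score, f1_score, confusion_matrix

# Hypothetical gold labels and predictions for a binary sentiment task
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
print("recall  :", recall_score(y_true, y_pred))
print("f1      :", f1_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))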

4. Common Problems in Text Classification

1. Class imbalance in the dataset. When the sample counts of the different classes in the corpus differ greatly, the quality of the final text classification model suffers. There are two main families of solutions:
(1) Adjust the data: data augmentation. In NLP this usually means randomly shuffling the word order of a tokenized sample, dropping some of its words, and then building new samples with stratified sampling (see the sketch after this list).
(2) Use a cost-sensitive loss function, for example Focal Loss from image recognition.
2. Generalization ability of the classification model. First, for an unseen sample the model can only output one of the predefined labels; it cannot handle samples that fall outside the label set. We cannot predict what future data will look like, nor guarantee that every future case already appears in the training set. The remaining factor that hurts generalization is overfitting. How to prevent overfitting is the usual story:
(1) On the data side: cross-validation.
(2) On the model side: Dropout, BatchNorm, regularization terms, early stopping.
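A minimal sketch of the shuffle-and-drop augmentation mentioned in item (1) above, applied to one tokenized sample (the drop probability and window size are assumptions, not a fixed recipe):

import random

def augment_tokens(tokens, drop_prob=0.1, shuffle_window=3):
    # Randomly drop a few tokens, then lightly shuffle word order inside small windows
    kept = [t for t in tokens if random.random() > drop_prob] or list(tokens)
    out = list(kept)
    for i in range(0, len(out), shuffle_window):
        chunk = out[i:i + shuffle_window]
        random.shuffle(chunk)
        out[i:i + shuffle_window] = chunk
    return out

print(augment_tokens("the screen of this phone is really great".split()))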

For the theory in detail, see: https://blog.csdn.net/v_july_v/article/details/51812459

import tensorflow as tf
import numpy as np
import os
import time
import datetime
import data_loader
from cnn_graph import TextCNN
from tensorflow.contrib import learn
from sklearn.model_selection import KFold, train_test_split
import preprocessing

# Model Hyperparameters
tf.flags.DEFINE_integer("embedding_dim", 200, "Dimensionality of character embedding (default: 128)")
tf.flags.DEFINE_string("filter_sizes", "3,4,5", "Comma-separated filter sizes (default: '3,4,5')")
tf.flags.DEFINE_integer("num_filters", 40, "Number of filters per filter size (default: 128)")
tf.flags.DEFINE_float("dropout_keep_prob", 0.5, "Dropout keep probability (default: 0.5)")
tf.flags.DEFINE_float("l2_reg_lambda", 3.0, "L2 regularizaion lambda (default: 0.0)")

# Training parameters
tf.flags.DEFINE_integer("batch_size", 50, "Batch Size (default: 64)")
tf.flags.DEFINE_integer("num_epochs", 100, "Number of training epochs (default: 200)")
tf.flags.DEFINE_integer("evaluate_every", 100, "Evaluate model on dev set after this many steps (default: 100)")
tf.flags.DEFINE_integer("checkpoint_every", 100, "Save model after this many steps (default: 100)")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")

# Path to the pre-trained word2vec file
tf.flags.DEFINE_string("w2v_path", "./w2v_model/retrain_vectors_100.bin", "w2v file")
tf.flags.DEFINE_string("file_dir","./data_process/jd","train/test dataSet")

FLAGS = tf.flags.FLAGS
FLAGS._parse_flags()
print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
    print("{}={}".format(attr.upper(), value))
print("")


# Data Preparation
# ==================================================

# Load data
print("Loading data...")
files = ["reviews.neg","reviews.pos"]
# Load all of the raw, unsplit data
x_text, y_labels,neg_examples,pos_examples = data_loader.\
    load_data_and_labels(data_dir=FLAGS.file_dir,files=files,splitable=False)

# Keep the negative reviews whose lengths cover 80% of the length-frequency mass (lower length dispersion)
neg_accept_length = preprocessing.freq_factor(neg_examples,
                                         percentage=0.8, drawable=False)
neg_accept_length = [item[0] for item in neg_accept_length]
neg_examples = data_loader.load_data_by_length(neg_examples,neg_accept_length)

# Same filtering for the positive reviews: keep the lengths covering 80% of the frequency mass
pos_accept_length = preprocessing.freq_factor(pos_examples,
                                         percentage=0.8, drawable=False)
pos_accept_length = [item[0] for item in pos_accept_length]
pos_examples = data_loader.load_data_by_length(pos_examples,pos_accept_length)

x_text = neg_examples + pos_examples
neg_labels = [[1,0] for _ in neg_examples]
pos_labels = [[0,1] for _ in pos_examples]
y_labels = np.concatenate([neg_labels,pos_labels], axis=0)
print("Loading data finish")

# Build vocabulary
max_document_length = max([len(x.split(" ")) for x in x_text])  # length of the longest sentence, in tokens
print(max_document_length)
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
x = np.array(list(vocab_processor.fit_transform(x_text)))

# Load the pre-trained word2vec vectors for the vocabulary
word_vecs = data_loader.load_bin_vec(fname=FLAGS.w2v_path,
                         vocab=list(vocab_processor.vocabulary_._mapping),
                                     ksize=FLAGS.embedding_dim)
# Build the lookup table for the embedding layer
W = data_loader.get_W(word_vecs=word_vecs,
                  vocab_ids_map=vocab_processor.vocabulary_._mapping,
                  k=FLAGS.embedding_dim,is_rand=False)

# Shuffle the data
np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y_labels)))
x_shuffled = x[shuffle_indices]
y_shuffled = y_labels[shuffle_indices]

# Make sure the output directory exists, then record the hyperparameters of this run
runs_dir = os.path.abspath(os.path.join(os.path.curdir, "runs"))
if not os.path.exists(runs_dir):
    os.makedirs(runs_dir)
out_path = os.path.join(runs_dir, "parameters")
parameters = "new fully-connected layer + jd data + 10\n" \
             "embedding_dim: {},\n" \
             "filter_sizes: {},\n" \
             "num_filters: {},\n" \
             "dropout_keep_prob: {},\n" \
             "l2_reg_lambda: {},\n" \
             "num_epochs: {},\n" \
             "batch_size: {}".format(FLAGS.embedding_dim, FLAGS.filter_sizes, FLAGS.num_filters,
                                     FLAGS.dropout_keep_prob, FLAGS.l2_reg_lambda, FLAGS.num_epochs,
                                     FLAGS.batch_size)
with open(out_path, 'w') as f:
    f.write(parameters)

def train(X_train, X_dev, x_test, y_train, y_dev, y_test):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
          allow_soft_placement=FLAGS.allow_soft_placement,
          log_device_placement=FLAGS.log_device_placement)
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            cnn = TextCNN(
                sequence_length=max_document_length,
                num_classes=2,
                vocab_size=len(vocab_processor.vocabulary_),
                embedding_size=FLAGS.embedding_dim,
                embedding_table=W,
                filter_sizes=list(map(int, FLAGS.filter_sizes.split(","))),
                num_filters=FLAGS.num_filters,
                l2_reg_lambda=FLAGS.l2_reg_lambda)

            # Define Training procedure
            global_step = tf.Variable(0, name="global_step", trainable=False)
            optimizer = tf.train.AdamOptimizer(1e-3)
            grads_and_vars = optimizer.compute_gradients(cnn.loss)
            train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

            # Keep track of gradient values and sparsity (optional)
            grad_summaries = []
            for g, v in grads_and_vars:
                if g is not None:
                    grad_hist_summary = tf.summary.histogram("{}/grad/hist".format(v.name), g)
                    sparsity_summary = tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))
                    grad_summaries.append(grad_hist_summary)
                    grad_summaries.append(sparsity_summary)
            grad_summaries_merged = tf.summary.merge(grad_summaries)

            # Output directory for models and summaries
            timestamp = str(int(time.time()))
            out_dir = os.path.abspath(os.path.join(os.path.curdir, "runs", timestamp))
            print("Writing to {}\n".format(out_dir))

            # Summaries for loss and accuracy
            loss_summary = tf.summary.scalar("loss", cnn.loss)
            acc_summary = tf.summary.scalar("accuracy", cnn.accuracy)

            # Train Summaries
            train_summary_op = tf.summary.merge([loss_summary, acc_summary, grad_summaries_merged])
            train_summary_dir = os.path.join(out_dir, "summaries", "train")
            train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)

            # Dev summaries
            dev_summary_op = tf.summary.merge([loss_summary, acc_summary])
            dev_summary_dir = os.path.join(out_dir, "summaries", "dev")
            dev_summary_writer = tf.summary.FileWriter(dev_summary_dir, sess.graph)


            # Checkpoint directory. Tensorflow assumes this directory already exists so we need to create it
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
            checkpoint_prefix = os.path.join(checkpoint_dir, "model")
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables())

            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, "vocab"))

            # Initialize all variables
            # sess.run(tf.initialize_all_variables())
            sess.run(tf.global_variables_initializer())

            def train_step(x_batch, y_batch):
                """
                A single training step
                """
                feed_dict = {
                  cnn.input_x: x_batch,
                  cnn.input_y: y_batch,
                  cnn.dropout_keep_prob: FLAGS.dropout_keep_prob
                }
                _, step, summaries, loss, accuracy = sess.run(
                    [train_op, global_step, train_summary_op, cnn.loss, cnn.accuracy],
                    feed_dict)
                # _, step, loss, accuracy = sess.run(
                #     [train_op, global_step, cnn.loss, cnn.accuracy],
                #     feed_dict)
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
                train_summary_writer.add_summary(summaries, step)

            def dev_step(x_batch, y_batch, writer=None):
                """
                Evaluates model on a dev set
                """
                feed_dict = {
                  cnn.input_x: x_batch,
                  cnn.input_y: y_batch,
                  cnn.dropout_keep_prob: 1.0
                }
                step, summaries, loss, accuracy = sess.run(
                    [global_step, dev_summary_op, cnn.loss, cnn.accuracy],
                    feed_dict)
                # step, loss, accuracy = sess.run(
                #     [global_step, cnn.loss, cnn.accuracy],
                #     feed_dict)
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
                if writer:
                    writer.add_summary(summaries, step)



            # Generate batches
            batches = data_loader.batch_iter(
                list(zip(X_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # Training loop. For each batch...
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, global_step)
                if current_step % FLAGS.evaluate_every == 0:
                    print("\nEvaluation:")
                    dev_step(X_dev, y_dev, writer=dev_summary_writer)
                    # dev_step(X_dev, y_dev, writer=None)
                    print("")
                if current_step % FLAGS.checkpoint_every == 0:
                    path = saver.save(sess, checkpoint_prefix, global_step=current_step)
                    print("Saved model checkpoint to {}\n".format(path))

            # Test loop
            # Generate batches for one epoch
            batches = data_loader.batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
            # Collect the predictions here
            all_predictions = []
            for x_test_batch in batches:
                batch_predictions = sess.run(cnn.predictions, {cnn.input_x: x_test_batch, cnn.dropout_keep_prob: 1.0})
                all_predictions = np.concatenate([all_predictions, batch_predictions])

            correct_predictions = float(sum(
                all_predictions == np.argmax(y_test,axis=1)))

            print("Total number of test examples: {}".format(len(y_test)))
            print("Accuracy: {:g}".format(correct_predictions / float(len(y_test))))
            # open(os.path.join(out_dir,"test"),'a').write("Accuracy: {:g}".format(correct_predictions / float(len(y_test))))
            out_path = os.path.abspath(os.path.join(os.path.curdir, "runs","test"))
            open(out_path,'a').write("{:g},".format(correct_predictions / float(len(y_test))))
            print("\n写入成功!\n")


# cross-validation
kf = KFold(n_splits=3)
for train_index, test_index in kf.split(x_shuffled):
    X_train_total = x_shuffled[train_index]
    y_train_total = y_shuffled[train_index]
    x_test = x_shuffled[test_index]
    y_test = y_shuffled[test_index]

    # Split the remaining data into training and validation sets
    X_train, X_dev, y_train, y_dev = train_test_split(
        X_train_total, y_train_total, test_size=0.2, random_state=0)
    print("Vocabulary Size: {:d}".format(len(vocab_processor.vocabulary_)))
    print("Train/Dev split: {:d}/{:d}".format(len(y_train), len(y_dev)))
    train(X_train, X_dev, x_test, y_train, y_dev, y_test)
