Ready-to-Use Series: Building a BERT Text Classifier the Super-Simple Way

I. Preface

As an excellent pre-trained model, BERT achieves very strong results on sequence labeling, text classification, and text matching tasks even with only a small amount of labeled data, which makes it the most popular pre-trained model in NLP today. BERT comes in different sizes by parameter count (the official releases are Base and Large; variants such as ALBERT add xlarge sizes as well), and it has several descendants, notably ALBERT and RoBERTa. ALBERT can be thought of as a mini version of BERT; despite the smaller size it sometimes does better on the same task (unfortunately, on every task I have tried, BERT came out ahead). RoBERTa is a strengthened BERT: it builds on BERT with more and better pre-training data and a larger training budget (in my experience RoBERTa is rarely worse than BERT). There are also BERT-wwm, XLNet, BERT-wwm-ext, and so on. If you are entering a competition, I suggest trying RoBERTa and BERT-wwm-ext first.

II. Code Walkthrough

1. Nearly all of the BERT variants mentioned above are supported. Just download one and point these paths at its files, as shown below (a sketch of the imports and helper definitions the later snippets rely on follows the path block):

# paths to the pre-trained model files
config_path = 'base_bert/bert_config.json'
checkpoint_path = 'base_bert/bert_model.ckpt'
dict_path = 'base_bert/vocab.txt'
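
The snippets that follow also rely on a tokenizer, a padding helper, and a maxlen constant that are not shown here. A minimal sketch of those pieces, assuming keras-bert on a TF 1.x-era standalone Keras; maxlen=128 is my own assumption, tune it to your data:

import codecs
import gc

import numpy as np
from keras_bert import load_trained_model_from_checkpoint, Tokenizer
from keras.layers import Input, Lambda, Dense
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from keras.metrics import top_k_categorical_accuracy
from keras import backend as K
from sklearn.model_selection import KFold

maxlen = 128  # maximum text length fed to BERT (an assumption; adjust as needed)

# build the token-to-id lookup from BERT's vocab file
token_dict = {}
with codecs.open(dict_path, 'r', 'utf8') as reader:
    for line in reader:
        token_dict[line.strip()] = len(token_dict)

tokenizer = Tokenizer(token_dict)  # keras_bert's WordPiece tokenizer

def seq_padding(X, padding=0):
    # pad every sequence in the batch to the length of the longest one
    ML = max(len(x) for x in X)
    return np.array([
        np.concatenate([x, [padding] * (ML - len(x))]) if len(x) < ML else x
        for x in X
    ])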

2. Generating batch data

# data_generator is simply a memory-friendly way to feed the data in batches
class data_generator:
    def __init__(self, data, batch_size=3, shuffle=True):
        self.data = data
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.steps = len(self.data) // self.batch_size
        if len(self.data) % self.batch_size != 0:
            self.steps += 1
 
    def __len__(self):
        return self.steps
 
    def __iter__(self):
        while True:
            idxs = list(range(len(self.data)))
 
            if self.shuffle:
                np.random.shuffle(idxs)
 
            X1, X2, Y = [], [], []
            for i in idxs:
                d = self.data[i]
                text = d[0][:maxlen]                     # truncate overly long texts
                x1, x2 = tokenizer.encode(first=text)    # token ids and segment ids
                y = d[1]                                 # one-hot label vector
                X1.append(x1)
                X2.append(x2)
                Y.append([y])
                if len(X1) == self.batch_size or i == idxs[-1]:
                    X1 = seq_padding(X1)
                    X2 = seq_padding(X2)
                    Y = seq_padding(Y)
                    yield [X1, X2], Y[:, 0, :]   # Y is (batch, 1, nclass) after padding
                    X1, X2, Y = [], [], []
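
The generator expects each item to be a (text, one-hot label) pair. A tiny smoke test with made-up samples (texts and labels here are purely illustrative):

# made-up 3-class samples, just to exercise the generator
toy = [('质量很好', [1, 0, 0]),
       ('还凑合', [0, 1, 0]),
       ('太差了', [0, 0, 1])]
[x1, x2], y = next(iter(data_generator(toy, batch_size=2)))
print(x1.shape, x2.shape, y.shape)   # e.g. (2, 6) (2, 6) (2, 3)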

3. Build the BERT model, here using the keras_bert package. keras_bert is CyberZHG's Keras wrapper for BERT. Incidentally, beyond keras-bert, CyberZHG has published many other valuable Keras modules, such as keras-gpt-2 (use a GPT-2 model the same way you would BERT), keras-lr-multiplier (layer-wise learning rates), keras-ordered-neurons (the ON-LSTM introduced here not long ago), and more.

# BERT model setup
def build_bert(nclass):
    bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, seq_len=None)  # load the pre-trained weights
 
    for l in bert_model.layers:
        l.trainable = True   # fine-tune all BERT layers
 
    x1_in = Input(shape=(None,))
    x2_in = Input(shape=(None,))
 
    x = bert_model([x1_in, x2_in])
    x = Lambda(lambda x: x[:, 0])(x)   # take the vector at [CLS] for classification
    p = Dense(nclass, activation='softmax')(x)
 
    model = Model([x1_in, x2_in], p)
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(1e-5),    # use a sufficiently small learning rate
                  metrics=['accuracy', acc_top2])
    model.summary()
    return model
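
Note that build_bert compiles with an acc_top2 metric the snippet never defines. A minimal definition on top of Keras's built-in top-k accuracy (imported above) would be:

# top-2 accuracy: a prediction counts as correct if the true class
# is among the model's two highest-scoring classes
def acc_top2(y_true, y_pred):
    return top_k_categorical_accuracy(y_true, y_pred, k=2)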

4. Cross-validation: train and predict

# train and predict with k-fold cross-validation
def run_cv(nfold, data, data_labels, data_test):
    kf = KFold(n_splits=nfold, shuffle=True, random_state=520).split(data)
    train_model_pred = np.zeros((len(data), 3))        # out-of-fold predictions (3 classes)
    test_model_pred = np.zeros((len(data_test), 3))    # accumulated test-set predictions
 
    for i, (train_fold, test_fold) in enumerate(kf):
        X_train, X_valid = data[train_fold, :], data[test_fold, :]
 
        model = build_bert(3)
        early_stopping = EarlyStopping(monitor='val_acc', patience=3)   # early stopping to curb overfitting
        plateau = ReduceLROnPlateau(monitor="val_acc", verbose=1, mode='max', factor=0.5, patience=2)   # halve the learning rate when the metric stops improving
        checkpoint = ModelCheckpoint('./bert_dump/' + str(i) + '.hdf5', monitor='val_acc', verbose=2, save_best_only=True, mode='max', save_weights_only=True)   # save the best weights per fold
 
        train_D = data_generator(X_train, shuffle=True)
        valid_D = data_generator(X_valid, shuffle=False)   # keep order so OOF predictions align with test_fold
        test_D = data_generator(data_test, shuffle=False)
        # train the model
        model.fit_generator(
            train_D.__iter__(),
            steps_per_epoch=len(train_D),
            epochs=5,
            validation_data=valid_D.__iter__(),
            validation_steps=len(valid_D),
            callbacks=[early_stopping, plateau, checkpoint],
        )
 
        model.load_weights('./bert_dump/' + str(i) + '.hdf5')   # restore the best checkpoint before predicting
        train_model_pred[test_fold, :] = model.predict_generator(valid_D.__iter__(), steps=len(valid_D), verbose=1)
        test_model_pred += model.predict_generator(test_D.__iter__(), steps=len(test_D), verbose=1)
 
        del model
        gc.collect()        # free memory between folds
        K.clear_session()   # clear the Keras/TF session so the next fold starts fresh
 
    return train_model_pred, test_model_pred
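
Finally, a hedged end-to-end sketch with made-up data showing the expected input format: an (n, 2) object array of text/one-hot pairs. Note that the data_labels argument is unused by run_cv, and the accumulated test predictions still need dividing by the fold count:

# made-up 3-class data; replace with your real dataset
texts = ['质量很好', '还凑合', '太差了'] * 10
labels = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] * 10
data = np.array(list(zip(texts, labels)), dtype=object)

test_texts = ['值得推荐'] * 6
data_test = np.array(list(zip(test_texts, [[0, 0, 0]] * 6)), dtype=object)  # dummy labels

nfold = 5
oof_pred, test_pred = run_cv(nfold, data, None, data_test)
test_pred /= nfold                      # average the per-fold test predictions
print(test_pred.argmax(axis=1))         # final predicted class ids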

5. And that's a wrap! The full code can be downloaded here: https://github.com/ttjjlw/NLP/tree/main/Classify%E5%88%86%E7%B1%BB/bert/keras_bert

If you run into any problems, feel free to message me.
