Several Ways to Implement Multi-class Classification


       I previously worked on a project on personalized recommendation for non-housing products, which involved multi-class classification in practice. This post mainly covers several ways to implement multi-class classification.

1. Indirect multi-class classification via binary classifiers

     The multi-class task is decomposed into several binary classification tasks. At prediction time, the outputs of these binary classifiers are combined to obtain the final multi-class result.

     There are three mainstream decomposition strategies:

      ① One vs. One (OvO)

           OvO pairs the N classes two by two, producing N(N-1)/2 binary classification tasks. At test time, a new sample is submitted to all classifiers, yielding N(N-1)/2 predictions; the final result is typically obtained by voting (or by combining the classifiers' prediction confidences).

      ② One vs. Rest (OvR)

           OvR trains N classifiers, each time treating the samples of one class as positive and the samples of all other classes as negative. At test time, if exactly one classifier predicts positive, the corresponding class is taken as the final result; if several classifiers predict positive, the prediction confidences are usually compared and the class with the highest confidence is chosen.

      ③ Many vs. Many (MvM)

          MvM treats several classes as positive and several of the remaining classes as negative each time; a common realization is Error Correcting Output Codes (ECOC).

Pros and cons of each strategy:

OvO usually has higher storage and test-time costs than OvR. During training, however, every OvR classifier uses the full training set, while each OvO classifier only uses the samples of two classes, so when there are many classes OvO's training cost is usually lower. Predictive performance depends on the specific data distribution, and in most cases the two are comparable. OvO and OvR are special cases of MvM. The positive/negative class construction in MvM must be specially designed and cannot be chosen arbitrarily. A small sketch of the three strategies is given below.
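For illustration, scikit-learn provides ready-made wrappers for all three strategies. The following is a minimal sketch; the iris dataset, the LogisticRegression base learner, and the sklearn.multiclass wrappers are illustrative assumptions and were not part of the original project.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier, OutputCodeClassifier

# toy data purely for illustration
X, y = load_iris(return_X_y=True)
base = LogisticRegression(max_iter=1000)

# OvO: N(N-1)/2 pairwise binary classifiers, combined by voting
ovo = OneVsOneClassifier(base).fit(X, y)
# OvR: N one-vs-rest binary classifiers, combined by confidence
ovr = OneVsRestClassifier(base).fit(X, y)
# MvM via error-correcting output codes (ECOC)
ecoc = OutputCodeClassifier(base, code_size=2, random_state=2019).fit(X, y)

print(ovo.predict(X[:5]), ovr.predict(X[:5]), ecoc.predict(X[:5]))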

2. Direct multi-class classification with a multi-class model

      Use an existing multi-class model directly, such as LightGBM or a deep neural network.

      ① Multi-class classification with LightGBM

           Relabel the training samples as 0, 1, ..., num_class-1 (the labels must start from 0), where 0 denotes one class, and so on. Also set the objective parameter to multiclass and num_class to the number of classes. The main parameters are listed below, followed by a short training and prediction sketch:

params = {'num_leaves': 60,
          'min_data_in_leaf': 30,
          'objective': 'multiclass',   # multi-class training objective
          'num_class': 33,             # number of classes (labels 0..32)
          'max_depth': -1,
          'learning_rate': 0.03,
          "min_sum_hessian_in_leaf": 6,
          "boosting": "gbdt",
          "feature_fraction": 0.9,
          "bagging_freq": 1,
          "bagging_fraction": 0.8,
          "bagging_seed": 11,
          "lambda_l1": 0.1,
          "verbosity": -1,
          "nthread": 15,
          'metric': 'multi_logloss',   # multi-class log loss for evaluation
          "random_state": 2019,
          # 'device': 'gpu' 
          }
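With labels encoded as 0..num_class-1 and the parameters above, training and prediction follow the usual LightGBM API. A minimal sketch is given below; the names X_train, y_train, X_valid, y_valid are assumptions standing in for a prepared feature matrix and label vector.

import lightgbm as lgb

# X_train / y_train and X_valid / y_valid are assumed to be prepared beforehand
train_set = lgb.Dataset(X_train, label=y_train)
valid_set = lgb.Dataset(X_valid, label=y_valid, reference=train_set)

model = lgb.train(params, train_set, num_boost_round=1000, valid_sets=[valid_set])

# predict() returns an (n_samples, num_class) matrix of class probabilities;
# the predicted label is the argmax over the class axis
pred_prob = model.predict(X_valid)
pred_label = pred_prob.argmax(axis=1)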

        ② Multi-class classification with a neural network

             Neural networks can be built with several frameworks, such as PyTorch or TensorFlow; here Keras is used. The main code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-


import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import KFold
import gc

from keras.models import Sequential
from keras.layers import Dense,BatchNormalization,Dropout
from keras.utils import to_categorical
from keras import backend as K
import keras



## load data
train_data = pd.read_csv('../../data/train.csv')
test_data = pd.read_csv('../../data/test.csv')
epochs = 3
batch_size = 1024
classes = 33


## category feature one_hot
test_data['label'] = -1
data = pd.concat([train_data, test_data])
cate_feature = ['gender', 'cell_province', 'id_province', 'id_city', 'rate', 'term']
for item in cate_feature:
    data[item] = LabelEncoder().fit_transform(data[item])
    item_dummies = pd.get_dummies(data[item])
    item_dummies.columns = [item + str(i + 1) for i in range(item_dummies.shape[1])]
    data = pd.concat([data, item_dummies], axis=1)
data.drop(cate_feature,axis=1,inplace=True)

train = data[data['label'] != -1]
test = data[data['label'] == -1]

##Clean up the memory
del data, train_data, test_data
gc.collect()

## get train feature
del_feature = ['auditing_date', 'due_date', 'label']
features = [i for i in train.columns if i not in del_feature]


train_x = train[features]
train_y = train['label'].values
test = test[features]

## Fill missing value
for i in train_x.columns:
    # print(i, train_x[i].isnull().sum(), test[i].isnull().sum())
    if train_x[i].isnull().sum() != 0:
        train_x[i].fillna(-1, inplace=True)
        test[i].fillna(-1, inplace=True)

## normalized
scaler = StandardScaler()
train_X = scaler.fit_transform(train_x)
test_X = scaler.transform(test)

##label one_hot
y_categorical = to_categorical(train_y)

## simple mlp model
K.clear_session()
def MLP(dropout_rate=0.25, activation='relu'):
    start_neurons = 512
    model = Sequential()
    model.add(Dense(start_neurons, input_dim=train_X.shape[1], activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(dropout_rate))

    model.add(Dense(start_neurons // 2, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(dropout_rate))

    model.add(Dense(start_neurons // 4, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(dropout_rate))

    model.add(Dense(start_neurons // 8, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(dropout_rate / 2))

    model.add(Dense(classes, activation='softmax'))
    return model


def plot_loss_acc(history, fold):
    plt.plot(history.history['loss'][1:])
    plt.plot(history.history['val_loss'][1:])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'Validation'], loc='upper left')
    plt.savefig('../../result/model_loss' + str(fold) + '.png')
    plt.show()

    plt.plot(history.history['acc'][1:])
    plt.plot(history.history['val_acc'][1:])
    plt.title('model Accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'Validation'], loc='upper left')
    plt.savefig('../../result/model_accuracy' + str(fold) + '.png')
    plt.show()




folds = KFold(n_splits=5, shuffle=True, random_state=2019)
NN_predictions = np.zeros((test_X.shape[0], classes))
oof_preds = np.zeros((train_X.shape[0], classes))

patience = 50   ## number of epochs with no improvement before early stopping
call_ES = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=patience, verbose=1,
                                        mode='auto', baseline=None)

for fold_, (trn_, val_) in enumerate(folds.split(train_x)):
    print("fold {}".format(fold_ + 1))
    x_train, y_train = train_X[trn_], y_categorical[trn_]
    x_valid, y_valid = train_X[val_], y_categorical[val_]


    model = MLP(dropout_rate=0.5, activation='relu')
    model.compile(optimizer='adam', loss='categorical_crossentropy',  metrics=['accuracy'])
    history = model.fit(x_train, y_train,
                        validation_data=[x_valid, y_valid],
                        epochs=epochs,
                        batch_size=batch_size,
                        callbacks=[call_ES, ],
                        shuffle=True,
                        verbose=1)

    # with a softmax output layer, predict() already returns class probabilities
    oof_preds[val_] = model.predict(x_valid, batch_size=batch_size)
    NN_predictions += model.predict(test_X, batch_size=batch_size) / folds.n_splits

result = np.argmax(NN_predictions, axis=1)
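
The out-of-fold predictions collected above can also be used to estimate cross-validated performance. A small sketch, assuming the sklearn metrics below (which the original script does not import):

from sklearn.metrics import accuracy_score, log_loss

# assumes all num_class classes appear in train_y
oof_label = np.argmax(oof_preds, axis=1)
print('CV accuracy:', accuracy_score(train_y, oof_label))
print('CV multi-class log loss:', log_loss(train_y, oof_preds))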

  

Accumulating a little every day and making progress together; a small summary for now, to be continued.
