Spark Machine Learning Notes (5): Building Classification Models with Spark Python (Part 2)

Notice: All rights reserved. Please contact the author and credit the source when reposting: http://blog.csdn.net/u013719780?viewmode=contents


About the author: 风雪夜归子 (English name: Allen), a machine learning engineer who loves digging into Machine Learning techniques, has a strong interest in Deep Learning and Artificial Intelligence, and regularly follows the Kaggle data mining competition platform. Readers interested in data, Machine Learning, and Artificial Intelligence are welcome to get in touch. Personal CSDN blog: http://blog.csdn.net/u013719780?viewmode=contents




6 Improving Model Performance and Tuning Parameters



So what went wrong? Why does our fairly sophisticated model do only slightly better than random? Where exactly is the problem? Recall that we simply fed the raw data into the model for training. In fact, we did not even use all of the data, only the numeric columns that were easy to work with, and we did little analysis of those numeric features.


6.1 Feature Standardization



Many of the models we use make implicit assumptions about the distribution and scale of the input data; the most common is that the features follow a normal distribution. Let us take a closer look at how our features are actually distributed.

from pyspark.mllib.stat import Statistics

# colStats returns a MultivariateStatisticalSummary with per-column statistics
vectors = data.map(lambda point: point.features)
matrix = Statistics.colStats(vectors)
print 'Column means:', matrix.mean()
print 'Column minimums:', matrix.min()
print 'Column maximums:', matrix.max()
print 'Number of samples:', matrix.count()
print 'Column variances:', matrix.variance()

Output:
Column means: [  4.12258053e-01   2.76182319e+00   4.68230473e-01   2.14079926e-01
   9.20623607e-02   4.92621604e-02   2.25510345e+00  -1.03750428e-01
   0.00000000e+00   5.64227450e-02   2.12305612e-02   2.33778177e-01
   2.75709037e-01   6.15551048e-01   6.60311021e-01   3.00770791e+01
   3.97565923e-02   5.71659824e+03   1.78754564e+02   4.96064909e+00
   1.72864050e-01   1.01220792e-01]
Column minimums: [  0.00000000e+00   1.19000000e+02   7.45454545e-01   5.81818182e-01
   2.90909091e-01   1.81818180e-02   4.34639175e-01   0.00000000e+00
   0.00000000e+00   1.98412700e-02   0.00000000e+00   2.98299595e-01
   3.86363640e-02   0.00000000e+00   0.00000000e+00   1.20000000e+01
   0.00000000e+00   4.36800000e+03   5.50000000e+01   3.00000000e+00
   5.45454550e-02   8.73563220e-02]
Column maximums: [  0.00000000e+00   2.49000000e+00   6.43564356e-01   4.35643564e-01
   2.87128713e-01   2.47524752e-01   4.46257810e-01   0.00000000e+00
   0.00000000e+00   1.84162060e-02   0.00000000e+00   2.69517709e-01
   1.86592950e-02   1.00000000e+00   0.00000000e+00   9.00000000e+00
   0.00000000e+00   1.06640000e+04   1.01000000e+02   5.00000000e+00
   6.93069310e-02   8.47826090e-02]
Number of samples: 7395
Column variances: [  1.09727602e-01   7.42907773e+01   4.12575900e-02   2.15305244e-02
   9.21057177e-03   5.27422016e-03   3.25347870e+01   9.39571798e-02
   0.00000000e+00   1.71750875e-03   2.07798245e-02   2.75446690e-03
   3.68329077e+00   2.36647955e-01   2.24300377e-01   4.15822321e+02
   3.81760057e-02   7.87626486e+07   3.22037609e+04   1.04515955e+01
   3.35890913e-02   6.27668400e-03]


The colStats method (the Python counterpart of computeColumnSummaryStatistics in the Scala API) computes summary statistics for each column of the feature matrix, including the mean and variance; all values are stored in a Vector with one entry per column (one per feature in our case).

Looking at the means and variances printed above, it is clear that the second feature has a much higher mean and variance than the others (several other features show a similar pattern, some even more extreme). Our data in its raw form does not, strictly speaking, follow a standard Gaussian distribution. To bring it closer to the model's assumptions, we can standardize each feature so that it has zero mean and unit standard deviation, by subtracting the column mean from each value and then dividing by the column standard deviation to rescale it:

                                                  (x - μ) / sqrt(variance)

Spark's StandardScaler makes this straightforward. First, construct it with two parameters: one indicating whether to subtract the mean and one indicating whether to apply standard deviation scaling; then fit it to our input vectors. Finally, pass the input vectors to its transform function, which returns the normalized vectors. In the code below we use zip and map to retain the dataset's labels:


from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.regression import LabeledPoint

scaler = StandardScaler(withMean=True, withStd=True).fit(vectors)
labels = data.map(lambda point: point.label)
features = data.map(lambda point: point.features)
print 'Original data:', features.take(3)
scaled_data = labels.zip(scaler.transform(features))
scaled_data = scaled_data.map(lambda (x, y): LabeledPoint(x, y))
print 'Standardized data:', scaled_data.map(lambda point: point.features).take(3)



Output:
Original data: [DenseVector([0.7891, 2.0556, 0.6765, 0.2059, 0.0471, 0.0235, 0.4438, 0.0, 0.0, 0.0908, 0.0, 0.2458, 0.0039, 1.0, 1.0, 24.0, 0.0, 5424.0, 170.0, 8.0, 0.1529, 0.0791]), DenseVector([0.5741, 3.678, 0.508, 0.2888, 0.2139, 0.1444, 0.4686, 0.0, 0.0, 0.0987, 0.0, 0.2035, 0.0887, 1.0, 1.0, 40.0, 0.0, 4973.0, 187.0, 9.0, 0.1818, 0.1254]), DenseVector([0.9965, 2.3829, 0.562, 0.3217, 0.1202, 0.0426, 0.5254, 0.0, 0.0, 0.0724, 0.0, 0.2264, 0.1205, 1.0, 1.0, 55.0, 0.0, 2240.0, 258.0, 11.0, 0.1667, 0.0576])]
Standardized data: [DenseVector([1.1376, -0.0819, 1.0251, -0.0559, -0.4689, -0.3543, -0.3175, 0.3385, 0.0, 0.8288, -0.1473, 0.2296, -0.1416, 0.7902, 0.7172, -0.298, -0.2035, -0.033, -0.0488, 0.9401, -0.1087, -0.2788]), DenseVector([0.4887, 0.1063, 0.1959, 0.509, 1.2695, 1.3097, -0.3132, 0.3385, 0.0, 1.0202, -0.1473, -0.5771, -0.0975, 0.7902, 0.7172, 0.4866, -0.2035, -0.0838, 0.0459, 1.2494, 0.0489, 0.3058]), DenseVector([1.7637, -0.044, 0.4617, 0.7334, 0.2927, -0.0912, -0.3032, 0.3385, 0.0, 0.3867, -0.1473, -0.1405, -0.0808, 0.7902, 0.7172, 1.2221, -0.2035, -0.3917, 0.4416, 1.868, -0.0338, -0.5504])]
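
As a sanity check, we can reproduce the standardization by hand from the column statistics computed earlier. Below is a minimal sketch (it assumes the vectors RDD and the matrix summary from the colStats example above):

import numpy as np

# manually apply (x - mu) / sqrt(variance) to the first raw feature vector
first = vectors.first().toArray()
mean = np.array(matrix.mean())
std = np.sqrt(np.array(matrix.variance()))
std[std == 0.0] = 1.0   # constant columns: avoid dividing by zero
print 'Manually standardized first vector:', (first - mean) / std

For zero-variance columns both this sketch and StandardScaler yield 0, since every value equals the column mean; the remaining entries should match the first DenseVector printed above.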



Now we retrain the model with the standardized data. We retrain only logistic regression here (decision trees and naive Bayes are unaffected by feature standardization) to illustrate the effect of standardization:

lrModel_scaled = LogisticRegressionWithSGD.train(scaled_data, numIteration)
lrPredictionLabel = lrModel_scaled.predict(scaled_data.map(lambda point: point.features))
trueLabel = scaled_data.map(lambda point: point.label)
lrTotalCorrect_scaled = sum([1.0 if prediction == label else 0.0 \
                             for prediction, label in zip(lrPredictionLabel.collect(), trueLabel.collect())]) \
                        / scaled_data.count()
print 'Logistic regression accuracy after feature standardization:', lrTotalCorrect_scaled

Output:
Logistic regression accuracy after feature standardization: 0.620960108181

all_models_metrics = []
for model in [lrModel_scaled]:
    scoresAndLabels = scaled_data.map(lambda point: (model.predict(point.features), point.label)).collect()
    scoresAndLabels = [(float(i), j) for (i, j) in scoresAndLabels]
    scoresAndLabels_rdd = sc.parallelize(scoresAndLabels)
    metrics = BinaryClassificationMetrics(scoresAndLabels_rdd)
    all_models_metrics.append((model.__class__.__name__, metrics.areaUnderROC, metrics.areaUnderPR))
for model_name, AUC, PR in all_models_metrics:
    print '%s: AUC = %f, PR = %f' % (model_name, AUC, PR)

Output:
LogisticRegressionModel: AUC = 0.620190, PR = 0.727701

These results show that simply standardizing the features improved logistic regression accuracy and raised the AUC from a near-random 51% to 62%.



6.2 Additional Features



We have already seen that careful standardization and normalization of features can have a significant impact on model performance. In this example, though, we used only some of the features, completely ignoring the category variable and the text content of the boilerplate column.

That was done to keep the walkthrough simple. Now let us evaluate the effect of adding other features, starting with the category feature. First, we list all categories and map each one to an index; these indices can then be used for a 1-of-k encoding of the category feature.

categories = records.map(lambda x: x[3]).distinct().zipWithIndex().collect()
category_dict = dict((key.replace('\"', ''), val) for (key, val) in categories)
print 'Category encoding:', category_dict

num_categories = len(category_dict)
print 'Number of categories:', num_categories

Output:
Category encoding: {u'gaming': 7, u'recreation': 0, u'business': 3, u'computer_internet': 1, u'unknown': 9, u'culture_politics': 6, u'science_technology': 12, u'law_crime': 5, u'sports': 8, u'religion': 10, u'weather': 13, u'health': 4, u'?': 2, u'arts_entertainment': 11}
Number of categories: 14




from pyspark.mllib.linalg import Vectors

# rebuild the numeric-feature dataset from the trimmed raw records
otherdata = trimmed.map(lambda x: (x[-1], x[4:-1])).\
                    map(lambda (x, y): (x.replace('\"', ''), [yy.replace('\"', '') for yy in y])).\
                    map(lambda (x, y): (int(x), [0.0 if yy == '?' else float(yy) for yy in y])).\
                    map(lambda (x, y): LabeledPoint(x, Vectors.dense(y)))
otherdata.take(5)


Output:
[LabeledPoint(0.0, [0.789131,2.055555556,0.676470588,0.205882353,0.047058824,0.023529412,0.443783175,0.0,0.0,0.09077381,0.0,0.245831182,0.003883495,1.0,1.0,24.0,0.0,5424.0,170.0,8.0,0.152941176,0.079129575]),
 LabeledPoint(1.0, [0.574147,3.677966102,0.50802139,0.288770053,0.213903743,0.144385027,0.468648998,0.0,0.0,0.098707403,0.0,0.203489628,0.088652482,1.0,1.0,40.0,0.0,4973.0,187.0,9.0,0.181818182,0.125448029]),
 LabeledPoint(1.0, [0.996526,2.382882883,0.562015504,0.321705426,0.120155039,0.042635659,0.525448029,0.0,0.0,0.072447859,0.0,0.22640177,0.120535714,1.0,1.0,55.0,0.0,2240.0,258.0,11.0,0.166666667,0.057613169]),
 LabeledPoint(1.0, [0.801248,1.543103448,0.4,0.1,0.016666667,0.0,0.480724749,0.0,0.0,0.095860566,0.0,0.265655744,0.035343035,1.0,0.0,24.0,0.0,2737.0,120.0,5.0,0.041666667,0.100858369]),
 LabeledPoint(0.0, [0.719157,2.676470588,0.5,0.222222222,0.12345679,0.043209877,0.446143274,0.0,0.0,0.024908425,0.0,0.228887247,0.050473186,1.0,1.0,14.0,0.0,12032.0,162.0,10.0,0.098765432,0.082568807])]





We therefore create a vector of length 14 to represent the category feature, setting the dimension corresponding to each sample's category index to 1 and all other dimensions to 0. We treat this new feature vector just like the other numeric feature vectors:

# merge the category features with the numeric features
def build_feature(x):
    import numpy as np
    numeric_feature = [0.0 if yy == '?' else float(yy)
                       for yy in [y.replace('\"', '') for y in x[4:-1]]]   # numeric features
    category_feature = np.zeros(num_categories)                           # 1-of-k category vector
    category_index = category_dict[x[3].replace('\"', '')]
    category_feature[category_index] = 1                                  # flag the sample's category
    label = int(x[-1].replace('\"', ''))                                  # sample label
    return LabeledPoint(label, Vectors.dense(list(category_feature) + numeric_feature))

category_data = trimmed.map(lambda x: build_feature(x))
category_data.take(5)

Output:
[LabeledPoint(0.0, [0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.789131,2.055555556,0.676470588,0.205882353,0.047058824,0.023529412,0.443783175,0.0,0.0,0.09077381,0.0,0.245831182,0.003883495,1.0,1.0,24.0,0.0,5424.0,170.0,8.0,0.152941176,0.079129575]),
 LabeledPoint(1.0, [1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.574147,3.677966102,0.50802139,0.288770053,0.213903743,0.144385027,0.468648998,0.0,0.0,0.098707403,0.0,0.203489628,0.088652482,1.0,1.0,40.0,0.0,4973.0,187.0,9.0,0.181818182,0.125448029]),
 LabeledPoint(1.0, [0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.996526,2.382882883,0.562015504,0.321705426,0.120155039,0.042635659,0.525448029,0.0,0.0,0.072447859,0.0,0.22640177,0.120535714,1.0,1.0,55.0,0.0,2240.0,258.0,11.0,0.166666667,0.057613169]),
 LabeledPoint(1.0, [0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.801248,1.543103448,0.4,0.1,0.016666667,0.0,0.480724749,0.0,0.0,0.095860566,0.0,0.265655744,0.035343035,1.0,0.0,24.0,0.0,2737.0,120.0,5.0,0.041666667,0.100858369]),
 LabeledPoint(0.0, [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.719157,2.676470588,0.5,0.222222222,0.12345679,0.043209877,0.446143274,0.0,0.0,0.024908425,0.0,0.228887247,0.050473186,1.0,1.0,14.0,0.0,12032.0,162.0,10.0,0.098765432,0.082568807])]

The first part of each vector is the 14-dimensional one-hot encoding, with a 1 at the index corresponding to the sample's category.


As before, since our raw data is not standardized, we should apply the same StandardScaler transformation to this expanded dataset before training on it:

category_labels = category_data.map(lambda point: point.label)
category_features = category_data.map(lambda point: point.features)
scaler2 = StandardScaler(withMean=True, withStd=True).fit(category_features)
print 'Features before standardization:', category_features.take(5)
scaled_category_data = category_labels.zip(scaler2.transform(category_features))
scaled_category_data = scaled_category_data.map(lambda (x, y): LabeledPoint(x, y))
print 'Features after standardization:', scaled_category_data.map(lambda point: point.features).take(5)

Output:

Features before standardization: [DenseVector([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7891, 2.0556, 0.6765, 0.2059, 0.0471, 0.0235, 0.4438, 0.0, 0.0, 0.0908, 0.0, 0.2458, 0.0039, 1.0, 1.0, 24.0, 0.0, 5424.0, 170.0, 8.0, 0.1529, 0.0791]), DenseVector([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5741, 3.678, 0.508, 0.2888, 0.2139, 0.1444, 0.4686, 0.0, 0.0, 0.0987, 0.0, 0.2035, 0.0887, 1.0, 1.0, 40.0, 0.0, 4973.0, 187.0, 9.0, 0.1818, 0.1254]), DenseVector([0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9965, 2.3829, 0.562, 0.3217, 0.1202, 0.0426, 0.5254, 0.0, 0.0, 0.0724, 0.0, 0.2264, 0.1205, 1.0, 1.0, 55.0, 0.0, 2240.0, 258.0, 11.0, 0.1667, 0.0576]), DenseVector([0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8012, 1.5431, 0.4, 0.1, 0.0167, 0.0, 0.4807, 0.0, 0.0, 0.0959, 0.0, 0.2657, 0.0353, 1.0, 0.0, 24.0, 0.0, 2737.0, 120.0, 5.0, 0.0417, 0.1009]), DenseVector([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7192, 2.6765, 0.5, 0.2222, 0.1235, 0.0432, 0.4461, 0.0, 0.0, 0.0249, 0.0, 0.2289, 0.0505, 1.0, 1.0, 14.0, 0.0, 12032.0, 162.0, 10.0, 0.0988, 0.0826])]
Features after standardization: [DenseVector([-0.4464, -0.2042, -0.6808, 2.7207, -0.271, -0.0649, -0.2205, -0.1019, -0.2327, -0.0285, -0.0991, -0.3818, -0.2017, -0.0233, 1.1376, -0.0819, 1.0251, -0.0559, -0.4689, -0.3543, -0.3175, 0.3385, 0.0, 0.8288, -0.1473, 0.2296, -0.1416, 0.7902, 0.7172, -0.298, -0.2035, -0.033, -0.0488, 0.9401, -0.1087, -0.2788]), DenseVector([2.2397, -0.2042, -0.6808, -0.3675, -0.271, -0.0649, -0.2205, -0.1019, -0.2327, -0.0285, -0.0991, -0.3818, -0.2017, -0.0233, 0.4887, 0.1063, 0.1959, 0.509, 1.2695, 1.3097, -0.3132, 0.3385, 0.0, 1.0202, -0.1473, -0.5771, -0.0975, 0.7902, 0.7172, 0.4866, -0.2035, -0.0838, 0.0459, 1.2494, 0.0489, 0.3058]), DenseVector([-0.4464, -0.2042, -0.6808, -0.3675, 3.6896, -0.0649, -0.2205, -0.1019, -0.2327, -0.0285, -0.0991, -0.3818, -0.2017, -0.0233, 1.7637, -0.044, 0.4617, 0.7334, 0.2927, -0.0912, -0.3032, 0.3385, 0.0, 0.3867, -0.1473, -0.1405, -0.0808, 0.7902, 0.7172, 1.2221, -0.2035, -0.3917, 0.4416, 1.868, -0.0338, -0.5504]), DenseVector([-0.4464, -0.2042, -0.6808, -0.3675, 3.6896, -0.0649, -0.2205, -0.1019, -0.2327, -0.0285, -0.0991, -0.3818, -0.2017, -0.0233, 1.1742, -0.1414, -0.3359, -0.7774, -0.7856, -0.6783, -0.3111, 0.3385, 0.0, 0.9516, -0.1473, 0.6073, -0.1252, 0.7902, -1.3941, -0.298, -0.2035, -0.3357, -0.3274, 0.0122, -0.7158, -0.0046]), DenseVector([-0.4464, -0.2042, -0.6808, -0.3675, -0.271, -0.0649, -0.2205, -0.1019, 4.2963, -0.0285, -0.0991, -0.3818, -0.2017, -0.0233, 0.9264, -0.0099, 0.1564, 0.0555, 0.3271, -0.0833, -0.3171, 0.3385, 0.0, -0.7604, -0.1473, -0.0932, -0.1174, 0.7902, 0.7172, -0.7884, -0.2035, 0.7116, -0.0934, 1.5587, -0.4043, -0.2354])]


Now we retrain a logistic regression model on the expanded features and evaluate its performance:

# compute model accuracy
lrModel_category_scaled = LogisticRegressionWithSGD.train(scaled_category_data, numIteration)
lr_totalCorrect_category_scaled = scaled_category_data.\
                   map(lambda point: 1 if (lrModel_category_scaled.predict(point.features) == point.label) else 0).sum()
lr_accuracy_category_scaled = lr_totalCorrect_category_scaled / (1.0 * scaled_category_data.count())
print 'Logistic regression accuracy: %f' % lr_accuracy_category_scaled

Output:
Logistic regression accuracy: 0.665720

# compute AUC and PR for the model
all_models_metrics = []
for model in [lrModel_category_scaled]:
    scoresAndLabels = scaled_category_data.map(lambda point: (model.predict(point.features), point.label)).collect()
    scoresAndLabels = [(float(i), j) for (i, j) in scoresAndLabels]
    scoresAndLabels_rdd = sc.parallelize(scoresAndLabels)
    metrics = BinaryClassificationMetrics(scoresAndLabels_rdd)
    all_models_metrics.append((model.__class__.__name__, metrics.areaUnderROC, metrics.areaUnderPR))

for model_name, AUC, PR in all_models_metrics:
    print '%s: AUC = %f, PR = %f' % (model_name, AUC, PR)

Output:
LogisticRegressionModel: AUC = 0.665475, PR = 0.757982

These results show that standardizing the features lifted the AUC from 51% to 62%, and that adding the category features (also standardized) further improved accuracy to 66.57%.




6.3 Using the Correct Data Format



Another key aspect of model performance is using the correct data format for each model. Earlier, applying naive Bayes to the numeric feature vectors gave very poor results. Is that a flaw in the model itself, or something else? MLlib implements a multinomial naive Bayes model, which expects count-like input: binary representations of categorical features (such as the 1-of-k encoding above) or frequency data (such as word counts in a document). The numeric features we started with do not match the assumed input distribution, so the model's poor performance is not surprising.

To better illustrate this, we work with a representation that suits naive Bayes: the 1-of-k encoded category features match the multinomial model far better than arbitrary real-valued features. Since naive Bayes also requires non-negative inputs, the code below builds the dataset by clamping negative standardized values to zero:

category_nbdata = scaled_category_data
category_nbdata_nonegative = category_nbdata.map(lambda point: (point.label,point.features)).\
                                             map(lambda (x,y): (x,[0.0 if yy<0.0 else yy for yy in y])).\
                                             map(lambda (x,y):LabeledPoint(x,Vectors.dense(y)))
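
Note that clamping the standardized values at zero is only a workaround. A representation even closer to the multinomial assumption would use the 1-of-k category features alone; a minimal sketch of that alternative follows (the helper name is hypothetical; category_dict, num_categories and trimmed come from section 6.2). We continue with the clamped dataset below.

import numpy as np

def build_category_only(x):
    # keep only the 1-of-k category vector as the features
    vec = np.zeros(num_categories)
    vec[category_dict[x[3].replace('\"', '')]] = 1.0
    return LabeledPoint(int(x[-1].replace('\"', '')), Vectors.dense(vec))

nbdata_category_only = trimmed.map(build_category_only)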


Next, we evaluate the model:

# compute accuracy
nb_category_model = NaiveBayes.train(category_nbdata_nonegative)
nb_category_total_correct = category_nbdata_nonegative.\
                         map(lambda point: 1 if (nb_category_model.predict(point.features) == point.label) else 0).sum()

nb_category_accuracy = nb_category_total_correct / (1.0 * category_nbdata_nonegative.count())
print 'Naive Bayes accuracy: %f' % nb_category_accuracy

Output:
Naive Bayes accuracy: 0.652738

all_models_metrics = []
for model in [nb_category_model]:
    scoresAndLabels = category_nbdata_nonegative.map(lambda point: (model.predict(point.features), point.label)).collect()
    scoresAndLabels = [(float(i), j) for (i, j) in scoresAndLabels]
    scoresAndLabels_rdd = sc.parallelize(scoresAndLabels)
    metrics = BinaryClassificationMetrics(scoresAndLabels_rdd)
    all_models_metrics.append((model.__class__.__name__, metrics.areaUnderROC, metrics.areaUnderPR))

for model_name, AUC, PR in all_models_metrics:
    print '%s: AUC = %f, PR = %f' % (model_name, AUC, PR)

Output:
NaiveBayesModel: AUC = 0.651468, PR = 0.752038


These results show that with correctly formatted input data, naive Bayes accuracy improved from 58% to 65.27%.




6.4 Model Parameter Tuning


6.4.1 Logistic Regression


The previous sections showed the factors that influence model performance: feature extraction, feature selection, the format of the input data, and the model's assumptions about the data distribution. So far, however, we have only touched on model parameters in passing, even though in practice they have a large impact on performance.

MLlib's default train methods use default values for every model parameter. Let us take a closer look at those parameters and see how this tuning is done in Spark Python.
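
Before diving in, a quick way to see which parameters a Python train method exposes, and their default values, is Python's built-in help:

# list the tunable keyword arguments (iterations, step, regParam, regType, ...) and their defaults
help(LogisticRegressionWithSGD.train)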

def train_with_params(input_data, model, reg_param, num_iter, step_size):
    # train the given model class with explicit hyperparameter values
    trained_model = model.train(input_data, iterations=num_iter, regParam=reg_param, step=step_size)
    return trained_model

def create_metrics(tag, input_data, model):
    # score the data with the trained model and compute AUC and PR
    scoresAndLabels_rdd = input_data.map(lambda x: (model.predict(x.features) * 1.0, x.label * 1.0))
    metrics = BinaryClassificationMetrics(scoresAndLabels_rdd)
    return tag, metrics.areaUnderROC, metrics.areaUnderPR



Tuning the number of iterations:

# tune the number of iterations
for iteration_num in [1, 5, 10, 50]:
    model = train_with_params(scaled_category_data, LogisticRegressionWithSGD, 0.0, iteration_num, 1.0)
    label, ROC, PR = create_metrics('%d iterations' % iteration_num, scaled_category_data, model)
    print '%s, ROC = %f, PR = %f' % (label, ROC, PR)


Output:
1 iterations, ROC = 0.649520, PR = 0.745886
5 iterations, ROC = 0.666161, PR = 0.758022
10 iterations, ROC = 0.665483, PR = 0.757964
50 iterations, ROC = 0.668143, PR = 0.760930


These results show that once the number of iterations reaches a certain level, increasing it further has little effect.



Tuning the step size:

In SGD, the step size controls how far the algorithm moves along the steepest gradient direction when updating the weight vector after each sample. A larger step size may speed up convergence, but one that is too large can cause the algorithm to converge to a poor solution. Below we measure the effect of different step sizes:

# tune the step size
for step_size in [0.001, 0.01, 0.1, 1.0, 10.0]:
    model = train_with_params(scaled_category_data, LogisticRegressionWithSGD, 0.0, 10, step_size)
    label, ROC, PR = create_metrics('step size %2.3f' % step_size, scaled_category_data, model)
    print '%s, ROC = %f, PR = %f' % (label, ROC, PR)

Output:

step size 0.001, ROC = 0.649659, PR = 0.745974
step size 0.010, ROC = 0.649644, PR = 0.746017
step size 0.100, ROC = 0.655211, PR = 0.750008
step size 1.000, ROC = 0.665483, PR = 0.757964
step size 10.000, ROC = 0.619228, PR = 0.725713
These results show that an overly large step size hurts model performance.

Tuning the regularization parameter:

Regularization adds a term to the loss function that penalizes the model's weight vector, increasing the loss as the weights grow. Regularization is almost always necessary in practice, and it matters most when the feature dimension is higher than the number of training samples (in which case the number of weights to learn is also very large).

When regularization is absent or very low, the model tends to overfit; indeed, without regularization most models overfit the training data. Overfitting is also a key reason for using cross-validation, which is covered in detail later.

Conversely, although regularization yields a simpler model, a regularization value that is too high can underfit and leave performance very poor.

MLlib provides the following forms of regularization.

SimpleUpdater: equivalent to no regularization; the default for logistic regression.

SquaredL2Updater: L2 regularization of the weight vector; the default for SVM.

L1Updater: L1 regularization of the weight vector, which produces a sparse weight vector (unimportant weights are driven toward zero).
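
These Updater classes live in the Scala API. In the Python API the same choice is made through the regType argument of the train methods, which accepts 'l2', 'l1', or None (no regularization), with regParam controlling the strength. A small sketch:

# L1 regularization drives unimportant weights toward zero, yielding a sparse model
model_l1 = LogisticRegressionWithSGD.train(scaled_category_data, iterations=10,
                                           regParam=0.1, regType='l1')
print 'Near-zero weights:', sum(1 for w in model_l1.weights if abs(w) < 1e-6)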

# tune the regularization parameter
for reg_param in [0.001, 0.01, 0.1, 1.0, 10.0]:
    model = train_with_params(scaled_category_data, LogisticRegressionWithSGD, reg_param, 10, 0.01)
    label, ROC, PR = create_metrics('regParam %2.3f' % reg_param, scaled_category_data, model)
    print '%s, ROC = %f, PR = %f' % (label, ROC, PR)


Output:
regParam 0.001, ROC = 0.649644, PR = 0.746017
regParam 0.010, ROC = 0.649644, PR = 0.746017
regParam 0.100, ROC = 0.649644, PR = 0.746017
regParam 1.000, ROC = 0.649644, PR = 0.746017
regParam 10.000, ROC = 0.648979, PR = 0.745492





6.4.2 Decision Trees


The decision tree model achieved the best performance earlier when trained on the raw data. There we set the maxDepth parameter, which controls the tree's maximum depth and hence the model's complexity: deeper trees give more complex models that can fit the data better. For classification we also choose between two impurity measures: Gini and Entropy.


Tuning tree depth and impurity

def train_with_params_dt(input_data, impurity, maxTreeDepth):
    # numClass (= 2) was defined when the data was first loaded
    dt_model = DecisionTree.trainClassifier(input_data, numClass, {}, impurity, maxDepth=maxTreeDepth)
    return dt_model

def create_metrics_dt(tag, input_data, model):
    # predict on the evaluation data that was passed in
    predictLabel = model.predict(input_data.map(lambda point: point.features)).collect()
    trueLabel = input_data.map(lambda point: point.label).collect()
    scoresAndLabels = [(predictLabel[i], true_val) for i, true_val in enumerate(trueLabel)]
    scoresAndLabels_rdd = sc.parallelize(scoresAndLabels)
    scoresAndLabels_rdd = scoresAndLabels_rdd.map(lambda (x, y): (float(x), float(y)))
    dt_metrics = BinaryClassificationMetrics(scoresAndLabels_rdd)
    return tag, dt_metrics.areaUnderROC, dt_metrics.areaUnderPR


for depth in [1, 2, 3, 4, 5, 10, 20]:
    for im in ['entropy', 'gini']:
        model = train_with_params_dt(data, im, depth)
        tag, ROC, PR = create_metrics_dt('impurity: %s, maxDepth: %d' % (im, depth), data, model)
        print '%s, AUC = %f, PR = %f' % (tag, ROC, PR)


Output:
impurity: entropy, maxDepth: 1, AUC = 0.593268, PR = 0.749004
impurity: gini, maxDepth: 1, AUC = 0.593268, PR = 0.749004
impurity: entropy, maxDepth: 2, AUC = 0.616839, PR = 0.725164
impurity: gini, maxDepth: 2, AUC = 0.616839, PR = 0.725164
impurity: entropy, maxDepth: 3, AUC = 0.626070, PR = 0.751813
impurity: gini, maxDepth: 3, AUC = 0.626070, PR = 0.751813
impurity: entropy, maxDepth: 4, AUC = 0.636333, PR = 0.749393
impurity: gini, maxDepth: 4, AUC = 0.636333, PR = 0.749393
impurity: entropy, maxDepth: 5, AUC = 0.648837, PR = 0.743081
impurity: gini, maxDepth: 5, AUC = 0.648916, PR = 0.742894
impurity: entropy, maxDepth: 10, AUC = 0.762552, PR = 0.829623
impurity: gini, maxDepth: 10, AUC = 0.783709, PR = 0.843469
impurity: entropy, maxDepth: 20, AUC = 0.984537, PR = 0.988522
impurity: gini, maxDepth: 20, AUC = 0.988707, PR = 0.991328


These results show that increasing tree depth yields more accurate models (as expected, since the model becomes more complex with depth). However, the deeper the tree, the more severely it overfits the training data. The two impurity measures make little practical difference.
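
Bear in mind that these AUC values are computed on the training data itself, so the near-perfect score at depth 20 mostly reflects overfitting rather than predictive power. A quick hedged check is to hold out part of the data and compare (a sketch; the split fractions and seed are arbitrary):

# compare training vs. held-out AUC for a deep tree to expose overfitting
dt_train, dt_test = data.randomSplit([0.6, 0.4], 123)
deep_model = train_with_params_dt(dt_train, 'entropy', 20)
print create_metrics_dt('depth 20, train', dt_train, deep_model)
print create_metrics_dt('depth 20, test', dt_test, deep_model)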



6.4.3 Naive Bayes




Finally, let us look at the effect of the lambda parameter on the naive Bayes model. This parameter controls additive smoothing, which handles the case where some class and feature value never occur together in the data.
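
Concretely, with additive smoothing the estimated probability of feature i under class c becomes

                    P(i | c) = (N_ci + λ) / (N_c + λ·n)

where N_ci is the count of feature i in class c, N_c is the total feature count for class c, and n is the number of features; any λ > 0 ensures that no estimate is exactly zero.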


def train_with_params_nb(input_data, lambda_para):
    # lambda_para is the additive smoothing parameter
    nb_model = NaiveBayes.train(input_data, lambda_para)
    return nb_model

def create_metrics_nb(tag, nbdata, model):
    # score the data and compute AUC and PR
    scoresAndLabels = nbdata.map(lambda point: (float(model.predict(point.features)), point.label))
    nb_metrics = BinaryClassificationMetrics(scoresAndLabels)
    return tag, nb_metrics.areaUnderROC, nb_metrics.areaUnderPR


for lambda_para in [0.001, 0.01, 0.1, 1.0, 10.0]:
    model = train_with_params_nb(nbdata, lambda_para)
    tag, ROC, PR = create_metrics_nb('lambda=%f' % lambda_para, nbdata, model)
    print '%s, AUC = %f, PR = %f' % (tag, ROC, PR)

Output:
lambda=0.001000, AUC = 0.584085, PR = 0.681374
lambda=0.010000, AUC = 0.584085, PR = 0.681374
lambda=0.100000, AUC = 0.583954, PR = 0.681243
lambda=1.000000, AUC = 0.583683, PR = 0.681003
lambda=10.000000, AUC = 0.583537, PR = 0.680914

These results show that lambda has essentially no effect on performance here, suggesting that missing class/feature combinations are not a problem in this dataset.




6.4.4 Cross-Validation




Cross-validation is a key part of practical machine learning and plays a central role in model selection and parameter tuning.

The goal of cross-validation is to test how a model performs on unseen data. Deploying a model on real data (for example, a live system) without knowing how it predicts on new data is a risky practice. As the regularization experiments above suggested, our model may already have overfit the training data, in which case it would predict poorly on data it was not trained on.

Cross-validation lets us train a model on one part of the data and evaluate its performance on another. If the model is tested on data it has not seen during training, we obtain an estimate of its generalization to new data. Here we implement a simple version by splitting the data into two non-overlapping sets. The first, the training set, is used for training; the second, the test set (or hold-out set), is used to evaluate the model under the chosen metric. Common splits include 50/50, 60/40, and 80/20; any split works as long as the training set is not too small (in practice at least 50% of the data is usually used for training).

In a typical modeling workflow one creates three sets: a training set, an evaluation set (used like the test set above to tune model parameters such as lambda and the step size), and a test set (never used for training or parameter tuning, only for estimating performance on new data).
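
With randomSplit such a three-way split is a one-liner; a short sketch (the 60/20/20 fractions are a common but arbitrary choice):

# train / evaluation / test split: tune parameters on eval_set, report final performance on test_set
train_set, eval_set, test_set = scaled_category_data.randomSplit([0.6, 0.2, 0.2], 123)

The experiment below uses the simpler two-way variant.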

Here we split the data into a 60% training set and a 40% test set (for reproducibility we fix the random seed at 123 so that every run gives the same result):

train_test_split = scaled_category_data.randomSplit([0.6, 0.4], 123)
train = train_test_split[0]
test = train_test_split[1]
for reg_para in [0.0, 0.001, 0.0025, 0.005, 0.01]:
    # train on the training set, evaluate on the held-out test set
    model = train_with_params(train, LogisticRegressionWithSGD, reg_para, 10, 1.0)
    label, ROC, PR = create_metrics('%f regularization parameter' % reg_para, test, model)
    print '%s, AUC = %f, PR = %f' % (label, ROC, PR)


Output:
0.000000 regularization parameter, AUC = 0.584085, PR = 0.757776
0.001000 regularization parameter, AUC = 0.584085, PR = 0.747404
0.002500 regularization parameter, AUC = 0.584085, PR = 0.747404
0.005000 regularization parameter, AUC = 0.584085, PR = 0.747404
0.010000 regularization parameter, AUC = 0.584085, PR = 0.747404



In cross-validation we generally select the parameter settings (regularization, step size, and so on) that perform best on the test set, then retrain the model with those settings on the full dataset and use it to predict on new data.
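
A hedged sketch of that final step, reusing the split and helpers from above (the parameter grid is illustrative):

# pick the regParam with the best held-out AUC, then retrain on all of the data
results = []
for reg_para in [0.0, 0.001, 0.0025, 0.005, 0.01]:
    model = train_with_params(train, LogisticRegressionWithSGD, reg_para, 10, 1.0)
    results.append((reg_para, create_metrics('cv', test, model)[1]))
best_reg = max(results, key=lambda pair: pair[1])[0]
final_model = train_with_params(scaled_category_data, LogisticRegressionWithSGD, best_reg, 10, 1.0)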



7 Summary



This article introduced the classification models available in Spark MLlib, discussed how to train them on input data, and showed how to evaluate their performance with standard metrics. It also discussed how to apply the feature-processing techniques covered earlier to obtain better performance. Finally, we examined the impact of using the correct data format and distribution, adding more training data, tuning model parameters, and cross-validation on model performance.

In the next post of this series, we will apply similar methods to MLlib's regression models.










