Ranking Feature Importance with Random Forests

Two methods:

1. Mean decrease impurity

Roughly: for each tree, features are scored by how much they reduce impurity at the splits where they are used, and the scores are averaged over the whole forest. The splitting criterion itself is impurity-based: for classification it is usually Gini impurity or information gain/entropy, and for regression it is variance.

A few caveats when ranking features by impurity:
(1) Impurity-based feature selection is biased toward variables with many categories (or many distinct values).
(2) When correlated features exist, once one of them is selected, the importance of the other correlated features drops sharply, because the impurity they could have reduced has already been removed by the feature chosen first.
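Caveat (2) is easy to reproduce on synthetic data. The sketch below (all names and values are illustrative, not from the original post) duplicates an informative feature and shows its mean-decrease-impurity score being split between the two near-identical copies:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
x1 = rng.rand(1000)                       # informative feature
noise = rng.rand(1000)                    # irrelevant feature
y = 3 * x1 + rng.normal(scale=0.1, size=1000)

# X_a: informative feature alone; X_b: plus a near-identical copy of it
X_a = np.column_stack([x1, noise])
X_b = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=1000), noise])

imp_a = RandomForestRegressor(random_state=0).fit(X_a, y).feature_importances_
imp_b = RandomForestRegressor(random_state=0).fit(X_b, y).feature_importances_
print(imp_a)  # x1 takes nearly all the importance
print(imp_b)  # importance is now split between x1 and its copy
```

Neither copy is individually "unimportant"; the forest simply divides the credit between them, which is exactly why impurity-based rankings can mislead when features are correlated.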

Implementation with sklearn (note that print statements are updated for Python 3, and load_boston has been removed from scikit-learn since version 1.2):

from sklearn.datasets import load_boston  # removed in scikit-learn >= 1.2
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Load the Boston housing dataset as an example
boston = load_boston()
X = boston["data"]
print(type(X), X.shape)
Y = boston["target"]
names = boston["feature_names"]
print(names)

rf = RandomForestRegressor()
rf.fit(X, Y)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), names),
             reverse=True))

The result:

Features sorted by their score:
[(0.5104, 'RM'), (0.2837, 'LSTAT'), (0.0812, 'DIS'), (0.0303, 'CRIM'), (0.0294, 'NOX'), (0.0176, 'PTRATIO'), (0.0134, 'AGE'), (0.0115, 'B'), (0.0089, 'TAX'), (0.0077, 'INDUS'), (0.0051, 'RAD'), (0.0006, 'ZN'), (0.0004, 'CHAS')]

2. Mean decrease accuracy

This method directly measures each feature's impact on the model's predictive accuracy. The basic idea is to randomly permute the values of one feature column and observe how much the model's accuracy drops. For an unimportant feature the permutation barely affects accuracy, while permuting an important feature degrades it substantially. In short: add noise to each feature in turn and measure the effect on the score; a small effect means the feature is unimportant, a large effect means it is important.

The concrete steps for computing the importance of a feature X in a random forest are:
a: For each decision tree in the forest, compute its error on the corresponding out-of-bag (OOB) data; denote it errOOB1.
b: Randomly inject noise into feature X across all OOB samples (i.e., randomly permute the samples' values of feature X), and compute the OOB error again; denote it errOOB2.

c: If the forest contains Ntree trees, the importance of feature X is Σ(errOOB2 − errOOB1)/Ntree. This expression works as an importance measure because if adding random noise to a feature sharply lowers the OOB accuracy, that feature has a large influence on the predictions, i.e., it is highly important.
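Steps a–c can be sketched directly with a hand-rolled bootstrap, so that each tree's OOB rows are known. This is a minimal illustration on synthetic data (variable names and sizes are made up for the example), not the original post's code:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
n, ntree = 500, 50
X = rng.rand(n, 3)
y = 5 * X[:, 0] + rng.normal(scale=0.1, size=n)  # only feature 0 matters

importance = np.zeros(X.shape[1])
for _ in range(ntree):
    boot = rng.randint(0, n, n)                   # bootstrap sample indices
    oob = np.setdiff1d(np.arange(n), boot)        # out-of-bag rows
    tree = DecisionTreeRegressor(random_state=0).fit(X[boot], y[boot])
    err1 = np.mean((tree.predict(X[oob]) - y[oob]) ** 2)      # errOOB1
    for j in range(X.shape[1]):
        X_perm = X[oob].copy()
        rng.shuffle(X_perm[:, j])                 # permute feature j only
        err2 = np.mean((tree.predict(X_perm) - y[oob]) ** 2)  # errOOB2
        importance[j] += (err2 - err1) / ntree    # Σ(errOOB2 - errOOB1)/Ntree
print(importance)  # feature 0 should dominate
```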

An example (using a test-set variant of the same idea, with score-drop normalized by the baseline score):

from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from collections import defaultdict

X = boston["data"]
Y = boston["target"]

rf = RandomForestRegressor()
scores = defaultdict(list)

# Cross-validate the scores on a number of different random splits of the data
for train_idx, test_idx in ShuffleSplit(n_splits=100, test_size=0.3).split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    Y_train, Y_test = Y[train_idx], Y[test_idx]
    rf.fit(X_train, Y_train)
    acc = r2_score(Y_test, rf.predict(X_test))
    for i in range(X.shape[1]):
        X_t = X_test.copy()
        np.random.shuffle(X_t[:, i])
        shuff_acc = r2_score(Y_test, rf.predict(X_t))
        scores[names[i]].append((acc - shuff_acc) / acc)
print("Features sorted by their score:")
print(sorted([(round(np.mean(score), 4), feat)
              for feat, score in scores.items()], reverse=True))

Output:

Features sorted by their score:
[(0.7276, 'LSTAT'), (0.5675, 'RM'), (0.0867, 'DIS'), (0.0407, 'NOX'), (0.0351, 'CRIM'), (0.0233, 'PTRATIO'), (0.0168, 'TAX'), (0.0122, 'AGE'), (0.005, 'B'), (0.0048, 'INDUS'), (0.0043, 'RAD'), (0.0004, 'ZN'), (0.0001, 'CHAS')]

In this example, LSTAT and RM are the two features with the largest impact on the model: permuting them reduces the model's R² score by about 73% and 57% respectively.
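Since version 0.22, scikit-learn ships this technique as sklearn.inspection.permutation_importance, so the loop above does not have to be written by hand. A minimal sketch on synthetic data (names and sizes here are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(300, 4)
y = 4 * X[:, 1] + rng.normal(scale=0.1, size=300)  # only feature 1 matters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Permute each feature n_repeats times on held-out data and record
# the mean drop in the model's score
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 1 should dominate
```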


Reference: http://blog.datadive.net/selecting-good-features-part-iii-random-forests/
