This post is a simplified summary (focused on code examples) and translation of the corresponding documentation.
Documentation: http://scikit-learn.org/stable/modules/feature_selection.html#feature-selection
1.13. Feature selection
Purpose of feature selection: improve a classifier's score and boost its performance on high-dimensional datasets.
from sklearn.feature_selection import VarianceThreshold
X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
sel.fit_transform(X)
array([[0, 1],
       [1, 0],
       [0, 0],
       [1, 1],
       [1, 0],
       [1, 1]])
Notes:
VarianceThreshold
By default it removes features with zero variance, i.e. features that take the same value in every sample.
VarianceThreshold(threshold=(.8 * (1 - .8))): the example assumes boolean features (values 0 and 1), and the threshold parameter is a variance; for a Bernoulli-distributed variable the variance is p(1-p) = 0.8 * (1 - 0.8).
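To illustrate the default behavior described above, here is a small sketch (the data is made up for illustration): with no threshold argument, only constant columns are dropped.

```python
from sklearn.feature_selection import VarianceThreshold

# Columns 0 and 3 are constant across all samples; columns 1 and 2 vary.
X = [[0, 2, 0, 3],
     [0, 1, 4, 3],
     [0, 1, 1, 3]]

selector = VarianceThreshold()          # threshold defaults to 0
X_reduced = selector.fit_transform(X)   # drops the two constant columns
print(X_reduced.shape)                  # (3, 2)
print(selector.variances_)              # per-feature variances used for the cut
```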
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
iris = load_iris()
X, y = iris.data, iris.target
X.shape
(150, 4)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
X_new.shape
(150, 2)
class sklearn.feature_selection.SelectKBest(score_func=<function f_classif>, k=10)
Attributes:
scores_ : array-like, shape=(n_features,) — scores of features.
pvalues_ : array-like, shape=(n_features,) — p-values of feature scores.
Common score functions:
f_regression (for regression)
chi2 or f_classif (for classification)
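A short sketch on the iris data showing the scores_ and pvalues_ attributes listed above, after fitting SelectKBest with chi2:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

iris = load_iris()
selector = SelectKBest(chi2, k=2).fit(iris.data, iris.target)

print(selector.scores_)        # chi2 statistic for each of the 4 features
print(selector.pvalues_)       # corresponding p-values
print(selector.get_support())  # boolean mask of the k=2 selected features
```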
RFECV performs RFE in a cross-validation loop to find the optimal number of features.
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)  # recursive feature elimination
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold  # cross_validation was removed in newer sklearn
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
                           n_redundant=2, n_repeated=0, n_classes=8,
                           n_clusters_per_class=1, random_state=0)
svc = SVC(kernel="linear")
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2), scoring='accuracy')
rfecv.fit(X, y)
print("Optimal number of features : %d" % rfecv.n_features_)
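Beyond n_features_, the fitted RFECV also tells you which features survived. A small self-contained sketch (the dataset sizes here are made up to keep it fast):

```python
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)
rfecv = RFECV(estimator=SVC(kernel="linear"), step=1,
              cv=StratifiedKFold(2), scoring="accuracy")
rfecv.fit(X, y)

print(rfecv.n_features_)  # optimal number of features found by CV
print(rfecv.support_)     # boolean mask over the 10 original features
print(rfecv.ranking_)     # rank 1 = selected; higher = eliminated earlier
```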