Chapter 2: Data Processing in Statistical Learning with scikit-learn

2.1 Statistical learning: settings and estimator objects

2.1.1 Datasets

>>> from sklearn import datasets

>>> iris = datasets.load_iris()

>>> data = iris.data
>>> data.shape

This shows the number of samples and the number of features: iris.data has shape (150, 4), i.e. 150 samples, each described by 4 features.

This is the case where the feature data is already a two-dimensional array of shape (n_samples, n_features).

>>>digits=datasets.load_digits()

>>>digits.images.shape
(1797, 8, 8)
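The digits images are 8x8 arrays, so to use them with scikit-learn each image must be flattened into a feature vector of length 64. A minimal sketch of that reshaping (the variable name data below is just illustrative):

>>> # Flatten the (1797, 8, 8) image stack into an (n_samples, n_features) matrix
>>> data = digits.images.reshape((digits.images.shape[0], -1))
>>> data.shape
(1797, 64)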

2.1.2 Estimator objects

An estimator is any object that estimates or predicts a target using classification, regression, or other methods, and it may also process or transform the features. Every estimator exposes a fit method:

>>> estimator.fit(data)
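A minimal sketch of the estimator convention (Estimator, param1, param2, and estimated_param_ are placeholder names, not a specific scikit-learn class): constructor arguments are the estimator's parameters, and attributes learned from the data receive a trailing underscore once fit has been called.

>>> estimator = Estimator(param1=1, param2=2)  # parameters set at construction
>>> estimator.param1
1
>>> estimator.fit(data)                        # parameters estimated from the data
>>> estimator.estimated_param_                 # fitted attributes end with an underscore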

2.2 Supervised learning: predicting an output variable from high-dimensional observations

2.2.1 Nearest neighbors and the curse of dimensionality (predicting the iris species)

>>> import numpy as np
>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> iris_X = iris.data
>>> iris_y = iris.target
>>> np.unique(iris_y)
array([0, 1, 2])

When experimenting with any learning algorithm, it is important not to test an estimator's predictions on the data used to fit it: that would not evaluate how well the estimator performs on new data. This is why the data is split into training and test sets.
KNN (k-nearest neighbors) example:
>>> # Split iris data in train and test data
>>> # A random permutation, to split the data randomly
>>> np.random.seed(0)
>>> indices = np.random.permutation(len(iris_X))
>>> iris_X_train = iris_X[indices[:-10]]
>>> iris_y_train = iris_y[indices[:-10]]
>>> iris_X_test = iris_X[indices[-10:]]
>>> iris_y_test = iris_y[indices[-10:]]
>>> # Create and fit a nearest-neighbor classifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> knn = KNeighborsClassifier()
>>> knn.fit(iris_X_train, iris_y_train)
The fit call returns the classifier together with its parameters:
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_neighbors=5, p=2, weights='uniform')
>>> knn.predict(iris_X_test)
array([1, 2, 1, 0, 0, 0, 2, 1, 2, 0])
>>> iris_y_test
array([1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
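The accuracy on the held-out samples can be checked directly with the score method; since 9 of the 10 predictions above match the true labels, it comes out to 0.9 for this particular split:

>>> # Fraction of correct predictions on the test samples
>>> knn.score(iris_X_test, iris_y_test)
0.9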

2.2.2 Linear models: from regression to sparsity
The diabetes dataset: it consists of 10 physiological features (age, sex, weight, blood pressure, ...) measured on 442 patients, together with an indication of disease progression after one year. Example:
>>> diabetes = datasets.load_diabetes()
>>> diabetes_X_train = diabetes.data[:-20]
>>> diabetes_X_test = diabetes.data[-20:]
>>> diabetes_y_train = diabetes.target[:-20]
>>> diabetes_y_test = diabetes.target[-20:]
The task is to predict disease progression from the physiological features.
1. Linear regression
Linear regression fits a linear model to the data by minimizing the sum of squared residuals. Example:
>>> from sklearn import linear_model
>>> regr = linear_model.LinearRegression()  # linear regression
>>> regr.fit(diabetes_X_train, diabetes_y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> print(regr.coef_)  # the fitted coefficients
[ 0.30349955 -237.63931533 510.53060544 327.73698041 -814.13170937
492.81458798 102.84845219 184.60648906 743.51961675 76.09517222]
>>> # The mean squared error
>>> np.mean((regr.predict(diabetes_X_test)-diabetes_y_test)**2)
2004.56760268...
>>> # Explained variance score: 1 is perfect prediction, 0 means no linear relationship
>>> regr.score(diabetes_X_test, diabetes_y_test)
0.5850753022690...
Different algorithms can be used to solve the same problem.
For the iris classification problem, instead of fitting a plain linear function we can fit a sigmoid function, which yields a logistic regression classifier:
>>> logistic = linear_model.LogisticRegression(C=1e5)
>>> logistic.fit(iris_X_train, iris_y_train)
For multiclass problems, a one-vs-rest strategy is used: one binary classifier is fitted per class, and a voting heuristic then decides the final prediction.
On sparsity: the parameter C controls the amount of regularization in the logistic regression (a larger C means less regularization), and using an L1 penalty instead of the default L2 penalty yields sparse coefficients.
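A hedged sketch of that last point (not part of the original example): switching to an L1 penalty can drive some coefficients to exactly zero. On recent scikit-learn versions the L1 penalty also requires a compatible solver such as 'liblinear' or 'saga'.

>>> # L1-penalized logistic regression can zero out some coefficients
>>> sparse_logistic = linear_model.LogisticRegression(C=1.0, penalty='l1',
...                                                   solver='liblinear')
>>> sparse_logistic.fit(iris_X_train, iris_y_train)
>>> sparse_logistic.coef_  # inspect which coefficients were driven to zero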
2.2.3 Support vector machines (SVMs)
Linear SVMs
The regularization strength is set by the parameter C: a small C means the margin is computed using many or all of the observations around the separating line (more regularization), while a large C means the margin is computed using only the observations close to the separating line (less regularization).
SVMs can be used for regression (SVR, Support Vector Regression) or for classification (SVC, Support Vector Classification).
>>> from sklearn import svm
>>> svc = svm.SVC(kernel='linear')
>>> svc.fit(iris_X_train, iris_y_train)
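As an aside on the C parameter discussed above, here is a small illustrative sketch (an addition; the values 0.1 and 100 are arbitrary):

>>> # Small C: stronger regularization, margin shaped by many observations
>>> svc_soft = svm.SVC(kernel='linear', C=0.1)
>>> # Large C: weaker regularization, margin dominated by points near the boundary
>>> svc_hard = svm.SVC(kernel='linear', C=100)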
Using kernels
Classes are not always linearly separable in feature space. The solution is to use a decision function that is not linear, for example a polynomial kernel:
Linear kernel:
>>> svc = svm.SVC(kernel='linear')

Polynomial kernel:
>>> svc = svm.SVC(kernel='poly',
...               degree=3)
>>> # degree: degree of the polynomial

RBF (radial basis function) kernel:
>>> svc = svm.SVC(kernel='rbf')
>>> # gamma: inverse of the size of the radial kernel
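A small added sketch showing one of these kernel classifiers fitted on the iris training split from section 2.2.1 and scored on the held-out samples (output omitted):

>>> # Fit the RBF-kernel SVC on the iris training data and evaluate on the test data
>>> svc = svm.SVC(kernel='rbf')
>>> svc.fit(iris_X_train, iris_y_train)
>>> svc.score(iris_X_test, iris_y_test)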
2.3 Model selection
2.3.1 Cross-validation
Every estimator exposes a score method that measures how well the model fits new data (bigger is better):
>>> from sklearn import datasets, svm
>>> digits = datasets.load_digits()
>>> X_digits = digits.data
>>> y_digits = digits.target
>>> svc = svm.SVC(C=1, kernel='linear')
>>> svc.fit(X_digits[:-100], y_digits[:-100]).score(X_digits[-100:], y_digits[-100:])
0.97999999999999998
To get a better measure of prediction performance, we can use cross-validation: the data is split into folds, each fold is held out in turn as the test set while the remaining folds are used for training, and the scores are collected:
>>> import numpy as np
>>> X_folds = np.array_split(X_digits, 3)
>>> y_folds = np.array_split(y_digits, 3)
>>> scores = list()
>>> for k in range(3):
...     # We use 'list' to copy, in order to 'pop' later on
...     X_train = list(X_folds)
...     X_test = X_train.pop(k)
...     X_train = np.concatenate(X_train)
...     y_train = list(y_folds)
...     y_test = y_train.pop(k)
...     y_train = np.concatenate(y_train)
...     scores.append(svc.fit(X_train, y_train).score(X_test, y_test))
>>> print(scores)
[0.93489148580968284, 0.95659432387312182, 0.93989983305509184]
2.3.2 Cross-validation generators
scikit-learn provides cross-validation generators that produce the train/test index splits for you, instead of splitting the data by hand:
>>> from sklearn import cross_validation
>>> k_fold = cross_validation.KFold(n=6, n_folds=3)
>>> for train_indices, test_indices in k_fold:
...     print('Train: %s | test: %s' % (train_indices, test_indices))
Train: [2 3 4 5] | test: [0 1]
Train: [0 1 4 5] | test: [2 3]
Train: [0 1 2 3] | test: [4 5]
The cross-validation can then be implemented easily:
>>> kfold = cross_validation.KFold(len(X_digits), n_folds=3)
>>> [svc.fit(X_digits[train], y_digits[train]).score(X_digits[test], y_digits[test])
... for train, test in kfold]
Alternatively, scikit-learn provides a helper function that computes the cross-validation scores directly:
>>> cross_validation.cross_val_score(svc, X_digits, y_digits, cv=kfold, n_jobs=-1)
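Note (an addition, not in the original text): in scikit-learn 0.18 and later the cross_validation module was replaced by model_selection, and KFold takes n_splits instead of n/n_folds. A minimal sketch of the equivalent calls on a recent version:

>>> from sklearn.model_selection import KFold, cross_val_score
>>> kfold = KFold(n_splits=3)
>>> cross_val_score(svc, X_digits, y_digits, cv=kfold, n_jobs=-1)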

2.3.3 Grid-search and cross-validated estimators

scikit-learn provides GridSearchCV, which searches over a grid of parameter values and uses cross-validation to find the best ones automatically:

>>> from sklearn.grid_search import GridSearchCV
>>> Cs = np.logspace(-6, -1, 10)
>>> clf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs),
...                    n_jobs=-1)
>>> clf.fit(X_digits[:1000], y_digits[:1000])
GridSearchCV(cv=None,...
>>> clf.best_score_
0.925...
>>> clf.best_estimator_.C
0.0077... 
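By default GridSearchCV itself uses cross-validation (three-fold in these older scikit-learn versions) when selecting the parameter, so the fitted clf behaves like an ordinary estimator. To get an unbiased estimate of its performance it can in turn be placed inside a cross-validation loop, roughly as follows (a sketch using the same old cross_validation API as above):

>>> # Nested cross-validation: evaluate the grid-searched estimator itself
>>> cross_validation.cross_val_score(clf, X_digits, y_digits)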

Alternatively, some estimators set their regularization parameter by cross-validation themselves, for example LassoCV:

>>> from sklearn import linear_model, datasets

>>> lasso = linear_model.LassoCV()
>>> diabetes = datasets.load_diabetes()
>>> X_diabetes = diabetes.data

>>> y_diabetes = diabetes.target
>>> lasso.fit(X_diabetes, y_diabetes)
LassoCV(alphas=None, copy_X=True, cv=None, eps=0.001, fit_intercept=True,
    max_iter=1000, n_alphas=100, n_jobs=1, normalize=False, positive=False,
    precompute='auto', random_state=None, selection='cyclic', tol=0.0001,
    verbose=False)

>>> # The estimator chose its regularization parameter automatically:
>>> lasso.alpha_
0.01229... 



2.4 Unsupervised learning

2.4.1 Clustering

K-means clustering 

>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target

>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(copy_x=True, init='k-means++', ...
>>> print(k_means.labels_[::10])
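To judge the clustering, the assignments above can be compared with the true species labels (an added note: K-means cluster numbers are arbitrary, so they match the ground truth only up to a relabeling):

>>> # True species labels for the same every-10th samples
>>> print(y_iris[::10])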

Hierarchical clustering comes in two flavors: divisive (splitting clusters top-down) and agglomerative (merging clusters bottom-up).

Principal component analysis (PCA)

PCA finds the principal components, i.e. the directions along which the data varies the most.

In the example below, one of the three features can be computed exactly from the other two, so the data can be represented in fewer dimensions.

>>> # Create a signal with only 2 useful dimensions

>>> x1 = np.random.normal(size=100)
>>> x2 = np.random.normal(size=100)
>>> x3 = x1 + x2

>>> X = np.c_[x1, x2, x3]

>>> from sklearn import decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
PCA(copy=True, n_components=None, whiten=False)
>>> print(pca.explained_variance_)

[  2.18565811e+00   1.19346747e+00   8.43026679e-32]

>>> # As we can see, only the first 2 components are useful
>>> pca.n_components = 2
>>> X_reduced = pca.fit_transform(X)
>>> X_reduced.shape

(100, 2) 
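An added note (not in the original): PCA also exposes explained_variance_ratio_, the fraction of total variance carried by each retained component; for the synthetic data above, the two retained components should account for essentially all of the variance.

>>> # Fraction of total variance explained by each retained component
>>> print(pca.explained_variance_ratio_)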

Independent component analysis (ICA)

>>> # Generate sample data
>>> time = np.linspace(0, 10, 2000)
>>> s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
>>> s2 = np.sign(np.sin(3 * time))  # Signal 2: square signal
>>> S = np.c_[s1, s2]
>>> S += 0.2 * np.random.normal(size=S.shape) # Add noise
>>> S /= S.std(axis=0) # Standardize data
>>> # Mix data
>>> A = np.array([[1, 1], [0.5, 2]]) # Mixing matrix
>>> X = np.dot(S, A.T) # Generate observations

>>> # Compute ICA
>>> ica = decomposition.FastICA()
>>> S_ = ica.fit_transform(X)  # Get the estimated sources
>>> A_ = ica.mixing_.T
>>> np.allclose(X, np.dot(S_, A_) + ica.mean_)
True 






 
