【Scikit-Learn】Scikit-Learn in Practice

Original article: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet

Scikit-Learn resources:

An introduction to machine learning with scikit-learn: http://scikit-learn.org/stable/tutorial/basic/tutorial.html

Scikit-Learn user guide: http://scikit-learn.org/stable/user_guide.html

Scikit-Learn tutorials: http://scikit-learn.org/stable/tutorial/index.html


Introduction:

        Most people learning data science with Python will have heard of scikit-learn, an open-source Python library that implements a wide range of machine learning, preprocessing, cross-validation, and visualization algorithms behind a unified interface.

        This article walks you through the basic steps needed to apply a machine learning algorithm successfully: you will see how to load your data, how to preprocess it, how to create a model that you can fit to your data and use to predict target labels, how to validate your model, and how to tune it further to improve its performance.



In Practice

A Basic Example

>>> from sklearn import neighbors, datasets, preprocessing
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, :2], iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)
>>> knn.fit(X_train, y_train)
>>> y_pred = knn.predict(X_test)
>>> accuracy_score(y_test, y_pred)


Loading Your Data

Your data needs to be numeric and stored as NumPy arrays or SciPy sparse matrices. Other types that can be converted to numeric arrays, such as a Pandas DataFrame, are also acceptable.
>>> import numpy as np
>>> X = np.random.random((10,5))
>>> y = np.array(['M','M','F','F','M','F','M','M','F','F'])
>>> X[X < 0.7] = 0
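
As noted above, a Pandas DataFrame is acceptable as well. A minimal sketch (the column names below are made up for illustration):

>>> import pandas as pd
>>> df = pd.DataFrame(X, columns=['f1', 'f2', 'f3', 'f4', 'f5'])   # hypothetical feature names
>>> df.values   # the underlying NumPy array that scikit-learn estimators work with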


Preprocessing the Data

Standardization

>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler().fit(X_train)
>>> standardized_X = scaler.transform(X_train)
>>> standardized_X_test = scaler.transform(X_test)


Normalization

>>> from sklearn.preprocessing import Normalizer
>>> scaler = Normalizer().fit(X_train)
>>> normalized_X = scaler.transform(X_train)
>>> normalized_X_test = scaler.transform(X_test)


Binarization

>>> from sklearn.preprocessing import Binarizer
>>> binarizer = Binarizer(threshold=0.0).fit(X)
>>> binary_X = binarizer.transform(X)


Encoding Categorical Features

>>> from sklearn.preprocessing import LabelEncoder
>>> enc = LabelEncoder()
>>> y = enc.fit_transform(y)
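
LabelEncoder is meant for the label vector y; for categorical input features, one-hot encoding is the usual companion. A minimal sketch, assuming a small hypothetical column of string categories:

>>> from sklearn.preprocessing import OneHotEncoder
>>> X_cat = [['red'], ['green'], ['blue'], ['green']]   # hypothetical categorical column
>>> ohe = OneHotEncoder()
>>> ohe.fit_transform(X_cat).toarray()                  # one 0/1 column per category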


Imputing Missing Values

>>> from sklearn.impute import SimpleImputer
>>> imp = SimpleImputer(missing_values=0, strategy='mean')
>>> imp.fit_transform(X_train)


Generating Polynomial Features

>>> from sklearn.preprocessing import PolynomialFeatures
>>> poly = PolynomialFeatures(5)
>>> poly.fit_transform(X)


Training and Test Data

>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)


Creating Your Model

Supervised Learning Estimators

Linear Regression

>>> from sklearn.linear_model import LinearRegression
>>> lr = LinearRegression()


Support Vector Machines (SVM)

>>> from sklearn.svm import SVC
>>> svc = SVC(kernel='linear')


Naive Bayes

>>> from sklearn.naive_bayes import GaussianNB
>>> gnb = GaussianNB()


KNN

>>> from sklearn import neighbors
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)


Unsupervised Learning Estimators

Principal Component Analysis (PCA)

>>> from sklearn.decomposition import PCA
>>> pca = PCA(n_components=0.95)


K Means

>>> from sklearn.cluster import KMeans
>>> k_means = KMeans(n_clusters=3, random_state=0)


Model Fitting

Supervised learning

>>> lr.fit(X, y)
>>> knn.fit(X_train, y_train)
>>> svc.fit(X_train, y_train)


Unsupervised learning

>>> k_means.fit(X_train)
>>> pca_model = pca.fit_transform(X_train)


Prediction

Supervised estimators

>>> y_pred = svc.predict(np.random.random((2,5)))   # Predict labels
>>> y_pred = lr.predict(X_test)                     # Predict labels
>>> y_pred = knn.predict_proba(X_test)              # Estimate probability of each label


Unsupervised estimators

>>> y_pred = k_means.predict(X_test)   # Predict cluster labels



Evaluating Your Model's Performance

Classification Metrics

Accuracy Score

>>> knn.score(X_test, y_test)
>>> from sklearn.metrics import accuracy_score
>>> accuracy_score(y_test, y_pred)


Classification Report

>>> from sklearn.metrics import classification_report
>>> print(classification_report(y_test, y_pred))


Confusion Matrix

>>> from sklearn.metrics import confusion_matrix
>>> print(confusion_matrix(y_test, y_pred))



Regression Metrics

Mean Absolute Error

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2]
>>> mean_absolute_error(y_true, y_pred)


Mean Squared Error

>>> from sklearn.metrics import mean_squared_error
>>> mean_squared_error(y_test, y_pred)


R² Score

>>> from sklearn.metrics import r2_score
>>> r2_score(y_true, y_pred)


Clustering Metrics

Adjusted Rand Index

>>> from sklearn.metrics import adjusted_rand_score
>>> adjusted_rand_score(y_true, y_pred)


Homogeneity

>>> from sklearn.metrics import homogeneity_score
>>> homogeneity_score(y_true, y_pred)


V-measure

>>> from sklearn.metrics import v_measure_score
>>> v_measure_score(y_true, y_pred)


Cross-Validation

>>> from sklearn.model_selection import cross_val_score
>>> print(cross_val_score(knn, X_train, y_train, cv=4))
>>> print(cross_val_score(lr, X, y, cv=2))


Tuning Your Model

Grid Search

>>> from sklearn.model_selection import GridSearchCV
>>> params = {"n_neighbors": np.arange(1,3), "metric": ["euclidean", "cityblock"]}
>>> grid = GridSearchCV(estimator=knn, param_grid=params)
>>> grid.fit(X_train, y_train)
>>> print(grid.best_score_)
>>> print(grid.best_estimator_.n_neighbors)


Randomized Parameter Optimization

>>> from sklearn.model_selection import RandomizedSearchCV
>>> params = {"n_neighbors": range(1,5), "weights": ["uniform", "distance"]}
>>> rsearch = RandomizedSearchCV(estimator=knn,
...                              param_distributions=params,
...                              cv=4,
...                              n_iter=8,
...                              random_state=5)
>>> rsearch.fit(X_train, y_train)
>>> print(rsearch.best_score_)


Going Further

Start with our scikit-learn tutorial for beginners, in which you'll learn, in an easy step-by-step way, how to explore handwritten digits data, how to create a model for it, how to fit the data to your model, and how to predict target values. In addition, you'll use Python's data visualization library matplotlib to visualize your results.
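
As a rough, hedged sketch of that workflow (the SVC classifier and its gamma value here are assumptions for illustration, not necessarily what the linked tutorial uses):

>>> from sklearn import datasets, svm
>>> from sklearn.model_selection import train_test_split
>>> import matplotlib.pyplot as plt
>>> digits = datasets.load_digits()
>>> X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, random_state=0)
>>> clf = svm.SVC(gamma=0.001).fit(X_train, y_train)   # fit a classifier to the training digits
>>> clf.score(X_test, y_test)                          # accuracy on the held-out digits
>>> plt.imshow(digits.images[0], cmap='gray')          # visualize the first handwritten digit
>>> plt.show()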
