35 Exercises to Get Started with scikit-learn, Your Machine Learning Helper

Hi everyone, the X-exercises series is back again~

This time it's the scikit-learn library. scikit-learn is a machine learning library for Python that provides simple, efficient tools for data mining and data analysis. Together with pandas and NumPy it forms the "big three" of machine learning (another name I made up myself [grin]).

Click here to run the code online, no environment setup needed:

35 Exercises to Get Started with scikit-learn, Your Machine Learning Helper

Beginners can refer to the following tutorials:

  • Python Machine Learning Notes: Learning the sklearn Library
  • scikit-learn (sklearn) Official Documentation, Chinese Edition
  • scikit-learn: Machine Learning in Python

Other entries in the X-exercises series:

  • 50 Exercises to Master Pandas
  • 100 Exercises to Master NumPy
  • 35 Exercises to Get Started with scikit-learn, Your Machine Learning Helper √
  • 50 Exercises: matplotlib from Beginner to Pro
  • 40 Exercises to Crush Keras: Life Is Short, I Use Keras
  • 60 Exercises: A Quick Introduction to PyTorch, Ride the Tech Wave
  • 50 Exercises: Truly One Article to Get Started with TensorFlow 2.x
  • 90 Exercises to Savor Andrew Ng's Machine Learning, and Feel the Terror of Exercise Grinding
  • 170 Exercises on Andrew Ng's Deep Learning from Every Angle, One Set Beats Three
  • [COVID-19 Special] 33 Exercises in Hands-On Data Visualization

I. Getting Data

1. Import the datasets module from sklearn

from sklearn import datasets

2. Load the built-in handwritten digits dataset

import matplotlib.pyplot as plt

digits = datasets.load_digits()

plt.matshow(digits.images[0])
plt.show()
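Before moving on, it can help to peek at the returned Bunch object itself; a quick sketch using the standard attributes that load_digits documents:

# The digits Bunch holds 1797 8x8 images, flattened into 64 features
print(digits.data.shape)    # (1797, 64)
print(digits.images.shape)  # (1797, 8, 8)
print(digits.target[:10])   # labels of the first ten samples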

3. Generate data for clustering: 100 samples, 2 features, 5 clusters

data, label = datasets.make_blobs(n_samples=100, n_features=2, centers=5)

plt.scatter(data[:, 0], data[:, 1], c=label)
plt.show()

II. Data Preprocessing

4. Import the preprocessing module from sklearn

from sklearn import preprocessing

5. Compute the mean and variance of a dataset (scaler)

data = [[0, 0], [1, 0], [-1, 1], [1, 2]]

scalerstd = preprocessing.StandardScaler().fit(data)

print(scalerstd.mean_)
print(scalerstd.var_)
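As a sanity check, the same statistics can be recomputed directly with NumPy; note that var_ is the variance, not the standard deviation:

import numpy as np

print(np.mean(data, axis=0))  # should match scalerstd.mean_
print(np.var(data, axis=0))   # should match scalerstd.var_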

6. Standardize the data using the scaler from the previous exercise

scalerstd.transform(data)

7. Use min-max scaling to linearly map the data into the [0, 1] range

scalermm = preprocessing.MinMaxScaler(feature_range=(0, 1)).fit(data)
scalermm.transform(data)
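A small convenience worth knowing: every sklearn transformer also offers fit_transform, which collapses the two calls above into one; a minimal sketch:

# fit and transform in a single call, equivalent to the two-step version above
preprocessing.MinMaxScaler(feature_range=(0, 1)).fit_transform(data)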

8. Transform the data with L2 normalization

X = [[ 1., -1.,  2.],
    [ 2.,  0.,  0.],
    [ 0.,  1., -1.]]

scalernorm = preprocessing.Normalizer(norm='l2').fit(X)
scalernorm.transform(X)

# Method 2

# preprocessing.normalize(X, norm='l2')

9. One-hot encode the data

data = [[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]

scaleronehot = preprocessing.OneHotEncoder().fit(data)
scaleronehot.transform(data).toarray()
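To see which distinct values the encoder discovered in each column, the fitted categories can be inspected (the categories_ attribute, available since scikit-learn 0.20):

# one array per input column, listing the values being one-hot encoded
print(scaleronehot.categories_)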

10. Given a threshold, convert features to 0/1

data = [[0, 0], [1, 0], [-1, 1], [1, 2]]

scalerbin = preprocessing.Binarizer(threshold=0.5)
scalerbin.transform(data)

11. Label-encode the data

scalerlabel = preprocessing.LabelEncoder()
scalerlabel.fit(["paris", "paris", "tokyo", "amsterdam"])
scalerlabel.transform(["tokyo", "tokyo", "paris"])
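The mapping also works in reverse, which is handy for turning numeric predictions back into readable labels; a short sketch:

print(scalerlabel.classes_)                      # classes sorted alphabetically: amsterdam, paris, tokyo
print(scalerlabel.inverse_transform([2, 2, 1]))  # back to ['tokyo', 'tokyo', 'paris']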

III. Splitting Datasets

12. Split the data into training and test sets, with the test set making up 30%

from sklearn import model_selection

dataset = datasets.load_boston()  # note: load_boston was removed in scikit-learn 1.2; this call needs an older version
print(dataset.data.shape)

X_train, X_test, y_train, y_test = model_selection.train_test_split(dataset['data'], dataset['target'], test_size=0.3)

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
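Keep in mind that the split is random, so the shapes match but the contents change between runs. Passing random_state pins the split down (the value 42 below is arbitrary):

# fix the seed so the same train/test split is produced every run
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    dataset['data'], dataset['target'], test_size=0.3, random_state=42)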

13. Split the data into 3 folds

import numpy as np
X = np.array(['a', 'b', 'c', 'd', 'e', 'f'])

kfold = model_selection.KFold(n_splits=3)

for train, test in kfold.split(X):
    print("%s %s" % (X[train], X[test]))

IV. Using Models

14. Define a linear regression model

from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True, copy_X=True, n_jobs=1)  # the normalize argument was removed in scikit-learn 1.2

15. Load the built-in Boston housing dataset

X,y = datasets.load_boston(return_X_y=True)

16. Use the house price as y and the remaining features as X, with 30% as the test set

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

17. Fit the linear regression model to the Boston housing data

model.fit(X_train, y_train)

18. Make predictions with the trained model

model.predict(X_test)

19. Print the linear regression model's coefficients and intercept

print(model.coef_)
print(model.intercept_)
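To put numbers on the fit quality, the usual regression metrics can be computed from the test-set predictions; a quick sketch:

from sklearn.metrics import mean_squared_error, r2_score

y_pred = model.predict(X_test)
print(mean_squared_error(y_test, y_pred))  # mean squared error, lower is better
print(r2_score(y_test, y_pred))            # R^2 score, closer to 1 is better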

V. Evaluation

20. Classify the iris dataset with a random forest, n_estimators=100

from sklearn.ensemble import RandomForestClassifier
X,y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_pred

21. Score the resulting random forest model

clf.score(X, y)  # note: this scores on the full dataset, training samples included; clf.score(X_test, y_test) is the held-out measure

22. Print the model's precision_score, recall_score, and f1_score

from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
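A confusion matrix complements these per-class scores by showing exactly which classes get confused with which; a short sketch:

from sklearn.metrics import confusion_matrix

# rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred))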

23. Using the handwritten digits dataset and a random forest classifier, plot the learning curve

from sklearn.model_selection import learning_curve

X,y = datasets.load_digits(return_X_y=True)

train_sizes, train_score, test_score = learning_curve(
    RandomForestClassifier(n_estimators=10), X, y,
    train_sizes=[0.1, 0.2, 0.4, 0.6, 0.8, 1], scoring='accuracy')

train_error = 1 - np.mean(train_score, axis=1)
test_error = 1 - np.mean(test_score, axis=1)

plt.plot(train_sizes, train_error, 'o-', color='r', label='training')
plt.plot(train_sizes, test_error, 'o-', color='g', label='testing')
plt.legend(loc='best')
plt.show()

24. Using the handwritten digits dataset and a random forest classifier with n_estimators ranging over [10, 20, 40, 80, 160, 250], plot the validation curve

from sklearn.model_selection import validation_curve

X,y = datasets.load_digits(return_X_y=True)
param_range = [10,20,40,80,160,250]

train_score, test_score = validation_curve(
    RandomForestClassifier(), X, y,
    param_name='n_estimators', param_range=param_range,
    cv=10, scoring='accuracy')

train_score = np.mean(train_score, axis=1)
test_score = np.mean(test_score, axis=1)

plt.plot(param_range, train_score, 'o-', color='r', label='training')
plt.plot(param_range, test_score, 'o-', color='g', label='testing')
plt.legend(loc='best')
plt.show()

25. Use grid search to find the best parameters for a random forest on the handwritten digits dataset

from sklearn.model_selection import GridSearchCV
X,y = datasets.load_digits(return_X_y=True)
# parameter grid for the random forest
tree_param_grid = {'min_samples_split': [3, 6, 9], 'n_estimators': [10, 50, 100]}
grid = GridSearchCV(RandomForestClassifier(), param_grid=tree_param_grid, cv=5)
grid.fit(X, y)
grid.best_estimator_, grid.best_score_, grid.best_params_
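Since GridSearchCV refits the best configuration on the full data by default, the winning model can be used directly; a brief usage sketch:

# the refitted best model is ready to predict without further training
best_model = grid.best_estimator_
print(best_model.predict(X[:5]))
print(grid.best_params_, grid.best_score_)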

26. Take only the samples with class 0 or 1 from the iris dataset to form a binary classification problem, and fit an SVM with a linear kernel

from sklearn import svm
X,y = datasets.load_iris(return_X_y=True)
X, y = X[y != 2], y[y != 2]

# add some noise, otherwise the model would be too accurate
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=0)

clf_svm = svm.SVC(kernel='linear', probability=True)  # use a distinct name so the svm module isn't shadowed

clf_svm.fit(X_train, y_train)

27. Compute and plot the corresponding ROC curve

from sklearn.metrics import roc_curve, auc

y_score = clf_svm.decision_function(X_test)

fpr,tpr,threshold = roc_curve(y_test, y_score)

plt.plot(fpr, tpr, color='darkorange')
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.show()

28. Compute the corresponding AUC

roc_auc = auc(fpr, tpr)
roc_auc
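The same value can be obtained in a single call with roc_auc_score, skipping the explicit curve:

from sklearn.metrics import roc_auc_score

print(roc_auc_score(y_test, y_score))  # equivalent to auc(fpr, tpr) above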

VI. Dimensionality Reduction

29. Generate a dataset: 10,000 samples, 3 features, 4 clusters

from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA

X, y = datasets.make_blobs(n_samples=10000, n_features=3,
                           centers=[[3, 3, 3], [0, 0, 0], [1, 1, 1], [2, 2, 2]],
                           cluster_std=[0.2, 0.1, 0.2, 0.2], random_state=9)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # Axes3D(fig, ...) is deprecated in recent matplotlib
ax.view_init(elev=30, azim=20)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], marker='o')
plt.show()

30. Project the data with PCA onto the same number of dimensions, and show how the variance is distributed across the three components

pca = PCA(n_components=3)
pca.fit(X)
print (pca.explained_variance_ratio_)
print (pca.explained_variance_)

31. Reduce the 3-dimensional data to 2 dimensions

pca = PCA(n_components=2)
pca.fit(X)
print (pca.explained_variance_ratio_)
print (pca.explained_variance_)

X_new = pca.transform(X)
plt.scatter(X_new[:, 0], X_new[:, 1], marker='o')
plt.show()

32. Reduce dimensionality by specifying the proportion of variance (99%) the principal components must retain

pca = PCA(n_components=0.99)
pca.fit(X)
print (pca.explained_variance_ratio_)
print (pca.explained_variance_)
print (pca.n_components_)

33. Use the MLE algorithm to choose the number of dimensions automatically

pca = PCA(n_components='mle')
pca.fit(X)
print (pca.explained_variance_ratio_)
print (pca.explained_variance_)
print (pca.n_components_)

34. Generate a classification dataset: 1,000 samples, 3 features, 3 classes

X2, y2 = datasets.make_classification(n_samples=1000, n_features=3, n_redundant=0,
                                      n_classes=3, n_informative=2,
                                      n_clusters_per_class=1, class_sep=0.5, random_state=10)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.view_init(elev=30, azim=20)
ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2], marker='o', c=y2)
plt.show()

35. Reduce the data with LDA

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components=2)
lda.fit(X2, y2)
X2_new = lda.transform(X2)
plt.scatter(X2_new[:, 0], X2_new[:, 1], marker='o', c=y2)
plt.show()
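Unlike PCA, LDA is supervised: its components are chosen to separate the classes rather than to maximize total variance. The share of between-class variance carried by each component can be inspected much like in PCA:

# proportion of between-class variance explained by each LDA component
print(lda.explained_variance_ratio_)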

Click here to run the code online, no environment setup needed:

35 Exercises to Get Started with scikit-learn, Your Machine Learning Helper
