【阿旭机器学习实战】【9】 Predicting benign vs. malignant breast cancer with stochastic gradient descent (SGD), compared with logistic regression

The 【阿旭机器学习实战】 (Axu Machine Learning in Action) series introduces common machine learning algorithms together with hands-on case studies. Likes and follows are welcome; let's learn and exchange ideas together.

This article uses stochastic gradient descent (SGD) to predict whether a breast tumor is benign or malignant, and compares the result against a logistic regression model.

Contents

  • Gradient descent model (SGD): predicting benign vs. malignant breast cancer
    • Reading the data
    • Cleaning the data: dropping rows with missing values
    • Extracting features and labels
    • Standardizing the data
    • Building the models
      • Stochastic gradient descent
      • Logistic regression

Gradient descent model (SGD): predicting benign vs. malignant breast cancer

Reading the data

import pandas as pd
import numpy as np
cancer = pd.read_csv("../data/cencerData.csv")
cancer.head(10)
Sample code number Clump Thickness Uniformity of Cell Size Uniformity of Cell Shape Marginal Adhesion Single Epithelial Cell Size Bare Nuclei Bland Chromatin Normal Nucleoli Mitoses Class
0 1000025 5 1 1 1 2 1 3 1 1 2
1 1002945 5 4 4 5 7 10 3 2 1 2
2 1015425 3 1 1 1 2 2 3 1 1 2
3 1016277 6 8 8 1 3 4 3 7 1 2
4 1017023 4 1 1 3 2 1 3 1 1 2
5 1017122 8 10 10 8 7 10 9 7 1 4
6 1018099 1 1 1 1 2 10 3 1 1 2
7 1018561 2 1 2 1 2 1 3 1 1 2
8 1033078 2 1 1 1 2 1 1 1 5 2
9 1033078 4 2 1 1 2 1 2 1 1 2

Feature descriptions:
Sample code number: sample ID;
Clump Thickness: thickness of the tumor clump;
Uniformity of Cell Size: uniformity of the cell sizes;
Uniformity of Cell Shape: uniformity of the cell shapes;
Marginal Adhesion: adhesion at the cell margins;
Single Epithelial Cell Size: size of the single epithelial cells;
Bare Nuclei: bare nuclei;
Bland Chromatin: texture of the chromatin;
Normal Nucleoli: normality of the nucleoli;
Mitoses: mitotic activity;
Class: the label, 2 for benign and 4 for malignant.

cancer.shape
(699, 11)

Cleaning the data: dropping rows with missing values

# Replace "?" placeholders with NaN
cancer.replace({"?":np.nan},inplace=True)
cancer.isnull().any()
Sample code number             False
Clump Thickness                False
Uniformity of Cell Size        False
Uniformity of Cell Shape       False
Marginal Adhesion              False
Single Epithelial Cell Size    False
Bare Nuclei                     True
Bland Chromatin                False
Normal Nucleoli                False
Mitoses                        False
Class                          False
dtype: bool
# The Bare Nuclei column has missing values; drop every row that contains one
cancer.dropna(how="any",axis=0,inplace=True)
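
One caveat: because of the "?" placeholders, pandas reads the Bare Nuclei column with object (string) dtype, and it stays that way after dropna. scikit-learn of this era casts numeric strings to float on the fly, but an explicit conversion is safer; a minimal sketch:

# Bare Nuclei was parsed as strings because of the "?" placeholder;
# cast it back to a numeric dtype before feeding it to scikit-learn.
cancer["Bare Nuclei"] = pd.to_numeric(cancer["Bare Nuclei"])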

Extracting features and labels

# Feature matrix: the nine feature columns (skip the ID column and the label)
x = cancer.iloc[:,1:10]
# Labels (a 1-D Series avoids shape warnings when fitting)
y = cancer["Class"]
x.shape
(683, 9)
# Split off 10% of the data as a test set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.1)
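
As written, every run produces a different split, so the accuracy numbers below will vary. For reproducible results, fix the seed and stratify on the label; a sketch (the random_state value is arbitrary):

# random_state makes the split reproducible; stratify keeps the
# benign/malignant ratio identical in the train and test sets.
x_train,x_test,y_train,y_test = train_test_split(
    x, y, test_size=0.1, random_state=42, stratify=y)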

Standardizing the data

For linear models:
standardize every feature so that it follows a standard normal distribution, z = (x - mean) / std. This keeps features with large raw scales from dominating the algorithm.

from sklearn.preprocessing import StandardScaler
# Create the standardization model (fit on train, apply the same transform to test)
ss = StandardScaler()
x_train = ss.fit_transform(x_train)
x_test = ss.transform(x_test)
x_train
array([[-1.21502973, -0.70618437, -0.74137802, ..., -0.99142625,
        -0.60457767, -0.35589636],
       [-1.21502973, -0.70618437, -0.74137802, ..., -0.17542663,
        -0.60457767, -0.35589636],
       [ 1.97463977,  2.24345648,  2.26659182, ...,  2.27257224,
         2.3363521 , -0.35589636],
       ..., 
       [-0.15180657, -0.70618437, -0.74137802, ..., -0.58342644,
        -0.60457767, -0.35589636],
       [-0.50621429, -0.70618437, -0.74137802, ..., -0.99142625,
        -0.60457767, -0.35589636],
       [-1.21502973, -0.70618437, -0.74137802, ...,  0.64057299,
        -0.60457767, -0.35589636]])
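
As a quick sanity check, after standardization each training-set column should have mean ≈ 0 and standard deviation ≈ 1 (the test set only approximately, since it was transformed with the training set's statistics):

# Verify the transform: per-column mean ~0 and std ~1 on x_train
print(x_train.mean(axis=0))
print(x_train.std(axis=0))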

Building the models

Stochastic gradient descent

from sklearn.linear_model import SGDClassifier
# alpha is the regularization strength of the penalty term
sgd = SGDClassifier(alpha=0.001)
C:\Anaconda3\lib\site-packages\sklearn\linear_model\stochastic_gradient.py:84: FutureWarning: max_iter and tol parameters have been added in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  "and default tol will be 1e-3." % type(self), FutureWarning)
# Train, then report accuracy on the held-out test set
sgd.fit(x_train,y_train)
sgd.score(x_test,y_test)
0.97101449275362317
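
One subtlety: SGDClassifier's default loss is "hinge", so the model above is really a linear SVM trained by SGD, not logistic regression trained by SGD. For a closer apples-to-apples comparison with the next section, you can have SGD optimize the logistic loss instead; a sketch (in scikit-learn versions of this era the option is loss="log"; recent releases renamed it loss="log_loss"):

# SGD minimizing the logistic-regression loss; max_iter and tol are
# set explicitly, which also silences the FutureWarning shown above.
sgd_log = SGDClassifier(loss="log", alpha=0.001, max_iter=1000, tol=1e-3)
sgd_log.fit(x_train, y_train)
sgd_log.score(x_test, y_test)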

Logistic regression

from sklearn.linear_model import LogisticRegression
# Build the model
lgr = LogisticRegression()
# Train
lgr.fit(x_train,y_train)
# Check the model's accuracy on the test set
lgr.score(x_test,y_test)
0.95652173913043481
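
Accuracy alone can hide what matters most in a diagnosis task: how many malignant tumors are missed. A per-class report makes this visible; a minimal sketch using scikit-learn's built-in metrics:

from sklearn.metrics import classification_report
# Per-class precision/recall; recall on class 4 (malignant) tells you
# what fraction of malignant tumors the model actually catches.
y_pred = lgr.predict(x_test)
print(classification_report(y_test, y_pred,
                            target_names=["benign (2)", "malignant (4)"]))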

If this content helped you, please remember to like and follow!

More useful content is on the way…
