Python Machine Learning: Implementing a Decision Tree
2019-11-09 14:59:39, by 晒冷-
"""
Created on Sat Nov 9 10:42:38 2019
@author: asus
“”"
“”"
决策树
目的:
- 使用决策树模型
- 了解决策树模型的参数
- 初步了解调参数
要求:
基于乳腺癌数据集完成以下任务:
1.调整参数criterion,使用不同算法信息熵(entropy)和基尼不纯度算法(gini)
2.调整max_depth参数值,查看不同的精度
3.根据参数criterion和max_depth得出你初步的结论。
“”"
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
# stratify keeps the class ratio the same in the train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=42)
print('train dataset: {0}; test dataset: {1}'.format(X_train.shape, X_test.shape))

tree = DecisionTreeClassifier(random_state=0, criterion='gini', max_depth=5)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
print(tree)
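Beyond accuracy, it can be useful to see which features the fitted tree actually splits on. The original post does not do this; the sketch below inspects `feature_importances_` on the same depth-5 Gini tree:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=42)

tree = DecisionTreeClassifier(random_state=0, criterion='gini', max_depth=5)
tree.fit(X_train, y_train)

# Importances sum to 1; features never used in any split get 0.
importances = tree.feature_importances_
top = np.argsort(importances)[::-1][:5]
for i in top:
    print('{:<25s} {:.3f}'.format(cancer.feature_names[i], importances[i]))
```

With trees this often reveals that only a handful of the 30 features carry most of the predictive signal.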
def Tree_score(depth=3, criterion='entropy'):
    """
    Fit a tree with the given max_depth (default 3) and criterion
    (default 'entropy'); return (training accuracy, test accuracy).
    """
    tree = DecisionTreeClassifier(criterion=criterion, max_depth=depth)
    tree.fit(X_train, y_train)
    train_score = tree.score(X_train, y_train)
    test_score = tree.score(X_test, y_test)
    return (train_score, test_score)
# Accuracy vs. max_depth with the Gini criterion
depths = range(2, 25)
scores = [Tree_score(d, 'gini') for d in depths]
train_scores = [s[0] for s in scores]
test_scores = [s[1] for s in scores]

plt.figure(figsize=(6, 6), dpi=144)
plt.grid()
plt.xlabel("max_depth of decision tree")
plt.ylabel("score")
plt.title("'gini'")
plt.plot(depths, train_scores, '.g-', label='training score')
plt.plot(depths, test_scores, '.r--', label='testing score')
plt.legend()
scores = [Tree_score(d) for d in depths]
train_scores = [s[0] for s in scores]
test_scores = [s[1] for s in scores]
plt.figure(figsize = (6,6),dpi = 144)
plt.grid()
plt.xlabel(“max_depth of decision Tree”)
plt.ylabel(“score”)
plt.title("‘entropy’")
plt.plot(depths,train_scores,’.g-’,label = ‘training score’)
plt.plot(depths,test_scores,’.r–’,label = ‘testing score’)
plt.legend()
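The manual loop over depths can also be replaced by a grid search that tunes criterion and max_depth jointly with cross-validation. This is a sketch, not part of the original post; the cv=5 setting and the parameter grid are my own choices:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=42)

# Search both parameters at once; cross-validation on the training set
# avoids picking the depth that merely looks best on the test set.
param_grid = {'criterion': ['gini', 'entropy'],
              'max_depth': list(range(2, 11))}
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
grid.fit(X_train, y_train)

print('best params:', grid.best_params_)
print('test score : {:.3f}'.format(grid.score(X_test, y_test)))
```

An advantage of this approach is that the test set is touched only once, after the parameters are chosen.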
Result:
As the plots clearly show, the deeper the tree, the better it fits the training set, but its accuracy on the test set tends to drop. This is overfitting.
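Limiting max_depth is one way to fight this overfitting; scikit-learn also supports cost-complexity pruning via the ccp_alpha parameter. The sketch below (an extension of the post, with alpha values chosen for illustration) shows how increasing alpha shrinks the tree:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=42)

# A larger ccp_alpha prunes branches whose complexity cost outweighs
# their impurity reduction, trading training accuracy for simplicity.
results = {}
for alpha in [0.0, 0.01, 0.02]:
    t = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    t.fit(X_train, y_train)
    results[alpha] = (t.score(X_train, y_train), t.tree_.node_count)
    print('alpha={:.2f}  train={:.3f}  test={:.3f}  nodes={}'.format(
        alpha, t.score(X_train, y_train), t.score(X_test, y_test),
        t.tree_.node_count))
```

Pruning and max_depth address the same problem from different directions: max_depth caps the tree before training, while ccp_alpha cuts back branches of the fully grown tree.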
Reference: 《Python机器学习基础教程》 (Introduction to Machine Learning with Python)