The logistic regression function lives in the linear_model submodule of the sklearn library. Its interface is
LogisticRegression(penalty = 'l2',dual = False,tol = 0.0001,C = 1.0,fit_intercept = True,intercept_scaling = 1,class_weight = None,random_state = None,solver = 'liblinear',max_iter = 100,multi_class = 'ovr',verbose = 0,warm_start = False,n_jobs = 1)
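As a self-contained sketch of this interface (using sklearn's bundled iris data rather than the CSV files read below; variable names here are illustrative only), fitting and scoring could look like:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Bundled iris data: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Note: the penalty is the letter-l string 'l2', and the solver
# name is lower-case 'liblinear'
clf = LogisticRegression(penalty='l2', C=1.0, solver='liblinear')
clf.fit(X_train, y_train)
print('test accuracy: {0:.2f}%'.format(100 * clf.score(X_test, y_test)))
```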
Read the training and test data:
fs0='dat/iris_'
print('\n1# init,fs0,',fs0)
x_train=pd.read_csv(fs0+'xtrain.csv',index_col=False)
y_train=pd.read_csv(fs0+'ytrain.csv',index_col=False)
x_test=pd.read_csv(fs0+'xtest.csv',index_col=False)
y_test=pd.read_csv(fs0+'ytest.csv',index_col=False)
Build the logistic regression model:
print('\n2# fit model')
mx =zai.mx_log(x_train.values,y_train.values)
The logistic regression helper mx_log():
def mx_log(train_x,train_y):
    mx = LogisticRegression(penalty = 'l2')
    mx.fit(train_x,train_y)
    return mx
Prediction:
print('\n3# predict')
y_pred = mx.predict(x_test.values)
df9=x_test.copy()
df9['y_predsr']=y_pred
df9['y_test']=y_test
df9['y_pred']=df9['y_predsr'].round().astype(int)
Save the results and display them:
df9.to_csv('tmp/iris_9.csv',index=False)
print('\n4# df9')
print(df9.tail())
The output:
4# df9
x1 x2 x3 x4 y_predsr y_test y_pred
33 6.4 2.8 5.6 2.1 1 1 1
34 5.8 2.8 5.1 2.4 1 1 1
35 5.3 3.7 1.5 0.2 2 2 2
36 5.5 2.3 4.0 1.3 3 3 3
37 5.2 3.4 1.4 0.2 2 2 2
Evaluate the test results:
#5
dacc=zai.ai_acc_xed(df9,1,False)
print('\n5# mx:mx_sum,kok:{0:.2f}%'.format(dacc))
Output:
5# mx:mx_sum,kok:84.21%
At 84.21%, logistic regression is markedly more accurate than linear regression's 44.74% on the same split.
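The zai.ai_acc_xed() helper is project-specific and not shown here; assuming df9 carries y_test and y_pred columns as above, an equivalent accuracy check with plain sklearn could be sketched as:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical table with the same layout as the df9 tail printed above
df9 = pd.DataFrame({'y_test': [1, 1, 2, 3, 2],
                    'y_pred': [1, 1, 2, 3, 2]})

acc = 100 * accuracy_score(df9['y_test'], df9['y_pred'])
print('kok:{0:.2f}%'.format(acc))  # 100.00% on this toy table
```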
Bayesian classification is the collective name for a family of classification algorithms, all based on Bayes' theorem. These algorithms share a simple assumption: given the target value, the attributes are conditionally independent of one another. Here we use multinomial naive Bayes; the function MultinomialNB lives in the naive_bayes module, and its interface is
MultinomialNB(alpha = 1.0,fit_prior = True,class_prior = None)
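MultinomialNB expects non-negative, ideally count-like features, which is one reason it underperforms on the continuous iris measurements. A tiny sketch on made-up word counts (all numbers here are illustrative only):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy document-term counts: class 0 favours term 0, class 1 favours term 1
X = np.array([[3, 0, 1],
              [2, 0, 2],
              [0, 4, 1],
              [1, 3, 0]])
y = np.array([0, 0, 1, 1])

clf = MultinomialNB(alpha=1.0)  # alpha is the Laplace smoothing strength
clf.fit(X, y)
print(clf.predict(np.array([[2, 1, 1], [0, 3, 2]])))  # [0 1]
```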
Build the model and predict:
print('\n2# fit model')
mx =zai.mx_bayes(x_train.values,y_train.values)
#3
print('\n3# predict')
y_pred = mx.predict(x_test.values)
df9=x_test.copy()
df9['y_predsr']=y_pred
df9['y_test']=y_test
df9['y_pred']=df9['y_predsr'].round().astype(int)
The mx_bayes() helper:
def mx_bayes(train_x,train_y):
    mx = MultinomialNB(alpha = 0.01)
    mx.fit(train_x,train_y)
    return mx
Save the results and display them:
#4
df9.to_csv('tmp/iris_9.csv',index=False)
print('\n4# df9')
print(df9.tail())
Output:
4# df9
x1 x2 x3 x4 y_predsr y_test y_pred
33 6.4 2.8 5.6 2.1 1 1 1
34 5.8 2.8 5.1 2.4 1 1 1
35 5.3 3.7 1.5 0.2 2 2 2
36 5.5 2.3 4.0 1.3 1 3 1
37 5.2 3.4 1.4 0.2 2 2 2
Evaluate the test results:
#5
dacc=zai.ai_acc_xed(df9,1,False)
print('\n5# mx:mx_sum,kok:{0:.2f}%'.format(dacc))
Output:
5# mx:mx_sum,kok:57.89%
Naive Bayes's 57.89% beats linear regression's 44.74% but falls well short of logistic regression's 84.21%.
KNN, the K-nearest-neighbours classification algorithm, classifies each sample by the K training samples closest to it: the sample is assigned the class most common among those K neighbours. The algorithm lives in the neighbors module; the function name is KNeighborsClassifier, and its interface is
KNeighborsClassifier(n_neighbors = 5,weights = 'uniform',algorithm = 'auto',leaf_size = 30,p = 2,metric = 'minkowski',metric_params = None,n_jobs = 1,**kwargs)
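A minimal sketch of the majority-vote behaviour on made-up 1-D data (with n_neighbors=3, each query point takes the most common class among its 3 nearest training points):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Six 1-D training points: class 0 below x=3, class 1 above
X = np.array([[0.0], [1.0], [2.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = KNeighborsClassifier(n_neighbors=3, weights='uniform')
clf.fit(X, y)
print(clf.predict([[1.5], [4.5]]))  # [0 1]
```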
Build the model and predict:
print('\n2# fit model')
mx =zai.mx_knn(x_train.values,y_train.values)
#3
print('\n3# predict')
y_pred = mx.predict(x_test.values)
df9=x_test.copy()
df9['y_predsr']=y_pred
df9['y_test']=y_test
df9['y_pred']=df9['y_predsr'].round().astype(int)
The mx_knn() helper:
def mx_knn(train_x,train_y):
    mx = KNeighborsClassifier()
    mx.fit(train_x,train_y)
    return mx
Save the results and display them:
#4
df9.to_csv('tmp/iris_9.csv',index=False)
print('\n4# df9')
print(df9.tail())
Output:
4# df9
x1 x2 x3 x4 y_predsr y_test y_pred
33 6.4 2.8 5.6 2.1 1 1 1
34 5.8 2.8 5.1 2.4 1 1 1
35 5.3 3.7 1.5 0.2 2 2 2
36 5.5 2.3 4.0 1.3 3 3 3
37 5.2 3.4 1.4 0.2 2 2 2
Evaluate the test results:
#5
dacc=zai.ai_acc_xed(df9,1,False)
print('\n5# mx:mx_sum,kok:{0:.2f}%'.format(dacc))
Output:
5# mx:mx_sum,kok:100.00%
KNN reaches a striking 100% here, but the result comes from the small iris dataset (only 38 test samples), so it should not be over-interpreted.
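One way to sanity-check a perfect score on such a small split is k-fold cross-validation; a sketch using sklearn's bundled iris data (not the CSV split above):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Five folds: fit on 120 samples, score on the held-out 30, five times over
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print('5-fold accuracy: {0:.2f}% +/- {1:.2f}'.format(
    100 * scores.mean(), 100 * scores.std()))
```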
A random forest trains multiple decision trees on the samples and combines them for prediction: the output class is the mode of the classes predicted by the individual trees. The algorithm lives in the ensemble module; the function name is RandomForestClassifier, and its interface is
RandomForestClassifier(n_estimators = 10,criterion = 'gini',max_depth = None,min_samples_split = 2,min_samples_leaf = 1,min_weight_fraction_leaf = 0.0,max_features = 'auto',max_leaf_nodes = None,min_impurity_split = 1e-07,bootstrap = True,oob_score = False,n_jobs = 1,random_state = None,verbose = 0,warm_start = False,class_weight = None)
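A self-contained sketch with n_estimators=8 (matching the mx_forest() helper below) on sklearn's bundled iris data; random_state is fixed here only to make the run repeatable:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# An ensemble of 8 trees; prediction is the majority vote across trees
clf = RandomForestClassifier(n_estimators=8, random_state=0)
clf.fit(X_train, y_train)
print('test accuracy: {0:.2f}%'.format(100 * clf.score(X_test, y_test)))
print('feature importances:', clf.feature_importances_)  # sum to 1
```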
Build the model and predict:
#2
print('\n2# fit model')
mx =zai.mx_forest(x_train.values,y_train.values)
#3
print('\n3# predict')
y_pred = mx.predict(x_test.values)
df9=x_test.copy()
df9['y_predsr']=y_pred
df9['y_test']=y_test
df9['y_pred']=df9['y_predsr'].round().astype(int)
The mx_forest() helper:
def mx_forest(train_x,train_y):
    mx = RandomForestClassifier(n_estimators = 8)
    mx.fit(train_x,train_y)
    return mx
Save and display:
#4
df9.to_csv('tmp/iris_9.csv',index=False)
print('\n4# df9')
print(df9.tail())
Output:
4# df9
x1 x2 x3 x4 y_predsr y_test y_pred
33 6.4 2.8 5.6 2.1 1 1 1
34 5.8 2.8 5.1 2.4 1 1 1
35 5.3 3.7 1.5 0.2 2 2 2
36 5.5 2.3 4.0 1.3 3 3 3
37 5.2 3.4 1.4 0.2 2 2 2
Evaluate the test results:
#5
dacc=zai.ai_acc_xed(df9,1,False)
print('\n5# mx:mx_sum,kok:{0:.2f}%'.format(dacc))
Output:
5# mx:mx_sum,kok:97.37%
The random forest reaches 97.37% on this split, second only to KNN.
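To close, the four classifiers can be compared side by side in one loop; this is a sketch on sklearn's bundled iris data, so the exact percentages will differ from the CSV splits used above:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Same hyper-parameters as the mx_* helpers above
models = {
    'log': LogisticRegression(penalty='l2', solver='liblinear'),
    'bayes': MultinomialNB(alpha=0.01),
    'knn': KNeighborsClassifier(),
    'forest': RandomForestClassifier(n_estimators=8, random_state=0),
}
results = {}
for name, mx in models.items():
    mx.fit(X_train, y_train)
    results[name] = 100 * mx.score(X_test, y_test)
    print('{0}: kok:{1:.2f}%'.format(name, results[name]))
```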