1. Calling LogisticRegression from sklearn:
class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)
For multi-class problems:
(1) If multi_class='ovr', classification uses the one-vs-rest (OvR) scheme, i.e. one binary classifier per class;
(2) If multi_class='multinomial', the loss minimized is the cross-entropy loss over the whole probability distribution (the solvers that currently support 'multinomial' are 'lbfgs', 'sag', and 'newton-cg'); see the sketch after the note below.
Note: for the remaining parameters and usage, see http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression
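A minimal sketch of the two settings (the iris data and the variable names are only for illustration; the solver choices follow the signature above, since liblinear does not support 'multinomial'):
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)    # 3-class toy problem with 4 features
>>> ovr = LogisticRegression(multi_class='ovr', solver='liblinear')  # one binary fit per class
>>> mnl = LogisticRegression(multi_class='multinomial', solver='lbfgs', max_iter=200)  # single softmax fit
>>> ovr.fit(X, y)
>>> mnl.fit(X, y)
>>> ovr.predict_proba(X[:1])  # per-class probabilities; the two models differ slightly
>>> mnl.predict_proba(X[:1])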
Using LogisticRegression from sklearn:
>>> from sklearn.linear_model import LogisticRegression
>>> a = LogisticRegression()                 # default parameters
>>> a.fit(train_feature, label)              # train on the training features and their labels
>>> test['label'] = a.predict(test_feature)  # predict labels for the test features
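The snippet above uses placeholder names (train_feature, label, test_feature, and a pandas-style test table), so it is not runnable on its own. A self-contained sketch, assuming synthetic data in place of the author's, might look like this:
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # synthetic binary data
>>> train_feature, test_feature, label, test_label = train_test_split(X, y, random_state=0)
>>> a = LogisticRegression()               # default parameters
>>> a.fit(train_feature, label)            # train on the training features and labels
>>> test_pred = a.predict(test_feature)    # predicted labels for the test features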
2. Attributes
(1) coef_: coefficients of the features in the decision function; shape (1, n_features) for a binary problem and (n_classes, n_features) for a multi-class problem.
(2) intercept_: intercept (bias) added to the decision function; shape (1,) for a binary problem and (n_classes,) for a multi-class problem.
(3) n_iter_: array of shape (n_classes,) or (1,); the actual number of iterations for all classes. If binary or multinomial, it returns only one element. For the liblinear solver, only the maximum number of iterations across all classes is given.
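To make the shapes above concrete, here is a small sketch on a 3-class, 4-feature problem (iris; the example is illustrative, not from the original text):
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=200).fit(X, y)
>>> clf.coef_.shape        # (n_classes, n_features)
(3, 4)
>>> clf.intercept_.shape   # (n_classes,)
(3,)
>>> clf.n_iter_            # actual iteration count(s); value depends on solver and data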