class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='warn', max_iter=100, multi_class='warn', verbose=0, warm_start=False, n_jobs=None)
Logistic Regression (aka logit, MaxEnt) classifier.
In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the cross-entropy loss if the 'multi_class' option is set to 'multinomial'. (Currently the 'multinomial' option is supported only by the 'lbfgs', 'sag', 'saga' and 'newton-cg' solvers.)
This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).
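As a quick illustration of the sparse-input support described above, here is a minimal sketch, not part of the original documentation and using made-up toy data, that fits the estimator on a SciPy CSR matrix of 64-bit floats (the format noted as fastest):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

# Toy data (illustrative only), stored as float64 so no conversion or copy is needed.
X_dense = np.array([[0.0, 1.0],
                    [1.0, 0.0],
                    [1.0, 1.0],
                    [0.0, 0.0]], dtype=np.float64)
y = np.array([1, 1, 1, 0])

X_sparse = csr_matrix(X_dense)   # CSR matrix: accepted directly by fit and predict
clf = LogisticRegression(solver='liblinear')
clf.fit(X_sparse, y)
print(clf.predict(csr_matrix([[1.0, 1.0]])))
```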
The ‘newton-cg’, ‘sag’, and ‘lbfgs’ solvers support only L2 regularization with primal formulation. The ‘liblinear’ solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty.
Read more in the User Guide.
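Before the parameter reference, a minimal usage sketch may help; it uses the bundled iris dataset, and the specific parameter values (solver, max_iter, random_state) are chosen only for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Explicit solver/multi_class settings avoid the 0.20 warnings about changing defaults.
clf = LogisticRegression(solver='lbfgs', multi_class='multinomial',
                         max_iter=200, random_state=0)
clf.fit(X, y)

print(clf.predict(X[:2]))        # hard class labels
print(clf.predict_proba(X[:2]))  # per-class probabilities
print(clf.score(X, y))           # mean accuracy on the training data
```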
Parameters:

penalty : str, 'l1' or 'l2', default: 'l2'
    Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties.
    New in version 0.19: l1 penalty with SAGA solver (allowing 'multinomial' + L1).

dual : bool, default: False
    Dual or primal formulation. Dual formulation is only implemented for the l2 penalty with the liblinear solver. Prefer dual=False when n_samples > n_features.

tol : float, default: 1e-4
    Tolerance for stopping criteria.

C : float, default: 1.0
    Inverse of regularization strength; must be a positive float. As in support vector machines, smaller values specify stronger regularization.

fit_intercept : bool, default: True
    Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.

intercept_scaling : float, default: 1
    Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight.
    Note! the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.

class_weight : dict or 'balanced', default: None
    Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.
    The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
    Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. (A combined usage sketch follows this parameter list.)
    New in version 0.17: class_weight='balanced'.

random_state : int, RandomState instance or None, optional, default: None
    The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. Used when solver == 'sag' or 'liblinear'.

solver : str, {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default: 'liblinear'
    Algorithm to use in the optimization problem.
    Note that 'sag' and 'saga' fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
    New in version 0.17: Stochastic Average Gradient descent solver.
    New in version 0.19: SAGA solver.
    Changed in version 0.20: The default will change from 'liblinear' to 'lbfgs' in 0.22.

max_iter : int, default: 100
    Useful only for the newton-cg, sag and lbfgs solvers. Maximum number of iterations taken for the solvers to converge.

multi_class : str, {'ovr', 'multinomial', 'auto'}, default: 'ovr'
    If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'.
    New in version 0.18: Stochastic Average Gradient descent solver for the 'multinomial' case.
    Changed in version 0.20: The default will change from 'ovr' to 'auto' in 0.22.

verbose : int, default: 0
    For the liblinear and lbfgs solvers, set verbose to any positive number for verbosity.

warm_start : bool, default: False
    When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. Useless for the liblinear solver. See the Glossary.
    New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.

n_jobs : int or None, optional (default=None)
    Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the solver is set to 'liblinear' regardless of whether 'multi_class' is specified or not.
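The following sketch is an illustration of several parameters above, not taken from the original page: the data, C value and solver choice are assumptions. It combines an L1 penalty (which requires the 'liblinear' or 'saga' solver), class_weight='balanced' for an imbalanced toy problem, and a smaller C for stronger regularization:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy problem (hypothetical data, for illustration only).
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# 'balanced' reweights classes inversely to their frequency;
# smaller C means stronger regularization, so the L1 penalty zeroes out more coefficients.
clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5,
                         class_weight='balanced', random_state=0)
clf.fit(X, y)

print((clf.coef_ == 0).sum(), 'coefficients driven exactly to zero by the L1 penalty')
```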
Attributes:

coef_ : array, shape (1, n_features) or (n_classes, n_features)
    Coefficient of the features in the decision function. (Illustrated in the sketch after this list.)
    coef_ is of shape (1, n_features) when the given problem is binary. In particular, when multi_class='multinomial', coef_ corresponds to outcome 1 (True) and -coef_ corresponds to outcome 0 (False).

intercept_ : array, shape (1,) or (n_classes,)
    Intercept (a.k.a. bias) added to the decision function.
    If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the given problem is binary. In particular, when multi_class='multinomial', intercept_ corresponds to outcome 1 (True) and -intercept_ corresponds to outcome 0 (False).

n_iter_ : array, shape (n_classes,) or (1,)
    Actual number of iterations for all classes. If binary or multinomial, it returns only 1 element. For the liblinear solver, only the maximum number of iterations across all classes is given.
    Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter.
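A short sketch of inspecting the fitted attributes listed above; the iris data and parameter values are illustrative only:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(solver='lbfgs', multi_class='multinomial',
                         max_iter=200).fit(X, y)

print(clf.coef_.shape)       # (n_classes, n_features) -> (3, 4) for iris
print(clf.intercept_.shape)  # (n_classes,) -> (3,)
print(clf.n_iter_)           # iterations actually run; a single element here
```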