sklearn-loss function

I think the core of every model is its loss function: a different loss function means a different model, and models sharing the same loss function are largely the same, perhaps differing only in that one does classification and the other regression (e.g. SVC vs. SVR), with largely similar parameters as well.
Overall, a loss function has two parts: the loss term, i.e. the discrepancy between predicted and actual values, and the regularization term — roughly J(w) = ∑_i L(y_i, f(x_i)) + λR(w).


loss function

Hinge Loss - SVM
The hinge loss corresponds to SVM.



That is, the loss is 0 when a sample is correctly classified with margin m(w) = y·f(x) at least 1, and 1 − m(w) otherwise:

L(m(w)) = max(0, 1 − m(w))

The SVM loss function (hinge loss plus L2 regularization):

min_w (1/2)||w||^2 + C ∑_i max(0, 1 − y_i(w·x_i + b))
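As a quick sketch (assuming NumPy and scikit-learn are available; the helper name hinge_loss is ours), the hinge loss can be computed by hand and a linear SVM fit via SGDClassifier(loss='hinge'):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

def hinge_loss(y, scores):
    """Hinge loss: 0 when the margin y*f(x) >= 1, else 1 - y*f(x)."""
    return np.maximum(0, 1 - y * scores)

X, y01 = make_classification(n_samples=200, random_state=0)
y = 2 * y01 - 1  # hinge loss expects labels in {-1, +1}

# SGDClassifier with loss='hinge' is a linear SVM trained by SGD
clf = SGDClassifier(loss="hinge", random_state=0).fit(X, y01)
scores = clf.decision_function(X)
print("mean hinge loss:", hinge_loss(y, scores).mean())
```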
Squared Loss - Regression
Assuming the data is Gaussian-distributed, the best-fit line is the one that minimizes the sum of squared distances from the points to the regression line, i.e. the least-squares line.


L(Y, f(X)) = (Y − f(X))^2
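A minimal sketch of the squared loss against an ordinary least-squares fit (the helper name squared_loss is ours):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def squared_loss(y, y_pred):
    """Squared loss per sample: (Y - f(X))^2."""
    return (y - y_pred) ** 2

rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = 3 * X.ravel() + rng.normal(scale=0.1, size=100)  # line plus Gaussian noise

reg = LinearRegression().fit(X, y)
print("mean squared loss:", squared_loss(y, reg.predict(X)).mean())
```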

Log Loss - LR
Assuming the data follows a Bernoulli (0-1) distribution, we use the observed samples to find the parameter values most likely (i.e. with maximum probability) to have produced that distribution; in other words, the parameters that maximize the probability of observing exactly this data set (maximum likelihood).
L(Y, P(Y|X)) = −log P(Y|X)
The LR loss function:


J(θ) = −(1/m) ∑_i [y_i log h_θ(x_i) + (1 − y_i) log(1 − h_θ(x_i))]
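A minimal sketch computing the log loss of a fitted LogisticRegression by hand (the helper name log_loss_per_sample is ours):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def log_loss_per_sample(y, p):
    """Log loss for labels in {0, 1}: -[y*log(p) + (1-y)*log(1-p)]."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)
p = clf.predict_proba(X)[:, 1]  # estimated P(y=1 | x)
print("mean log loss:", log_loss_per_sample(y, p).mean())
```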

Exponential Loss - AdaBoost
The standard form is L(Y, f(X)) = exp[−Y f(X)].
AdaBoost's loss function over the training set: L = ∑_i exp[−y_i f(x_i)]


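A sketch of the exponential loss evaluated on AdaBoost's decision scores (labels mapped to {-1, +1}; the helper name exponential_loss is ours):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def exponential_loss(y, scores):
    """Exponential loss: exp(-Y * f(X)) for labels Y in {-1, +1}."""
    return np.exp(-y * scores)

X, y01 = make_classification(n_samples=200, random_state=0)
y = 2 * y01 - 1

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y01)
scores = clf.decision_function(X)  # signed ensemble score f(x)
print("mean exponential loss:", exponential_loss(y, scores).mean())
```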
Comparison of several loss function curves:
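The original figure is not preserved; a matplotlib sketch along these lines (our choice of margin range, and of normalizing the log loss to pass through 1 at margin 0) reproduces the usual comparison:

```python
import numpy as np
import matplotlib.pyplot as plt

# All classification losses below are functions of the margin m = y * f(x)
m = np.linspace(-2, 2, 400)

plt.plot(m, (m < 0).astype(float), label="0-1 loss")
plt.plot(m, np.maximum(0, 1 - m), label="hinge (SVM)")
plt.plot(m, np.log2(1 + np.exp(-m)), label="log loss (LR)")
plt.plot(m, np.exp(-m), label="exponential (AdaBoost)")
plt.plot(m, (1 - m) ** 2, label="squared loss")
plt.xlabel("margin y * f(x)")
plt.ylabel("loss")
plt.ylim(0, 4)
plt.legend()
plt.show()
```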

The SGDClassifier documentation describes the loss function options as follows:
The possible options are ‘hinge’, ‘log’, ‘modified_huber’, ‘squared_hinge’, ‘perceptron’, or a regression loss: ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’.

The ‘log’ loss gives logistic regression, a probabilistic classifier. ‘modified_huber’ is another smooth loss that brings tolerance to outliers as well as probability estimates. ‘squared_hinge’ is like hinge but is quadratically penalized. ‘perceptron’ is the linear loss used by the perceptron algorithm. The other losses are designed for regression but can be useful in classification as well; see SGDRegressor for a description.
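To see that swapping the loss parameter really swaps the model, a sketch that cross-validates the same SGDClassifier under different losses (note that 'log' is spelled 'log_loss' in scikit-learn >= 1.1):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# Same estimator, different loss -> SVM, logistic regression, perceptron, ...
# 'log_loss' here is spelled 'log' in scikit-learn < 1.1
for loss in ["hinge", "log_loss", "modified_huber", "squared_hinge", "perceptron"]:
    clf = SGDClassifier(loss=loss, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{loss:>15}: mean CV accuracy = {score:.3f}")
```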

References:
https://blog.csdn.net/u010976453/article/details/78488279
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
