Linear Regression: Elastic Net

import numpy as np
from sklearn.linear_model import ElasticNet

# x_train, y_train, x_cv, y_cv are assumed to come from an earlier train/validation split
ENreg = ElasticNet(alpha=1, l1_ratio=0.5)  # the normalize= parameter was removed in scikit-learn 1.2

ENreg.fit(x_train, y_train)

pred_cv = ENreg.predict(x_cv)

# calculating mse
mse = np.mean((pred_cv - y_cv)**2)

mse  # 1773750.73

ENreg.score(x_cv, y_cv)  # 0.4504
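The post's snippet depends on x_train/y_train from an earlier, unshown split, so the numbers above cannot be reproduced directly. A self-contained sketch of the same workflow, using synthetic data from make_regression (an assumption; the original dataset is not shown):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the post's (unshown) dataset
X, y = make_regression(n_samples=300, n_features=10, noise=20.0, random_state=42)
x_train, x_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=42)

ENreg = ElasticNet(alpha=1, l1_ratio=0.5)
ENreg.fit(x_train, y_train)
pred_cv = ENreg.predict(x_cv)

mse = np.mean((pred_cv - y_cv) ** 2)  # mean squared error on the hold-out set
r2 = ENreg.score(x_cv, y_cv)          # R^2 on the hold-out set
```

The exact mse and R² values will differ from the post's, since the data differs.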

The R² value here is much smaller than both lasso's and ridge's.

Elastic net regression generally works well when we have a big dataset.


Elastic net uses both an L1 penalty term and an L2 penalty term.

The "net" in elastic net works like a fishing net: fish swim together in schools, and one cast of the net catches a whole group of related fish. In other words, variables that are correlated with one another are placed into a group. Now if any one variable in that group is a strong predictor (meaning it has a strong relationship with the dependent variable), then we include the entire group in the model building, because omitting the other variables (as lasso does) might lose some information in terms of interpretability, leading to poor model performance.
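This "grouping effect" can be demonstrated with a small sketch (the data here is synthetic and the variable names are my own): two nearly identical predictors carry the same signal, lasso tends to keep one and drop the other, while elastic net spreads the weight across the group.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)
# Features 0 and 1 are nearly duplicates (a "group" of correlated fish);
# feature 2 is unrelated noise.
X = np.column_stack([z, z + 1e-6 * rng.normal(size=n), rng.normal(size=n)])
y = 3 * z + rng.normal(scale=0.1, size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print("lasso coefs:", lasso.coef_)  # weight concentrated on one of the duplicates
print("enet  coefs:", enet.coef_)   # weight shared between the duplicates
```

The L2 part of the elastic net penalty is what forces near-equal weights on near-identical features; pure L1 has no such preference.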

alpha = a + b           and     l1_ratio = a / (a + b)

where a and b are the coefficients of the L1 and L2 terms, respectively:

penalty = a * (L1 term) + b * (L2 term)


Fix alpha (i.e. a + b) = 1:

1. If l1_ratio = 1, then a / 1 = 1, so a = 1 and b = 0: this is a pure lasso penalty.

2. If l1_ratio = 0, then a = 0 and b = 1: this is a pure ridge penalty.

3. If l1_ratio is between 0 and 1, the penalty is a combination of ridge and lasso.
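The two extreme cases can be checked directly in scikit-learn. In sklearn, ElasticNet with l1_ratio=1 uses exactly Lasso's objective, so the coefficients match; for l1_ratio=0 the objective matches Ridge only after rescaling alpha by the number of samples, because ElasticNet averages the squared loss over n samples while Ridge does not (the rescaling here follows the documented objectives; it's worth noting sklearn itself warns against l1_ratio=0 and recommends Ridge instead).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

# Case 1: l1_ratio = 1  ->  pure L1 penalty, identical to Lasso with the same alpha
en_l1 = ElasticNet(alpha=1.0, l1_ratio=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)
print(np.allclose(en_l1.coef_, lasso.coef_))

# Case 2: l1_ratio = 0  ->  pure L2 penalty; matches Ridge once alpha is
# multiplied by n_samples to account for the different loss scaling
en_l2 = ElasticNet(alpha=1.0, l1_ratio=0.0).fit(X, y)
ridge = Ridge(alpha=1.0 * len(y)).fit(X, y)
print(np.allclose(en_l2.coef_, ridge.coef_, atol=1e-1))
```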


