A simple explanation of the Lasso and Least Angle Regression

http://statweb.stanford.edu/~tibs/lasso/simple.html

http://statweb.stanford.edu/~tibs/lasso/

lasso (least absolute shrinkage and selection operator; also Lasso or LASSO)

Inputs: x1, x2, ..., xp

Output: y

The lasso fits the following linear model:

yhat = b0 + b1*x1 + b2*x2 + ... + bp*xp

while minimizing sum( (y - yhat)^2 ) subject to sum |bj| <= s.

"s" is a tuning parameter: if s is set very large, the constraint has essentially no effect and the solution is just ordinary least squares regression.

Computation

The lasso problem can be solved by numerical methods, but the least angle regression (LARS) procedure is better. Least angle regression can be thought of as a more "democratic" version of forward stepwise regression.

Forward stepwise regression

· Start with all coefficients bj equal to 0.

· Find the predictor xj most correlated with y and add it to the model; the residual is r = y - yhat.

· Continue: at each stage, add to the model the predictor most correlated with the residual r.

· Continue until all predictors are in the model (a minimal code sketch follows this list).
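For concreteness, here is a minimal NumPy sketch of the forward stepwise procedure described above. It assumes the columns of X are standardized and y is centered, so X.T @ r is proportional to the correlations with the residual; the function name and details are illustrative, not part of the original article.

```python
# Illustrative forward stepwise regression: add one predictor per step,
# choosing the one most correlated with the current residual.
import numpy as np

def forward_stepwise(X, y):
    n, p = X.shape
    active = []                        # indices of predictors already in the model
    b = np.zeros(p)                    # start with all coefficients at 0
    r = y.copy()                       # so the initial residual is just y
    for _ in range(p):
        corr = X.T @ r                 # proportional to correlation with the residual
        corr[active] = 0.0             # ignore predictors already in the model
        j = int(np.argmax(np.abs(corr)))
        active.append(j)
        # Refit least squares on the active set, then update the residual.
        b_active, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        b = np.zeros(p)
        b[active] = b_active
        r = y - X @ b
    return active, b
```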

The least angle regression procedure works much like the forward stepwise procedure above, except that it doesn't add a predictor fully into the model. The coefficient of that predictor is increased only until that predictor is no longer the one most correlated with the residual r. Then some other competing predictor is invited to "join the club".

The least angle regression algorithm

· Start with all coefficients bj equal to 0.

· Find the predictor xj most correlated with y.

· Increase the coefficient bj in the direction of the sign of its correlation with y, computing the residual r = y - yhat along the way, until some other predictor xk has as much correlation with r as xj does.

· Increase (bj, bk) in their joint least squares direction, until some other predictor xm has as much correlation with the residual r.

· Continue until all predictors are in the model (a usage sketch follows this list).
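To see the LARS coefficient path in practice, here is a small sketch using scikit-learn's lars_path with method='lar'; the data are synthetic and the names are made up for the example.

```python
# Illustrative sketch of the least angle regression path with scikit-learn.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.0, 1.5, 0.0]) + rng.normal(scale=0.5, size=100)

# method='lar' traces the LARS path: `active` lists the predictors in the
# order they join the model, and coefs[:, k] is the fit after step k.
alphas, active, coefs = lars_path(X, y, method='lar')
print("join order:", active)
print(np.round(coefs.T, 2))
```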

Surprisingly it can be shown that, with one modification, this procedure gives the entire path of lasso solutions, as s is varied from 0 to infinity. The modification needed is: if a non-zero coefficient hits zero, remove it from the active set of predictors and recompute the joint direction.
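Assuming scikit-learn again, the same call with method='lasso' applies exactly this modification (a coefficient is dropped from the active set when it hits zero) and returns the lasso path. A point on that path should closely agree with a direct Lasso fit at the matching alpha, up to solver tolerance; this is a sketch with made-up data, not the article's own code.

```python
# Illustrative sketch: lars_path with method='lasso' gives the lasso path.
import numpy as np
from sklearn.linear_model import Lasso, lars_path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.0, 1.5, 0.0]) + rng.normal(scale=0.5, size=100)

alphas, active, coefs = lars_path(X, y, method='lasso')

# Compare one point on the path with a direct lasso fit at the same alpha
# (no intercept here, matching lars_path); the two should closely agree.
k = len(alphas) // 2
direct = Lasso(alpha=alphas[k], fit_intercept=False).fit(X, y).coef_
print(np.round(coefs[:, k], 3))
print(np.round(direct, 3))
```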
