Stepwise regression selects, from the candidate explanatory variables X, those that have a significant effect on Y, so as to arrive at an optimal model.
In R this is done with the step() function.
First, load the data set (the classic Hald cement data):
cement<-data.frame(
X1=c( 7, 1, 11, 11, 7, 11, 3, 1, 2, 21, 1, 11, 10),
X2=c(26, 29, 56, 31, 52, 55, 71, 31, 54, 47, 40, 66, 68),
X3=c( 6, 15, 8, 8, 6, 9, 17, 22, 18, 4, 23, 9, 8),
X4=c(60, 52, 20, 47, 33, 22, 6, 44, 22, 26, 34, 12, 12),
Y =c(78.5, 74.3, 104.3, 87.6, 95.9, 109.2, 102.7, 72.5,
93.1,115.9, 83.8, 113.3, 109.4)
)
> lm.sol<-lm(Y ~ X1+X2+X3+X4, data=cement)
> summary(lm.sol)
Call:
lm(formula = Y ~ X1 + X2 + X3 + X4, data = cement)
Residuals:
Min 1Q Median 3Q Max
-3.1750 -1.6709 0.2508 1.3783 3.9254
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 62.4054 70.0710 0.891 0.3991
X1 1.5511 0.7448 2.083 0.0708 .
X2 0.5102 0.7238 0.705 0.5009
X3 0.1019 0.7547 0.135 0.8959
X4 -0.1441 0.7091 -0.203 0.8441
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.446 on 8 degrees of freedom
Multiple R-squared: 0.9824, Adjusted R-squared: 0.9736
F-statistic: 111.5 on 4 and 8 DF, p-value: 4.756e-07
Although the overall fit is excellent (R-squared = 0.9824, overall F-test p-value = 4.756e-07), none of the individual coefficients is significant at the 5% level, so the full model is not satisfactory.
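One reason for this (a quick check, not part of the original output) is that the four regressors are strongly correlated with one another, which inflates the standard errors of the individual coefficients:

cor(cement[, c("X1", "X2", "X3", "X4")])   # e.g. cor(X2, X4) is about -0.97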
We therefore use stepwise regression to screen the variables. The direction argument of step() can be "forward", "backward", or "both"; its default is "both", although when no scope is supplied (as in the call below) step() falls back to backward elimination.
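For reference, here is a minimal sketch of the three search modes; the intercept-only model lm.null is introduced only for this illustration:

lm.null <- lm(Y ~ 1, data = cement)                                  # intercept-only starting model
step(lm.null, scope = ~ X1 + X2 + X3 + X4, direction = "forward")    # forward selection
step(lm.sol, direction = "backward")                                 # backward elimination
step(lm.sol, scope = ~ X1 + X2 + X3 + X4, direction = "both")        # stepwise search in both directions

In this example we simply call step() on the full model: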
lm.step<-step(lm.sol)
Start: AIC=26.94
Y ~ X1 + X2 + X3 + X4
Df Sum of Sq RSS AIC
- X3 1 0.1091 47.973 24.974
- X4 1 0.2470 48.111 25.011
- X2 1 2.9725 50.836 25.728
- X1 1 25.9509 73.815 30.576
Step: AIC=24.97
Y ~ X1 + X2 + X4
Df Sum of Sq RSS AIC
- X4 1 9.93 57.90 25.420
- X2 1 26.79 74.76 28.742
- X1 1 820.91 868.88 60.629
> lm.step$anova
Step Df Deviance Resid. Df Resid. Dev AIC
1 NA NA 8 47.86364 26.94429
2 - X3 1 0.10909 9 47.97273 24.97388
Removing X3 lowers the AIC the most (from 26.94 to 24.97),
so step() drops X3 automatically and continues from the model Y ~ X1 + X2 + X4.
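The AIC values printed by step() can be reproduced directly with extractAIC(), which uses the same scaling as step() (a small verification sketch):

extractAIC(lm(Y ~ X1 + X2 + X3 + X4, data = cement))   # about c(5, 26.94)
extractAIC(lm(Y ~ X1 + X2 + X4, data = cement))        # about c(4, 24.97)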
summary(lm.step)
Call:
lm(formula = Y ~ X1 + X2 + X4, data = cement)
Residuals:
Min 1Q Median 3Q Max
-3.0919 -1.8016 0.2562 1.2818 3.8982
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 71.6483 14.1424 5.066 0.000675 ***
X1 1.4519 0.1170 12.410 5.78e-07 ***
X2 0.4161 0.1856 2.242 0.051687 .
X4 -0.2365 0.1733 -1.365 0.205395
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.309 on 9 degrees of freedom
Multiple R-squared: 0.9823, Adjusted R-squared: 0.9764
F-statistic: 166.8 on 3 and 9 DF, p-value: 3.323e-08
In this model X2 is only marginally significant and X4 is clearly not significant.
We can use the add1() and drop1() functions to examine adding or dropping single terms; here drop1() is applied to the current model:
> drop1(lm.step)
Single term deletions
Model:
Y ~ X1 + X2 + X4
Df Sum of Sq RSS AIC
X1 1 820.91 868.88 60.629
X2 1 26.79 74.76 28.742
X4 1 9.93 57.90 25.420
Besides the AIC criterion, the residual sum of squares is also an important measure: dropping X4 increases the RSS by only 9.93 (the smallest increase among the three terms), while the AIC rises only slightly (from 24.97 to 25.42), so we remove X4 as well.
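As a cross-check (a sketch, output not shown here), drop1() and add1() can also report F-tests for each single-term change via the test argument:

drop1(lm.step, test = "F")                                               # F-test for dropping each term from Y ~ X1 + X2 + X4
add1(lm(Y ~ 1, data = cement), scope = ~ X1 + X2 + X3 + X4, test = "F")  # mirror image: F-test for adding each term to the intercept-only model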
We therefore refit the model without X4:
> lm.opt<-lm(Y ~ X1+X2, data=cement)
> summary(lm.opt)
Call:
lm(formula = Y ~ X1 + X2, data = cement)
Residuals:
Min 1Q Median 3Q Max
-2.893 -1.574 -1.302 1.363 4.048
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 52.57735 2.28617 23.00 5.46e-10 ***
X1 1.46831 0.12130 12.11 2.69e-07 ***
X2 0.66225 0.04585 14.44 5.03e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.406 on 10 degrees of freedom
Multiple R-squared: 0.9787, Adjusted R-squared: 0.9744
F-statistic: 229.5 on 2 and 10 DF, p-value: 4.407e-09
Now all coefficients are highly significant and the adjusted R-squared (0.9744) is essentially unchanged, so this is clearly a satisfactory model.
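Finally, the fitted model can be used for prediction; a short sketch with a hypothetical new observation:

new.obs <- data.frame(X1 = 10, X2 = 50)                        # hypothetical predictor values, for illustration only
predict(lm.opt, newdata = new.obs, interval = "prediction")    # point prediction with a 95% prediction interval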