Resampling methods involve repeatedly drawing samples from a training set and refitting a model of interest on each sample in order to obtain additional information about the fitted model (e.g., cross-validation, the bootstrap).
The training error rate often is quite different from the test error rate, and in particular the former can dramatically underestimate the latter.
Model Complexity Low: High bias, Low variance
Model Complexity High: Low bias, High variance
Prediction Error Estimates
The validation set approach: randomly split the observations into two halves; one part is used as the training set and the other as the validation set.
LOOCV involves splitting the set of observations into two parts. However, instead of creating two subsets of comparable size, a single observation $(x_1, y_1)$ is used for the validation set, and the remaining observations $\{(x_2, y_2), \ldots, (x_n, y_n)\}$ make up the training set.
For least-squares linear (or polynomial) regression, the LOOCV estimate can be computed from a single fit using the leverage values $h_i$:

$$CV_{(n)}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i-\hat{y}_i}{1-h_i}\right)^2$$
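A minimal numpy sketch of this shortcut, assuming a simple linear fit with an intercept (the data `x`, `y` and the helper name `loocv_shortcut` are illustrative, not from the notes):

```python
import numpy as np

def loocv_shortcut(x, y):
    """LOOCV estimate for least-squares regression via the leverage
    shortcut: one fit instead of n refits."""
    X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
    H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
    h = np.diag(H)                             # leverages h_i
    y_hat = H @ y                              # fitted values
    return np.mean(((y - y_hat) / (1 - h)) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 + 3 * x + rng.normal(0, 1, 100)
print(loocv_shortcut(x, y))
```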
This approach involves randomly dividing the set of observations into k groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method is fit on the remaining k − 1 folds. This procedure is repeated k times; each time, a different group of observations is treated as a validation set. This process results in k estimates of the test error. The k-fold CV estimate is computed by averaging these values. If k=n, then it is LOOCV.
$$CV_{(k)}=\frac{1}{k}\sum_{i=1}^{k}\mathrm{MSE}_i \quad\text{or}\quad CV_{(k)}=\frac{1}{k}\sum_{i=1}^{k}\mathrm{Err}_i$$

where $\mathrm{MSE}_i$ is the mean squared error on the $i$-th held-out fold (regression) and $\mathrm{Err}_i$ is the misclassification rate on that fold (classification).
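A short k-fold CV sketch for a simple linear regression, assuming k = 5 and numpy only (`kfold_cv` is a hypothetical helper name):

```python
import numpy as np

def kfold_cv(x, y, k=5, seed=0):
    """k-fold CV estimate of test MSE for simple linear regression."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)             # k groups of ~equal size
    mses = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        b1, b0 = np.polyfit(x[train], y[train], 1)  # fit on the other k-1 folds
        pred = b0 + b1 * x[fold]                    # predict the held-out fold
        mses.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(mses)                            # average the k estimates
```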
Typically, given the bias-variance trade-off, one performs k-fold cross-validation with k = 5 or k = 10, as these values have been shown empirically to yield test error rate estimates that suffer neither from excessively high bias nor from very high variance.
Obtain $B$ datasets, each with $n$ observations, by repeatedly sampling from the original data set $Z$ with replacement.
Each of these bootstrap data sets, denoted $Z^{*1},\ldots,Z^{*B}$, has the same size $n$ as the original data set, and yields a bootstrap estimate of $\alpha$, denoted $\hat{\alpha}^{*1},\ldots,\hat{\alpha}^{*B}$. Because sampling is with replacement, some observations may appear more than once and some not at all; on average, each bootstrap sample contains about 2/3 of the original observations.
$$SE_B(\hat{\theta})=\sqrt{\frac{1}{B-1}\sum_{r=1}^{B}\left(\hat{\theta}^{*r}-\bar{\theta}^{*}\right)^2}, \quad \text{where } \bar{\theta}^{*}=\frac{1}{B}\sum_{r=1}^{B}\hat{\theta}^{*r}$$
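A generic sketch implementing this formula; the statistic (here the sample median) is an arbitrary illustration:

```python
import numpy as np

def bootstrap_se(data, statistic, B=1000, seed=0):
    """SE_B: resample `data` with replacement B times, recompute the
    statistic each time, and apply the formula above to the replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    thetas = np.array([statistic(data[rng.integers(0, n, size=n)])
                       for _ in range(B)])
    return np.sqrt(np.sum((thetas - thetas.mean()) ** 2) / (B - 1))

# illustration: SE of the sample median of 200 standard-normal draws
data = np.random.default_rng(1).normal(size=200)
print(bootstrap_se(data, np.median))
```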
Percentile interval: $[L,U]=[\hat{\theta}^{*}_{\alpha/2},\ \hat{\theta}^{*}_{1-\alpha/2}]$
Normal-approximation interval: $[L,U]=\hat{\theta}\pm z_{1-\alpha/2}\times SE^{*}_{B}$
Pivotal (basic) interval: $[L,U]=[2\hat{\theta}-\hat{\theta}^{*}_{1-\alpha/2},\ 2\hat{\theta}-\hat{\theta}^{*}_{\alpha/2}]$
Key: the behavior of $\hat{\theta}^{*}-\hat{\theta}$ is approximately the same as the behavior of $\hat{\theta}-\theta$.
Therefore:
$$\begin{aligned}
0.95 &= P\left(\hat{\theta}^{*}_{\alpha/2}\le\hat{\theta}^{*}\le\hat{\theta}^{*}_{1-\alpha/2}\right) \\
&= P\left(\hat{\theta}^{*}_{\alpha/2}-\hat{\theta}\le\hat{\theta}^{*}-\hat{\theta}\le\hat{\theta}^{*}_{1-\alpha/2}-\hat{\theta}\right) \\
&\approx P\left(\hat{\theta}^{*}_{\alpha/2}-\hat{\theta}\le\hat{\theta}-\theta\le\hat{\theta}^{*}_{1-\alpha/2}-\hat{\theta}\right) \\
&= P\left(2\hat{\theta}-\hat{\theta}^{*}_{1-\alpha/2}\le\theta\le 2\hat{\theta}-\hat{\theta}^{*}_{\alpha/2}\right)
\end{aligned}$$

where the approximation uses the key fact above to replace $\hat{\theta}^{*}-\hat{\theta}$ with $\hat{\theta}-\theta$.
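The three intervals above, computed from bootstrap replicates; a sketch assuming the replicates `thetas` and the point estimate `theta_hat` are already available (e.g. from the S.E. sketch earlier):

```python
import numpy as np

def bootstrap_cis(theta_hat, thetas, alpha=0.05):
    """Percentile, normal-approximation, and pivotal intervals from
    bootstrap replicates `thetas` of the point estimate `theta_hat`."""
    lo, hi = np.quantile(thetas, [alpha / 2, 1 - alpha / 2])  # theta*_{a/2}, theta*_{1-a/2}
    se = thetas.std(ddof=1)                                   # SE_B(theta_hat)
    z = 1.96                                                  # z_{1-alpha/2} for alpha = 0.05
    return {
        "percentile": (lo, hi),
        "normal": (theta_hat - z * se, theta_hat + z * se),
        "pivotal": (2 * theta_hat - hi, 2 * theta_hat - lo),
    }
```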
Each bootstrap sample has significant overlap with the original data. This will cause the bootstrap to seriously underestimate the true prediction error.
If the data form a time series, we can't simply sample the observations with replacement. Instead, we can create blocks of consecutive observations and sample those blocks with replacement; pasting the sampled blocks together yields a bootstrap sample.
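A minimal moving-block bootstrap sketch; the block length is a tuning choice (20 here is arbitrary), and the series is assumed longer than one block:

```python
import numpy as np

def block_bootstrap(series, block_len=20, seed=0):
    """One block-bootstrap replicate: sample fixed-length blocks of
    consecutive observations with replacement, then concatenate."""
    rng = np.random.default_rng(seed)
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, n_blocks)  # block start indices
    sample = np.concatenate([series[s:s + block_len] for s in starts])
    return sample[:n]                                      # trim to original length
```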
$$Y_i=\beta_0+\beta_1 X_i+\epsilon_i,\quad i=1,\ldots,n$$
Find the S.E. and C.I. for $\beta_0$ and $\beta_1$.
Paired bootstrap: resample the pairs $(X_1, Y_1),\ldots,(X_n, Y_n)$ with replacement to obtain $B$ bootstrap samples.
For each bootstrap sample, fit the regression to obtain $(\hat{\beta}_0^{*1},\hat{\beta}_1^{*1}),\ldots,(\hat{\beta}_0^{*B},\hat{\beta}_1^{*B})$, then estimate the S.E. and C.I.
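A paired-bootstrap sketch for the simple linear model above (`paired_bootstrap` is an illustrative name; note `np.polyfit` returns the slope first):

```python
import numpy as np

def paired_bootstrap(x, y, B=1000, seed=0):
    """Paired bootstrap: resample (X_i, Y_i) pairs, refit, collect betas."""
    rng = np.random.default_rng(seed)
    n = len(x)
    betas = np.empty((B, 2))
    for b in range(B):
        i = rng.integers(0, n, size=n)      # resample indices with replacement
        b1, b0 = np.polyfit(x[i], y[i], 1)  # refit on the bootstrap pairs
        betas[b] = (b0, b1)
    return betas.std(axis=0, ddof=1)        # bootstrap S.E. of (beta0, beta1)
```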
Recall that the residuals mimic the role of $\epsilon$.
Residual bootstrap: resample the residuals $\hat{e}_i$ with replacement and obtain:
Generate a new bootstrap sample: $X_i^{*b}=X_i,\ Y_i^{*b}=\hat{\beta}_0+\hat{\beta}_1X_i+\hat{e}_i^{*b}$, where each $\hat{e}_i^{*b}$ is drawn with replacement from the fitted residuals.
For each bootstrap sample, fit regression and estimate S.E. and C.I.
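A residual-bootstrap sketch for the same model; centering the residuals before resampling is a common refinement not spelled out in the notes:

```python
import numpy as np

def residual_bootstrap(x, y, B=1000, seed=0):
    """Residual bootstrap: keep X fixed, resample residuals, rebuild Y."""
    rng = np.random.default_rng(seed)
    b1, b0 = np.polyfit(x, y, 1)
    e = y - (b0 + b1 * x)
    e = e - e.mean()                        # center the residuals (refinement)
    betas = np.empty((B, 2))
    for b in range(B):
        e_star = rng.choice(e, size=len(x), replace=True)
        y_star = b0 + b1 * x + e_star       # Y_i^{*b} = beta0 + beta1 X_i + e_i^{*b}
        s1, s0 = np.polyfit(x, y_star, 1)
        betas[b] = (s0, s1)
    return betas.std(axis=0, ddof=1)        # bootstrap S.E. of (beta0, beta1)
```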
When the variance of the error, $Var(\epsilon_i \mid X_i)$, depends on the value of $X_i$ (so-called heteroskedasticity), the residual bootstrap is unstable because it shuffles residuals across observations regardless of the value of $X$. The wild bootstrap instead perturbs each observation's own residual only.
Generate IID random variables $V_1^b,\ldots,V_n^b \sim N(0,1)$.
Generate a new bootstrap sample: $X_i^{*b}=X_i,\ Y_i^{*b}=\hat{\beta}_0+\hat{\beta}_1X_i+V_i^b\hat{e}_i$
For each bootstrap sample, fit regression and estimate S.E. and C.I.
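A wild-bootstrap sketch along the same lines, with $V_i^b \sim N(0,1)$ as above:

```python
import numpy as np

def wild_bootstrap(x, y, B=1000, seed=0):
    """Wild bootstrap: each observation keeps its own residual,
    multiplied by an independent N(0,1) draw."""
    rng = np.random.default_rng(seed)
    b1, b0 = np.polyfit(x, y, 1)
    e = y - (b0 + b1 * x)                   # fitted residuals
    betas = np.empty((B, 2))
    for b in range(B):
        v = rng.normal(0, 1, len(x))        # V_i^b ~ N(0,1), IID
        y_star = b0 + b1 * x + v * e        # perturb each residual in place
        s1, s0 = np.polyfit(x, y_star, 1)
        betas[b] = (s0, s1)
    return betas.std(axis=0, ddof=1)        # bootstrap S.E. of (beta0, beta1)
```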