Study Notes: The Elements of Statistical Learning (Chapter 2)

2.1 Introduction

The main goal of supervised learning is to predict the outputs (responses) from the inputs (predictors).

2.2 Variable Types and Terminology

Overall, there are mainly two kinds of outputs: quantitative outputs (denoted $Y$) and qualitative outputs (denoted $G$).

For quantitative outputs, some measurements are bigger than others, and measurements that are close in value are close in nature. The methods used to predict quantitative outputs are called regression.

For qualitative outputs (also called categorical or discrete variables), the values lie in a finite set and there is no explicit ordering among them. To code them, we can use dummy variables or other schemes. The methods used to predict qualitative outputs are called classification.

There are also other kinds of outputs, such as ordered categorical outputs.

Given an input vector $X = (X_1, \ldots, X_p)$, whose observed values form an $N \times p$ matrix $\mathbf{X}$, our task is to predict the quantitative output $Y$, or the qualitative output $G$. We have a set of training data $(x_i, y_i)$ or $(x_i, g_i)$, $i = 1, \ldots, N$, from which to build our prediction rule.

2.3 Two Simple Approaches to Prediction: Least Squares and Nearest Neighbors

2.3.1 Linear Models and Least Squares

Given the input vector $X^T = (X_1, X_2, \ldots, X_p)$, we predict $Y$ via the model
$$\hat{Y} = \hat\beta_0 + \sum_{j=1}^{p} X_j \hat\beta_j = X^T\hat\beta,$$
where the constant variable 1 is included in $X$ so that the intercept $\hat\beta_0$ is absorbed into the coefficient vector $\hat\beta$. The function $f(X) = X^T\beta$ is a linear function of the inputs.

To fit this model, we minimize the residual sum of squares
$$\mathrm{RSS}(\beta) = \sum_{i=1}^{N}(y_i - x_i^T\beta)^2 = (\mathbf{y} - \mathbf{X}\beta)^T(\mathbf{y} - \mathbf{X}\beta).$$

Setting the derivative with respect to $\beta$ to zero gives the normal equations $\mathbf{X}^T(\mathbf{y} - \mathbf{X}\beta) = 0$ and, if $\mathbf{X}^T\mathbf{X}$ is nonsingular, the unique solution
$$\hat\beta = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}.$$
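As a quick illustration, here is a minimal NumPy sketch (not from the book) that fits the least squares coefficients via the normal equations; the data are simulated and all variable names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: N observations, p predictors.
N, p = 100, 3
X = rng.normal(size=(N, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=N)

# Prepend a column of ones so the intercept is absorbed into beta.
Xb = np.column_stack([np.ones(N), X])

# Solve the normal equations X^T X beta = X^T y (lstsq is numerically
# safer than forming the inverse explicitly).
beta_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict at a new input x0 via x0^T beta_hat.
x0 = np.array([1.0, 0.2, -0.1, 0.4])  # leading 1 for the intercept
y_hat = x0 @ beta_hat
print(beta_hat, y_hat)
```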

2.3.2 Nearest Neighbor Methods

We find the $k$ observations $x_i$ closest to $x$ in the input space and average their responses:
$$\hat{Y}(x) = \frac{1}{k}\sum_{x_i \in N_k(x)} y_i,$$
where $N_k(x)$ is the neighborhood of $x$ defined by the $k$ closest points $x_i$ in the training sample (closeness implies a metric, e.g. Euclidean distance).
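A minimal sketch of this averaging rule, assuming Euclidean distance and a small simulated data set (names and data are mine, not the book's):

```python
import numpy as np

def knn_predict(X_train, y_train, x0, k):
    """Average the responses of the k training points closest to x0."""
    dists = np.linalg.norm(X_train - x0, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                # indices of the k closest points
    return y_train[nearest].mean()

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = np.sin(3 * X_train[:, 0]) + rng.normal(scale=0.1, size=200)

print(knn_predict(X_train, y_train, x0=np.array([0.3, -0.2]), k=15))
```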

2.3.3 From Least Squares to Nearest Neighbors

First we consider the complexity. The least squares fit requires $p$ parameters, while k-NN appears to have a single parameter, $k$. However, if the neighborhoods were non-overlapping, there would be $N/k$ neighborhoods and we would fit one parameter (a mean) in each, so the effective number of parameters of k-NN is $N/k$, which is generally bigger than $p$.

From the bias and variance point of view, the linear model has low variance and potentially high bias, while the k-NN fit is wiggly and unstable, with high variance and low bias.

Considering where the constructed data came from, we have two possible scenarios:

Scenario 1 : The training data in each class were generated from bivariate Gaussian distributions with uncorrelated components and different means.

Scenario 2 : The training data in each class came from a mixture of 10 low-variance Gaussian distributions, with individual means themselves distributed as Gaussian.

For Scenario 1, a linear decision boundary is the best one can do (see Chapter 4). For Scenario 2, other methods, such as k-NN, can do better.

Each method has its own situations for which it works best; in particular linear regression is more appropriate for Scenario 1 above, while nearest neighbors are more suitable for Scenario 2. The time has come to expose the oracle!

2.4 Statistical Decision Theory

2.4.1 Theory for Quantitative Output

With the joint distribution $\Pr(X, Y)$, we seek a function $f(X)$ for predicting $Y$. The EPE (expected prediction error) is
$$\mathrm{EPE}(f) = E\left[L(Y, f(X))\right],$$
where $L(Y, f(X))$ is the loss function penalizing errors in prediction. If $L(Y, f(X)) = (Y - f(X))^2$ (squared error loss), we have
$$\mathrm{EPE}(f) = E\left[(Y - f(X))^2\right] = E_X\, E_{Y|X}\!\left[(Y - f(X))^2 \mid X\right].$$

To minimize it, it suffices to minimize pointwise: let
$$f(x) = \operatorname*{argmin}_c E_{Y|X}\!\left[(Y - c)^2 \mid X = x\right],$$
and setting the derivative with respect to $c$ to zero gives $c = E(Y \mid X = x)$.
Thus, the best prediction of $Y$ at any point $X = x$ under squared error loss is the conditional expectation $f(x) = E(Y \mid X = x)$, also known as the regression function.

2.4.2 Implementation in k-NN and the Linear Model

The nearest-neighbor methods attempt to implement this recipe directly using the training data. At each point $x$, we might ask for the average of all those $y_i$ with input $x_i = x$. Since there is typically at most one observation at any point $x$, we settle for
$$\hat{f}(x) = \mathrm{Ave}\left(y_i \mid x_i \in N_k(x)\right),$$
where "Ave" denotes average, and $N_k(x)$ is the neighborhood containing the $k$ points in the training sample closest to $x$. Two approximations are happening here:

expectation is approximated by averaging over sample data;

conditioning at a point is relaxed to conditioning on some region "close" to the target point.

Fact: under mild regularity conditions on the joint distribution $\Pr(X, Y)$, one can show that as $N, k \to \infty$ with $k/N \to 0$, $\hat{f}(x) \to E(Y \mid X = x)$. However, as the dimension increases, the approximations get worse.

For the linear model, we assume the regression function is approximately linear in its arguments,
$$f(x) \approx x^T\beta,$$
and plugging this into the definition of EPE gives
$$\mathrm{EPE}(\beta) = E\left(Y - X^T\beta\right)^2.$$

Minimizing over $\beta$ yields
$$\beta = \left[E(XX^T)\right]^{-1} E(XY).$$

Note: we have not conditioned on $X$; rather, we have used our knowledge of the functional relationship to pool over values of $X$.
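For reference, a short worked derivation of the solution above (my own filling-in of the omitted algebra, differentiating under the expectation):

$$\begin{aligned}
\frac{\partial}{\partial \beta}\, E\!\left[(Y - X^T\beta)^2\right]
  &= -2\, E\!\left[X\,(Y - X^T\beta)\right] = 0 \\
\Longrightarrow\quad E(XX^T)\,\beta &= E(XY) \\
\Longrightarrow\quad \beta &= \left[E(XX^T)\right]^{-1} E(XY).
\end{aligned}$$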

So both k-nearest neighbors and least squares end up approximating conditional expectations by averages. But they differ dramatically in terms of model assumptions:

Least squares assumes $f(x)$ is well approximated by a globally linear function.

k-nearest neighbors assumes $f(x)$ is well approximated by a locally constant function.

2.4.3 Bayes Classifier

For a categorical output variable $G$ taking values in $\mathcal{G}$, with $K = \mathrm{card}(\mathcal{G})$, the loss function can be represented by a $K \times K$ matrix $\mathbf{L}$, where $L(k, \ell)$ is the price paid for classifying an observation belonging to class $\mathcal{G}_k$ as $\mathcal{G}_\ell$ ($\mathbf{L}$ is zero on the diagonal and non-negative elsewhere).

More commonly, we use the zero-one loss function,
$$L(k, \ell) = \mathbf{1}(k \neq \ell),$$
which is 0 on the diagonal and 1 elsewhere, so that all misclassifications are charged a single unit.

With this loss, the expected prediction error is
$$\mathrm{EPE} = E\!\left[L(G, \hat{G}(X))\right] = E_X \sum_{k=1}^{K} L\!\left(\mathcal{G}_k, \hat{G}(X)\right) \Pr\!\left(\mathcal{G}_k \mid X\right).$$

With the 0-1 loss function, we can minimize it pointwise, which yields
$$\hat{G}(x) = \operatorname*{argmax}_{g \in \mathcal{G}} \Pr(g \mid X = x),$$
i.e. we classify to the most probable class given the input; this is known as the Bayes(-optimal) classifier.

The error rate of this classifier is called the Bayes rate.
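As an illustration, here is a minimal sketch (my own, not from the book) of the Bayes classifier for two classes with assumed known Gaussian class densities and equal priors; in practice the class posteriors are unknown and must be estimated.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical class-conditional densities and priors (assumed known here).
priors = np.array([0.5, 0.5])
densities = [
    multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)),
    multivariate_normal(mean=[2.0, 1.0], cov=np.eye(2)),
]

def bayes_classify(x):
    """Classify x to the class with the largest posterior Pr(g | X = x)."""
    # The posterior is proportional to prior * class density (Bayes' rule).
    scores = np.array([p * d.pdf(x) for p, d in zip(priors, densities)])
    return int(np.argmax(scores))

print(bayes_classify(np.array([1.5, 0.5])))  # prints the chosen class index
```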

2.5 Local Methods in High Dimensions

This section explains why k-NN leads to worse results as the dimension increases, and why the linear model can be more stable when its rigid assumptions are satisfied.

There are three main points explaining why k-NN will fail when the dimension $p$ is large:

(1) To capture a fraction $r$ of the observations for a local average, we need to cover more and more of the range of each input variable as $p$ increases.

For example, consider a unit hypercube in $p$ dimensions. A sub-cube covering a fraction $e$ of the range of each input captures a fraction $e^p$ of the whole input space, so to capture a fraction $r$ of the observations the expected edge length is $e_p(r) = r^{1/p}$. In ten dimensions, $e_{10}(0.01) = 0.63$ and $e_{10}(0.1) = 0.80$: to capture 1% or 10% of the data we must cover 63% or 80% of the range of each input variable. As $p$ increases, $e_p(r)$ converges to 1, so such neighborhoods are no longer "local" (see the numerical sketch after point (3) below).

(2) The sampling density is proportional to $N^{1/p}$, so it decreases rapidly as $p$ increases; maintaining the same density as $N_1$ samples in one dimension requires $N_1^p$ samples in $p$ dimensions.

(3) The distance to the nearest observations increases with $p$, so a nearest-neighbor estimate uses points far from the target; in ESL's example with $f(X) = e^{-8\|X\|^2}$, the 1-NN estimate at the origin therefore tends to be 0 more often than not, i.e. it is badly biased.

For example, consider $N$ data points uniformly distributed in a $p$-dimensional unit ball centered at the origin, and suppose we want a nearest-neighbor estimate at the origin. The median distance from the origin to the closest data point is given by
$$d(p, N) = \left(1 - \left(\tfrac{1}{2}\right)^{1/N}\right)^{1/p}.$$
For $N = 500$ and $p = 10$, $d(p, N) \approx 0.52$, more than halfway to the boundary. Hence most data points are closer to the boundary of the sample space than to any other data point. The intuition is similar to that of the first point: in high dimensions the nearest neighbor is not actually "near", and prediction near the edges amounts to extrapolation.
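A quick numerical check of the two formulas above, $e_p(r) = r^{1/p}$ and $d(p, N) = (1 - (1/2)^{1/N})^{1/p}$ (my own sketch):

```python
def edge_length(r, p):
    """Edge length of a sub-cube of the unit hypercube in p dimensions
    needed to capture a fraction r of the volume: r**(1/p)."""
    return r ** (1.0 / p)

def median_nearest_distance(p, N):
    """Median distance from the origin to the closest of N points uniform
    in the p-dimensional unit ball: (1 - (1/2)**(1/N))**(1/p)."""
    return (1.0 - 0.5 ** (1.0 / N)) ** (1.0 / p)

for p in (1, 2, 10, 100):
    print(p, round(edge_length(0.10, p), 3), round(median_nearest_distance(p, 500), 3))
# As p grows, the neighborhood edge needed for 10% of the data approaches 1,
# and the nearest of 500 points drifts toward the boundary of the unit ball.
```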

Now, we discuss why the linear model can be more robust to increasing dimension if the linear assumption holds.

The linear assumption: $Y = X^T\beta + \varepsilon$, with $\varepsilon \sim N(0, \sigma^2)$.

Now we fit the model by least squares and, for an arbitrary test point $x_0$, the expected prediction error is
$$\mathrm{EPE}(x_0) = \sigma^2 + E_{\mathcal{T}}\, x_0^T (\mathbf{X}^T\mathbf{X})^{-1} x_0\, \sigma^2.$$

If $N$ is large and the training set $\mathcal{T}$ is selected at random, and assuming $E(X) = 0$, then $\mathbf{X}^T\mathbf{X} \to N\,\mathrm{Cov}(X)$ and
$$E_{x_0}\,\mathrm{EPE}(x_0) \approx E_{x_0}\, x_0^T\,\mathrm{Cov}(X)^{-1} x_0\, \sigma^2/N + \sigma^2 = \sigma^2 (p/N) + \sigma^2.$$

The expected EPE thus grows only linearly in $p$, with slope $\sigma^2/N$. If $N$ is large and/or $\sigma^2$ is small, this growth is negligible as $p$ increases.

Overall, by relying on rigid assumptions, the linear model has no bias at all and negligible variance, while the error of 1-nearest neighbor is substantially larger. However, if the assumptions are wrong, all bets are off and the 1-nearest neighbor may dominate.

2.6 Statistical Models, Supervised Learning and Function Approximation

To overcome the dimensionality problems, we anticipate using other classes of models for $f(x)$. Now we discuss a framework for incorporating them into the prediction problem.

2.6.1 A Statistical Model for the Joint Distribution

Suppose our data arose from a statistical model
$$Y = f(X) + \varepsilon,$$
where the random error $\varepsilon$ has $E(\varepsilon) = 0$ and is independent of $X$.

Note that for this model, $f(x) = E(Y \mid X = x)$, and in fact the conditional distribution $\Pr(Y \mid X)$ depends on $X$ only through the conditional mean $f(x)$.

This additive error model is typically not used for qualitative outputs $G$. In this case, the target function $p(X)$ is the conditional density $\Pr(G \mid X)$, and it is modeled directly.

2.6.2 Supervised Learning

In this section, the task of supervised learning is briefly introduced: we learn $f$ by example through a teacher, observing the training inputs and outputs and modifying the fitted relationship $\hat{f}$ in response to the errors $y_i - \hat{f}(x_i)$.

2.6.3 Function Approximation

The function $f(x)$ has domain equal to the $p$-dimensional input subspace, and is related to the data via a model such as $y_i = f(x_i) + \varepsilon_i$. For convenience, in this chapter we will assume the domain is $\mathbb{R}^p$, a $p$-dimensional Euclidean space, although in general the inputs can be of mixed type.

Note: many of the approximations we will encounter have associated with them a set of parameters $\theta$ that can be modified to suit the data at hand.

For additive error models, the residual sum of squares, $\mathrm{RSS}(\theta) = \sum_{i=1}^{N}\left(y_i - f_\theta(x_i)\right)^2$, seems a reasonable criterion. However, least squares is not the only criterion used, and in some cases it would not make much sense, although it is generally convenient.

A more general principle for estimation is maximum likelihood estimation.
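As a worked connection (filled in by me, following the additive error model with Gaussian errors $\varepsilon \sim N(0, \sigma^2)$): maximizing the conditional log-likelihood is then equivalent to minimizing RSS, since

$$\begin{aligned}
L(\theta) &= \sum_{i=1}^{N} \log \Pr_\theta(y_i \mid x_i)
           = \sum_{i=1}^{N} \log \frac{1}{\sqrt{2\pi}\,\sigma}
             \exp\!\left(-\frac{(y_i - f_\theta(x_i))^2}{2\sigma^2}\right) \\
          &= -\frac{N}{2}\log(2\pi\sigma^2)
             - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - f_\theta(x_i)\right)^2,
\end{aligned}$$

and the only term involving $\theta$ is proportional to $-\mathrm{RSS}(\theta)$.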

2.7 Structured Regression Models

First we discuss the difficulty of the problem. Consider minimizing the residual sum of squares
$$\mathrm{RSS}(f) = \sum_{i=1}^{N}\left(y_i - f(x_i)\right)^2$$
over all functions $f$. Without any constraint there are infinitely many solutions: any function passing through all the training points $(x_i, y_i)$ achieves zero RSS, and such results are generally useless. Thus, we need some constraints on $f$ when finding the solution.

In general the constraints imposed by most learning methods can be described as complexity restrictions of one kind or another. This usually means some kind of regular behavior in small neighborhoods of the input space. That is, for all input points $x$ sufficiently close to each other in some metric, $\hat{f}$ exhibits some special structure such as nearly constant, linear or low-order polynomial behavior. The estimator is then obtained by averaging or polynomial fitting in that neighborhood.

The strength of the constraint is dictated by the neighborhood size. The larger the size of the neighborhood, the stronger the constraint, and the more sensitive the solution is to the particular choice of constraint. For example, local constant fits in infinitesimally small neighborhoods is no constraint at all; local linear fits in very large neighborhoods is almost a globally linear model, and is very restrictive.

The nature of the constraint depends on the metric used. Some methods, such as kernel and local regression and tree-based methods, directly specify the metric and size of the neighborhood. The nearest-neighbor methods discussed so far are based on the assumption that locally the function is constant: close to a target input $x_0$, the function does not change much, and so close outputs can be averaged to produce $\hat{f}(x_0)$. Other methods such as splines, neural networks and basis-function methods implicitly define neighborhoods of local behavior.

One fact should be clear by now. Any method that attempts to produce locally varying functions in small isotropic neighborhoods will run into problems in high dimensions --- again the curse of dimensionality.

2.8 Classes of Restricted Estimators

The variety of nonparametric regression techniques or learning methods fall into a number of different classes depending on the nature of the restrictions imposed. Here three broad classes are described.

2.8.1 Roughness Penalty and Bayesian Methods

Here the class of functions is controlled by explicitly penalizing $\mathrm{RSS}(f)$ with a roughness penalty:
$$\mathrm{PRSS}(f; \lambda) = \mathrm{RSS}(f) + \lambda J(f).$$

The user-selected functional $J(f)$ controls some property of $f$: it will be large for functions that vary too rapidly over small regions of input space. The amount of penalty is dictated by $\lambda \geq 0$; for $\lambda = 0$, no penalty is imposed and any interpolating function will do. A popular example is the cubic smoothing spline, which uses $J(f) = \int [f''(x)]^2\,dx$.
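A minimal sketch of the idea (mine, not the book's): penalized least squares on a grid, where roughness is measured by squared second differences of the fitted values, so the penalty plays the role of $\lambda J(f)$.

```python
import numpy as np

rng = np.random.default_rng(2)

# One observation y_i at each sorted grid point x_i (a simplifying assumption).
N = 50
x = np.linspace(0, 1, N)
y = np.sin(4 * np.pi * x) + rng.normal(scale=0.3, size=N)

# Second-difference matrix D: (Df)_i = f_i - 2*f_{i+1} + f_{i+2},
# a discrete stand-in for the curvature f''.
D = np.diff(np.eye(N), n=2, axis=0)

def penalized_fit(y, lam):
    """Minimize ||y - f||^2 + lam * ||D f||^2; closed form (I + lam D'D)^{-1} y."""
    return np.linalg.solve(np.eye(N) + lam * D.T @ D, y)

f_rough = penalized_fit(y, lam=0.0)     # no penalty: interpolates the data exactly
f_smooth = penalized_fit(y, lam=50.0)   # heavily penalized: much smoother fit
print(np.round(f_smooth[:5], 3))
```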

Penalty function, or regularization methods, express our prior belief that the type of functions we seek exhibit a certain type of smooth behavior, and indeed can usually be cast in a Bayesian framework.

2.8.2 Kernel Methods and Local Regression

These methods can be thought of as explicitly providing estimates of the regression function or conditional expectation by specifying the nature of the local neighborhood, and of the class of regular functions fitted locally. The local neighborhood is specified by a kernel function $K_\lambda(x_0, x)$ which assigns weights to points $x$ in a region around $x_0$; a popular example is the Gaussian kernel $K_\lambda(x_0, x) = \frac{1}{\lambda}\exp\!\left(-\frac{\|x - x_0\|^2}{2\lambda}\right)$, whose weights die off smoothly with squared distance from $x_0$.

In general, we can define a local regression estimate of $f(x_0)$ as $f_{\hat\theta}(x_0)$, where $\hat\theta$ minimizes
$$\mathrm{RSS}(f_\theta, x_0) = \sum_{i=1}^{N} K_\lambda(x_0, x_i)\left(y_i - f_\theta(x_i)\right)^2,$$
and $f_\theta$ is some parameterized function, such as a low-order polynomial.

Note: basically, $K_\lambda(x_0, x_i)$ is larger when $x_i$ is closer to $x_0$, which means points in the neighborhood of $x_0$ count more heavily in the fit.
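A minimal sketch of kernel smoothing with a Gaussian kernel; this is the locally constant special case ($f_\theta$ a constant), which reduces to a kernel-weighted average. Names and data are my own.

```python
import numpy as np

def gaussian_kernel(x0, x, lam):
    """Weight points x according to their squared distance from x0."""
    return np.exp(-np.sum((x - x0) ** 2, axis=-1) / (2.0 * lam))

def kernel_smooth(X_train, y_train, x0, lam=0.1):
    """Locally constant fit: the kernel-weighted average of the responses."""
    w = gaussian_kernel(x0, X_train, lam)
    return np.sum(w * y_train) / np.sum(w)

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, size=(200, 1))
y_train = np.sin(3 * X_train[:, 0]) + rng.normal(scale=0.1, size=200)

print(kernel_smooth(X_train, y_train, x0=np.array([0.3]), lam=0.05))
```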

2.8.3 Basis Functions and Dictionary Methods

Here the model for $f$ is a linear expansion of basis functions,
$$f_\theta(x) = \sum_{m=1}^{M} \theta_m h_m(x),$$
where each $h_m$ is a function of the input $x$, and the term linear here refers to the action of the parameters $\theta$. In some cases, the sequence of basis functions is prescribed, such as a basis of polynomials in $x$ of total degree $M$.
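A minimal sketch of a prescribed basis expansion, using a one-dimensional polynomial basis $h_m(x) = x^{m-1}$ and least squares for the coefficients $\theta$ (my own example):

```python
import numpy as np

def polynomial_basis(x, M):
    """Basis functions h_m(x) = x**(m-1), m = 1..M, for 1-d inputs x."""
    return np.column_stack([x ** m for m in range(M)])

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=100)
y = 1.0 - 2.0 * x + 0.5 * x ** 3 + rng.normal(scale=0.1, size=100)

H = polynomial_basis(x, M=4)                     # N x M design matrix
theta, *_ = np.linalg.lstsq(H, y, rcond=None)    # the model is linear in theta

f_hat = polynomial_basis(np.array([0.2]), M=4) @ theta
print(theta.round(2), f_hat)
```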

The adaptively chosen basis function methods are known as dictionary methods, where one has available a possibly infinite set or dictionary of candidate basis functions from which to choose, and models are built up by employing some kind of search mechanism.

2.9 Model Selection and the Bias-Variance Tradeoff

For the three classes described in the last section, we need to determine a smoothing or complexity parameter:

the multiplier of the penalty term;

the width of the kernel;

or the number of basis functions.

The k-NN regression fit $\hat{f}_k(x_0)$ usefully illustrates the competing forces that affect the predictive ability of such approximations. Suppose the data arise from a model $Y = f(X) + \varepsilon$, with $E(\varepsilon) = 0$ and $\mathrm{Var}(\varepsilon) = \sigma^2$. For simplicity here we assume that the values of $x_i$ in the sample are fixed in advance (nonrandom). The expected prediction error at $x_0$ can be decomposed:

$$\begin{aligned} \mathrm{EPE}_k(x_0)&=E\left[(Y-\hat{f}_k(x_0))^2 \mid X=x_0\right]\\ &=\sigma^2+\left[\mathrm{Bias}^2(\hat{f}_k(x_0))+\mathrm{Var}_{\mathcal{T}}(\hat{f}_k(x_0))\right]\\ &=\sigma^2+\left[f(x_0)-\frac{1}{k}\sum_{l=1}^k f(x_{(l)})\right]^2+\frac{\sigma^2}{k}, \end{aligned}$$

where $x_{(l)}$, $l = 1, \ldots, k$, denote the $k$ nearest neighbors of $x_0$ in the training sample.
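A small simulation (mine) illustrating the decomposition: for a fixed design and a known $f$, the squared bias term grows with $k$ while the variance term $\sigma^2/k$ shrinks.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fixed 1-d design, known true function, and noise level.
N, sigma = 100, 0.5
x = np.sort(rng.uniform(0, 1, size=N))
x0 = 0.5

def f(t):
    return np.sin(2 * np.pi * t)

for k in (1, 5, 20, 50):
    # Indices of the k nearest design points to x0 (fixed, since x is fixed).
    nearest = np.argsort(np.abs(x - x0))[:k]
    bias2 = (f(x0) - f(x[nearest]).mean()) ** 2   # squared bias of the k-NN average
    var = sigma ** 2 / k                          # variance of an average of k errors
    print(k, round(bias2, 4), round(var, 4), round(sigma ** 2 + bias2 + var, 4))
```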

The first term, $\sigma^2$, is the irreducible error and is beyond our control, even if we know the true $f(x_0)$.

The second term, the squared bias, will likely increase with $k$, if the true function is reasonably smooth.

The third term is simply the variance of an average here, and decreases as the inverse of $k$.

Thus, as $k$ varies, there is a bias-variance tradeoff.

More generally, as the model complexity of our procedure is increased, the variance tends to increase and the squared bias tends to decrease. The opposite behavior occurs as the model complexity is decreased. For k-nearest neighbors, the model complexity is controlled by $k$.
