sklearn (3): Improving Linear Regression with Ridge Regression

1. Introduction

For linear regression, one way to improve the model's generalization is Ridge Regression, which adds a regularization term to the objective. Of course, the best remedy is still more training data, and we will see the effect of a larger sample later in this post. For convenient reuse, this time we wrap the ridge regression model in a class (Ridge_Regression), following an object-oriented design.

2. Sample Generation

As before, we generate data with sklearn's datasets module; this happens in the class's __init__ method.

import numpy as np
from sklearn import datasets, linear_model
import sklearn.metrics as sm


class Ridge_Regression:
    def __init__(
            self,
            n_samples=50,
            n_features=10,
            n_informative=2,
            n_targets=5,
            noise=30,
            bias=10,
            random_state=1,
            x=None,
            y=None
    ):
        # Generate synthetic regression data unless the caller supplies x and y.
        if (x is None) and (y is None):
            self.X, self.y = datasets.make_regression(
                n_samples=n_samples,
                n_features=n_features,
                n_informative=n_informative,
                n_targets=n_targets,
                noise=noise,
                bias=bias,
                random_state=random_state
            )
            # Keep y two-dimensional: (n_samples, n_targets).
            self.y = self.y.reshape(n_samples, n_targets)
        elif (x is not None) and (y is not None):
            self.X = np.array(x)
            self.y = np.array(y)
        else:
            raise ValueError("Provide both x and y, or neither.")
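
As a quick sanity check, the generated arrays have the expected shapes with the defaults above:

model = Ridge_Regression()
print(model.X.shape)    # (50, 10): n_samples x n_features
print(model.y.shape)    # (50, 5):  n_samples x n_targets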

3. Data Preprocessing

Shuffle the samples, then split them into training and test sets. Add the following method to the class:

    def preprocess(self, proportion=0.8):
        # Shuffle all samples, then split by the given train proportion.
        n = self.X.shape[0]            # total number of samples
        n_train = int(n * proportion)  # number of training samples
        permutation = np.random.permutation(n)
        self.X, self.y = self.X[permutation], self.y[permutation]
        self.X_train, self.y_train = self.X[:n_train, :], self.y[:n_train, :]
        self.X_test, self.y_test = self.X[n_train:, :], self.y[n_train:, :]
        return self.X_train, self.y_train, self.X_test, self.y_test
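
Note that np.random.permutation here is unseeded, so the split differs on every run. A minimal way to get a reproducible split (my addition, not part of the original class) is to seed NumPy's global generator before calling preprocess:

np.random.seed(1)  # hypothetical seed; any fixed value makes the shuffle repeatable
X_train, y_train, X_test, y_test = model.preprocess(proportion=0.8)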

4. Model Training

The parameter to watch here is alpha, the regularization coefficient: the larger alpha is, the stronger the penalty, while alpha=0 reduces Ridge to ordinary linear regression.
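
Concretely, sklearn's linear_model.Ridge minimizes the penalized least-squares objective

    ||X w - y||_2^2 + alpha * ||w||_2^2

so alpha trades off data fit against the size of the coefficient vector w.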

    def train(self, alpha=0.01, fit_intercept=True, max_iter=10000):
        # alpha is the L2 penalty strength, passed straight to sklearn.
        self.ridge_regressor = linear_model.Ridge(
            alpha=alpha, fit_intercept=fit_intercept, max_iter=max_iter
        )
        self.ridge_regressor.fit(self.X_train, self.y_train)
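
In practice, rather than hand-picking alpha, you can let sklearn choose it by cross-validation with linear_model.RidgeCV. A minimal sketch (not part of the class above, and assuming X_train and y_train from preprocess):

# RidgeCV evaluates each candidate alpha by cross-validation and keeps the best.
cv_model = linear_model.RidgeCV(alphas=[0.01, 0.1, 0.5, 1.0])
cv_model.fit(X_train, y_train)
print(cv_model.alpha_)  # the alpha selected by cross-validation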

5. Prediction and Evaluation

These are the same as in the earlier linear regression post, so we will not dwell on them.

    def predict(self, X):
        return self.ridge_regressor.predict(X)

    def loss(self, y_true, y_pred):
        # Mean squared error between targets and predictions.
        return round(sm.mean_squared_error(y_true, y_pred), 4)

    def variance(self, y_true, y_pred):
        # Explained variance score: 1.0 is a perfect fit.
        return round(sm.explained_variance_score(y_true, y_pred), 4)
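
For reference, explained_variance_score is computed as 1 - Var(y_true - y_pred) / Var(y_true): a score of 1.0 means the predictions capture all of the target variance, while scores near 0 mean they explain almost none of it.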

6. Results

We test alpha values of 0.01, 0.1, and 0.5.

if __name__ == "__main__":
    ridge_regressor = Ridge_Regression()
    X_train, y_train, X_test, y_test = ridge_regressor.preprocess(proportion=0.75)
    for alpha in [0.01, 0.1, 0.5]:
        ridge_regressor.train(alpha=alpha)
        y_predict_test = ridge_regressor.predict(X_test)
        print("alpha = {}, test loss: {}".format(alpha, ridge_regressor.loss(y_test, y_predict_test)))
        print("alpha = {}, variance: {}\n".format(alpha, ridge_regressor.variance(y_test, y_predict_test)))

The output is:

# n_samples = 50
alpha = 0.01, test loss: 1821.4612
alpha = 0.01, variance: 0.1471

alpha = 0.1, test loss: 1796.6773
alpha = 0.1, variance: 0.1571

alpha = 0.5, test loss: 1701.2975
alpha = 0.5, variance: 0.1966

As alpha grows, the test loss does drop a little, but the results are still poor: the explained-variance score stays low, meaning the predictions account for very little of the target variance. Next we increase the sample count from 50 to 5000 and look at the effect.
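
To rerun the experiment, it suffices to construct the model with a larger sample and split again:

ridge_regressor = Ridge_Regression(n_samples=5000)
X_train, y_train, X_test, y_test = ridge_regressor.preprocess(proportion=0.75)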

# n_samples = 5000
alpha = 0.01, test loss: 905.622
alpha = 0.01, variance: 0.6951

alpha = 0.1, test loss: 905.6208
alpha = 0.1, variance: 0.6951

alpha = 0.5, test loss: 905.6158
alpha = 0.5, variance: 0.6951

The loss drops markedly and the explained-variance score rises sharply; in fact, adding data helps far more here than ridge regularization itself did (possibly because my generated data does not show ridge regression at its best). Either way, enlarging the training set is without doubt a key lever for improving model performance.
