Logistic regression is often used to predict the probability of an event such as disease. For example, the dependent variable might be whether a tumor is malignant, and the independent variables the tumor's size, location, firmness, and the patient's sex, age, occupation, and so on (many articles use this example, although modern medicine can settle the question with a pathology exam, i.e. examining a tissue sample under a microscope). In online advertising it is commonly used to predict click-through or conversion rates (CTR/CVR): the dependent variable is whether the user clicks, and the independent variables include the creative's width and height, the ad's position and type, and the user's sex, interests, and so on.
This chapter covers the derivation of the logistic regression algorithm, the derivation of gradient descent for finding the optimum, and the Spark source code that implements it.
The usual steps for a regression problem are:
1. Choose a prediction function (the hypothesis, the h function)
2. Construct a loss function (the J function)
3. Minimize the loss function to obtain the regression coefficients θ
Common algorithms for step 3 include:
1. Gradient descent
2. Newton's method
3. Quasi-Newton methods (BFGS and L-BFGS)
Of these, stochastic gradient descent and L-BFGS are already implemented in Spark MLlib; gradient descent is the simplest and easiest to understand, and its generic update rule is sketched below.
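Gradient descent, in its generic form (standard notation, not tied to the Spark code later in this chapter), repeatedly moves $\theta$ a small step against the gradient of the loss, with learning rate $\alpha$:

$$ \theta := \theta - \alpha \, \nabla_\theta J(\theta) $$

The stochastic / mini-batch variants estimate $\nabla_\theta J(\theta)$ from a sample of the data rather than the full set, which is exactly what Spark's runMiniBatchSGD (shown later) does.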
Constructing the prediction function
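In outline (standard notation, not quoted from the Spark code), logistic regression takes the sigmoid of a linear combination of the features as the prediction function and reads it as the probability of the positive class:

$$ h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}, \qquad P(y=1 \mid x;\theta) = h_\theta(x), \quad P(y=0 \mid x;\theta) = 1 - h_\theta(x) $$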
The idea of maximum likelihood estimation
Given m sample observations drawn at random from the population described by the model, the goal is to find the parameter estimate $\theta'$ under which drawing exactly these m observations from the model is most probable. Maximum likelihood estimation is the method for this kind of problem. In outline, the steps for maximizing the likelihood are: (1) write down the probability of each observation under the model; (2) multiply them together to form the likelihood function; (3) take the logarithm of the likelihood; (4) maximize it, either by setting the derivative to zero or with an iterative method.
Why take the logarithm in step 3? Because the logarithm turns the product into a sum, and since it is monotonic it does not move the location of the extremum, which makes the partial derivatives much easier to take later on.
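Concretely, for logistic regression with labels $y_i \in \{0, 1\}$ (standard notation, not taken from the Spark code), the likelihood and log-likelihood are:

$$ L(\theta) = \prod_{i=1}^{m} h_\theta(x_i)^{y_i}\bigl(1 - h_\theta(x_i)\bigr)^{1-y_i} $$

$$ \ell(\theta) = \log L(\theta) = \sum_{i=1}^{m} \Bigl[ y_i \log h_\theta(x_i) + (1 - y_i)\log\bigl(1 - h_\theta(x_i)\bigr) \Bigr] $$

Maximizing $\ell(\theta)$ is the same as minimizing $J(\theta) = -\frac{1}{m}\,\ell(\theta)$, which is the loss the optimizers in step 3 work on.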
Generalizing to K-class logistic regression, the dependent variable takes values 0, 1, 2, …, K-1. Binary logistic regression has the following property:
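The property meant here is presumably the standard one that the log-odds are linear in the features (restated here rather than quoted from the original):

$$ \ln \frac{P(y=1 \mid x;\theta)}{P(y=0 \mid x;\theta)} = \theta^T x $$

The K-class generalization keeps class 0 as the pivot and models $\ln \frac{P(y=k \mid x)}{P(y=0 \mid x)} = \theta_k^T x$ for $k = 1, \dots, K-1$, which matches how Spark MLlib parameterizes multinomial logistic regression with $K-1$ sets of coefficients.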
Let's first look at the example code:
import org.apache.spark.SparkContext
import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionModel}
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.MLUtils
// Load training data in LIBSVM format.
// Sample data format:
// 1 featureIndex1:value1 featureIndex2:value2 ...
// 0 featureIndex1:value3 featureIndex4:value4 ...
// Both the feature indices and their values are plain numbers
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
// Split data into training (60%) and test (40%).
val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L)
val training = splits(0).cache()
val test = splits(1)
// Run training algorithm to build the model
// The official example sets the number of classes to 10, but the dependent variable in the
// sample data only takes the values 0 and 1, so this setting appears to be a mistake (2 would be correct).
// Gradient descent sweeps the entire sample set on every iteration, so the quasi-Newton method
// L-BFGS is recommended; it is covered further in a later article.
val model = new LogisticRegressionWithLBFGS()
.setNumClasses(10)
.run(training)
// Compute raw scores on the test set.
val predictionAndLabels = test.map { case LabeledPoint(label, features) =>
val prediction = model.predict(features)
(prediction, label)
}
// Get evaluation metrics.
val metrics = new MulticlassMetrics(predictionAndLabels)
val precision = metrics.precision
println("Precision = " + precision)
// Save and load model
// The result is a model, i.e. a weight vector $\theta$; plugging it into the probability distribution function gives the probability of each class
model.save(sc, "myModelPath")
val sameModel = LogisticRegressionModel.load(sc, "myModelPath")
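By default predict returns a class label. In the binary case, if you want the probability itself (the value of the distribution function mentioned in the comment above), you can clear the decision threshold; this snippet is an illustrative sketch, not part of the original example:

// Binary case only: clearThreshold() makes predict return P(y = 1 | x)
// instead of a 0/1 label (it modifies the model in place and returns it).
val probabilityModel = model.clearThreshold()
val scoresAndLabels = test.map { case LabeledPoint(label, features) =>
  (probabilityModel.predict(features), label) // (predicted probability, true label)
}
// Restore label output if needed.
model.setThreshold(0.5)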
Calling the stochastic gradient descent version:
/**
* Train a logistic regression model given an RDD of (label, features) pairs. We run a fixed
* number of iterations of gradient descent using the specified step size. Each iteration uses
* `miniBatchFraction` fraction of the data to calculate the gradient. The weights used in
* gradient descent are initialized using the initial weights provided.
* NOTE: Labels used in Logistic Regression should be {0, 1}
*
* @param input RDD of (label, array of features) pairs.
* @param numIterations Number of iterations of gradient descent to run.
* @param stepSize Step size to be used for each iteration of gradient descent.
* @param miniBatchFraction Fraction of data to be used per iteration.
* @param initialWeights Initial set of weights to be used. Array should be equal in size to the number of features in the data.
*/
@Since("1.0.0")
def train(
input: RDD[LabeledPoint],
numIterations: Int,
stepSize: Double,
miniBatchFraction: Double,
initialWeights: Vector): LogisticRegressionModel = {
new LogisticRegressionWithSGD(stepSize, numIterations, 0.0, miniBatchFraction)
.run(input, initialWeights)
}
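A minimal usage sketch of this overload (the parameter values here are made up for illustration; sc and the sample data file are reused from the earlier example):

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.MLUtils

val points = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt").cache()
val numFeatures = points.first().features.size
// 100 iterations of gradient descent, step size 1.0, 10% of the data per
// iteration, starting from an all-zero weight vector.
val sgdModel = LogisticRegressionWithSGD.train(points, 100, 1.0, 0.1, Vectors.zeros(numFeatures))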
Both LogisticRegressionWithLBFGS and LogisticRegressionWithSGD extend GeneralizedLinearAlgorithm, and its run method does the shared work:
def run(input: RDD[LabeledPoint], initialWeights: Vector): M = {
if (numFeatures < 0) {
// The number of input features is taken to be the number of features in the first row.
numFeatures = input.map(_.features.size).first()
}
// Check the storage level of the input data and warn if it is not cached.
if (input.getStorageLevel == StorageLevel.NONE) {
logWarning("The input data is not directly cached, which may hurt performance if its"
+ " parent RDDs are also uncached.")
}
// Check the data properties before running the optimizer
if (validateData && !validators.forall(func => func(input))) {
throw new SparkException("Input validation failed.")
}
/**
* Scaling columns to unit variance as a heuristic to reduce the condition number:
*
* During the optimization process, the convergence (rate) depends on the condition number of
* the training dataset. Scaling the variables often reduces this condition number
* heuristically, thus improving the convergence rate. Without reducing the condition number,
* some training datasets mixing the columns with different scales may not be able to converge.
*
* GLMNET and LIBSVM packages perform the scaling to reduce the condition number, and return
* the weights in the original scale.
* See page 9 in http://cran.r-project.org/web/packages/glmnet/glmnet.pdf
*
* Here, if useFeatureScaling is enabled, we will standardize the training features by dividing
* the variance of each column (without subtracting the mean), and train the model in the
* scaled space. Then we transform the coefficients from the scaled space to the original scale
* as GLMNET and LIBSVM do.
* That is, each column is divided by its standard deviation (without subtracting the mean); this can be enabled for LBFGS.
* Currently, it's only enabled in LogisticRegressionWithLBFGS
*/
val scaler = if (useFeatureScaling) {
new StandardScaler(withStd = true, withMean = false).fit(input.map(_.features))
} else {
null
}
// Prepend an extra variable consisting of all 1.0's for the intercept.
// TODO: Apply feature scaling to the weight vector instead of input data.
// The intercept term is not added by default.
val data =
if (addIntercept) {
if (useFeatureScaling) {
input.map(lp => (lp.label, appendBias(scaler.transform(lp.features)))).cache()
} else {
input.map(lp => (lp.label, appendBias(lp.features))).cache()
}
} else {
if (useFeatureScaling) {
input.map(lp => (lp.label, scaler.transform(lp.features))).cache()
} else {
input.map(lp => (lp.label, lp.features))
}
}
/**
* TODO: For better convergence, in logistic regression, the intercepts should be computed
* from the prior probability distribution of the outcomes; for linear regression,
* the intercept should be set as the average of response.
*/
val initialWeightsWithIntercept = if (addIntercept && numOfLinearPredictor == 1) {
appendBias(initialWeights)
} else {
/** If `numOfLinearPredictor > 1`, initialWeights already contains intercepts. */
initialWeights
}
// Run the optimizer (SGD or LBFGS)
val weightsWithIntercept = optimizer.optimize(data, initialWeightsWithIntercept)
...
createModel(weights, intercept)
}
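appendBias is what attaches the intercept column mentioned above. Roughly (appendBiasSketch is an illustrative name, not the MLlib implementation, which also handles sparse vectors), it appends a constant 1.0 to each feature vector so that the last weight acts as the intercept:

import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Add a constant feature of 1.0 so that its weight plays the role of the intercept.
def appendBiasSketch(features: Vector): Vector =
  Vectors.dense(features.toArray :+ 1.0)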
The mini-batch SGD implementation:
def runMiniBatchSGD(
data: RDD[(Double, Vector)],
gradient: Gradient,
updater: Updater,
stepSize: Double,
numIterations: Int,
regParam: Double,
miniBatchFraction: Double,
initialWeights: Vector,
convergenceTol: Double): (Vector, Array[Double]) = {
...
// Records the loss of every iteration (the mini-batch loss plus the regularization value); it is returned together with the final weights.
val stochasticLossHistory = new ArrayBuffer[Double](numIterations)
...
// Initialize weights as a column vector
var weights = Vectors.dense(initialWeights.toArray)
val n = weights.size
/**
* For the first iteration, the regVal will be initialized as sum of weight squares
* if it's L2 updater; for L1 updater, the same logic is followed.
*/
var regVal = updater.compute(
weights, Vectors.zeros(weights.size), 0, 1, regParam)._2
var converged = false // indicates whether converged based on convergenceTol
var i = 1
while (!converged && i <= numIterations) {
val bcWeights = data.context.broadcast(weights)
// Sample a subset (fraction miniBatchFraction) of the total data
// compute and sum up the subgradients on this subset (this is one map-reduce)
val (gradientSum, lossSum, miniBatchSize) = data.sample(false, miniBatchFraction, 42 + i)
.treeAggregate((BDV.zeros[Double](n), 0.0, 0L))(
seqOp = (c, v) => {
// c: (grad, loss, count), v: (label, features)
// gradient.compute adds this example's gradient into c._1 (the accumulated gradient vector),
// which is the main purpose of this step; it also returns this example's loss. For binary
// logistic regression that loss is the negative log-likelihood: log(1 + exp(margin)) when
// the label is 1, and log(1 + exp(margin)) - margin when it is 0. The losses are summed
// only so the loss history can be tracked.
val l = gradient.compute(v._2, v._1, bcWeights.value, Vectors.fromBreeze(c._1))
(c._1, c._2 + l, c._3 + 1)
},
combOp = (c1, c2) => {
// c: (grad, loss, count)
(c1._1 += c2._1, c1._2 + c2._2, c1._3 + c2._3)
})
if (miniBatchSize > 0) {
/**
* lossSum is computed using the weights from the previous iteration
* and regVal is the regularization value computed in the previous iteration as well.
*/
stochasticLossHistory.append(lossSum / miniBatchSize + regVal)
// Apply the update (including regularization) through the updater
val update = updater.compute(
weights, Vectors.fromBreeze(gradientSum / miniBatchSize.toDouble),
stepSize, i, regParam)
weights = update._1
regVal = update._2
previousWeights = currentWeights
currentWeights = Some(weights)
if (previousWeights != None && currentWeights != None) {
converged = isConverged(previousWeights.get,
currentWeights.get, convergenceTol)
}
} else {
logWarning(s"Iteration ($i/$numIterations). The size of sampled batch is zero")
}
i += 1
}
logInfo("GradientDescent.runMiniBatchSGD finished. Last 10 stochastic losses %s".format(
stochasticLossHistory.takeRight(10).mkString(", ")))
(weights, stochasticLossHistory.toArray)
}
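updater.compute is where the actual step is taken. A minimal sketch of the no-regularization case (simpleUpdateSketch is an illustrative name, not MLlib code): the step size decays as $1/\sqrt{i}$ and the weights move against the averaged gradient, with regVal returned as 0.0.

// Decay the step size with the iteration number, then step against the
// averaged mini-batch gradient; the second element of the result is regVal.
def simpleUpdateSketch(
    weightsOld: Array[Double],
    avgGradient: Array[Double],
    stepSize: Double,
    iter: Int): (Array[Double], Double) = {
  val thisIterStepSize = stepSize / math.sqrt(iter)
  val weightsNew = weightsOld.zip(avgGradient).map { case (w, g) => w - thisIterStepSize * g }
  (weightsNew, 0.0)
}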
combOp carries out the $\sum$ part of the update of $\theta$. In the binary logistic regression case:
case 2 =>
/**
* For Binary Logistic Regression.
*
* Although the loss and gradient calculation for multinomial one is more generalized,
* and multinomial one can also be used in binary case, we still implement a specialized
* binary version for performance reason.
*/
val margin = -1.0 * dot(data, weights)
val multiplier = (1.0 / (1.0 + math.exp(margin))) - label
axpy(multiplier, data, cumGradient)
if (label > 0) {
// The following is equivalent to log(1 + exp(margin)) but more numerically stable.
MLUtils.log1pExp(margin)
} else {
MLUtils.log1pExp(margin) - margin
}
margin is $-\theta^T x$, multiplier is $h_\theta(x_i) - y_i$, and the axpy call accumulates $(h_\theta(x_i) - y_i)\,x_i$ into the cumulative gradient.
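This also answers the earlier question about the loss: with margin $= -\theta^T x_i$, the value returned is exactly the per-example negative log-likelihood (standard derivation, restated here rather than quoted from Spark):

$$ \ell_i(\theta) = -\Bigl[ y_i \log h_\theta(x_i) + (1 - y_i)\log\bigl(1 - h_\theta(x_i)\bigr) \Bigr] =
\begin{cases}
\log\bigl(1 + e^{-\theta^T x_i}\bigr) & y_i = 1 \\[4pt]
\log\bigl(1 + e^{-\theta^T x_i}\bigr) + \theta^T x_i & y_i = 0
\end{cases} $$

and its gradient is

$$ \frac{\partial \ell_i}{\partial \theta} = \bigl(h_\theta(x_i) - y_i\bigr)\,x_i, $$

which is what multiplier and axpy accumulate; MLUtils.log1pExp just evaluates $\log(1 + e^{z})$ in a numerically stable way.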