Logistic Regression

  • Suppose we have some data points and fit a straight line to them (this line is called the best-fit line); the fitting process is called regression.
  • Training the classifier means finding the best-fit parameters, which is done with an optimization algorithm.
  • Logistic regression is a binary-output classifier.

The Sigmoid Function

σ(z) = 1 / (1 + e^(-z))
  • To implement the logistic regression classifier, multiply each feature by a regression coefficient, add up all the products, and plug the sum into the function above; the result is a value between 0 and 1. Any value above 0.5 is assigned to class 1, and anything below 0.5 to class 0.
  • Logistic regression can therefore also be viewed as a probability estimate: the sigmoid output is the estimated probability of class 1.
  • The input is denoted z, with z = w^T x, where w is the coefficient vector; a minimal sketch of this decision rule follows below.
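
A minimal sketch of this decision rule (classifyVector is a name assumed here, not from the original post; sigmoid and the fitted weights come from the code later in these notes):

def classifyVector(inX, weights):
    # probability that the sample belongs to class 1
    prob = sigmoid(sum(inX * weights))
    return 1.0 if prob > 0.5 else 0.0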

Gradient Ascent and Stochastic Gradient Ascent

  • Used to find the best regression coefficients.
  • In practice this means computing partial derivatives and iterating based on the derivatives and a step size; iteration stops after a fixed number of cycles or once the change falls within some tolerance. The update rule is written out below.
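
For reference (this derivation is not spelled out in the original notes): gradient ascent updates the weights as w := w + α ∇_w f(w), where α is the step size. For the log-likelihood of logistic regression, the gradient works out to X^T (y − σ(Xw)), so each step adds alpha * dataMatrix.transpose() * error; this is exactly the update line in gradAscent below.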

Gradient ascent optimization algorithm for logistic regression

from numpy import *

# each line of testSet.txt has three whitespace-separated columns:
# the first two are features, the last is the class label
def loadDataSet():
    dataMat = []; labelMat = []
    fr = open('testSet.txt')
    for line in fr.readlines():
        lineArr = line.strip().split()
        dataMat.append([1.0, float(lineArr[0]), float(lineArr[1])])
        labelMat.append(int(lineArr[2]))
    return dataMat,labelMat

def sigmoid(inX):
    return 1.0/(1+exp(-inX))

def gradAscent(dataMatIn, classLabels):
    dataMatrix = mat(dataMatIn)             #convert to NumPy matrix
    labelMat = mat(classLabels).transpose() #convert to NumPy matrix
    m,n = shape(dataMatrix)
    alpha = 0.001
    maxCycles = 500
    weights = ones((n,1))
    for k in range(maxCycles):              #heavy on matrix operations
        h = sigmoid(dataMatrix*weights)     #matrix mult
        error = (labelMat - h)              #vector subtraction
        weights = weights + alpha * dataMatrix.transpose()* error #matrix mult
    return weights
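
A minimal usage sketch (assuming a testSet.txt with the format described above sits in the working directory):

dataArr, labelMat = loadDataSet()
weights = gradAscent(dataArr, labelMat)
print(weights)  # a 3x1 matrix of fitted coefficients: intercept w0, then w1 and w2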

Plotting the logistic regression decision boundary

def plotBestFit(weights):
    import matplotlib.pyplot as plt
    weights = asarray(weights).ravel()  # flatten: gradAscent returns an (n,1) matrix, stocGradAscent1 a flat array
    dataMat,labelMat = loadDataSet()
    dataArr = array(dataMat)
    n = shape(dataArr)[0] 
    xcord1 = []
    ycord1 = []
    xcord2 = []
    ycord2 = []
    for i in range(n):
        if int(labelMat[i])== 1:
            xcord1.append(dataArr[i,1])
            ycord1.append(dataArr[i,2])
        else:
            xcord2.append(dataArr[i,1])
            ycord2.append(dataArr[i,2])
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.scatter(xcord1, ycord1, s=30, c='red', marker='s')
    ax.scatter(xcord2, ycord2, s=30, c='green')
    x = arange(-3.0, 3.0, 0.1)
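    # boundary: 0 = w0 + w1*x1 + w2*x2 (where sigmoid(0) = 0.5), solved for x2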
    y = (-weights[0]-weights[1]*x)/weights[2]
    ax.plot(x, y)
    plt.xlabel('X1'); plt.ylabel('X2')
    plt.show()
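
Plotting the gradAscent result (a sketch; the asarray conversion at the top of plotBestFit lets the matrix result be passed in directly):

dataArr, labelMat = loadDataSet()
weights = gradAscent(dataArr, labelMat)
plotBestFit(weights)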

Improved stochastic gradient ascent algorithm

def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    m,n = shape(dataMatrix)
    weights = ones(n)   #initialize to all ones
    for j in range(numIter):
        dataIndex = list(range(m))  # list() so entries can be deleted (range objects are immutable in Python 3)
        for i in range(m):
            # alpha decreases with each iteration but never reaches 0 because of the constant term
            alpha = 4/(1.0+j+i)+0.0001
            # picking samples in random order (without replacement) reduces periodic fluctuations in the weights
            randIndex = int(random.uniform(0,len(dataIndex)))
            h = sigmoid(sum(dataMatrix[dataIndex[randIndex]]*weights))
            error = classLabels[dataIndex[randIndex]] - h
            weights = weights + alpha * error * dataMatrix[dataIndex[randIndex]]
            del(dataIndex[randIndex])
    return weights
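
stocGradAscent1 indexes individual rows elementwise, so the data should be passed as a NumPy array rather than a matrix; a minimal sketch:

dataArr, labelMat = loadDataSet()
weights = stocGradAscent1(array(dataArr), labelMat, numIter=150)
plotBestFit(weights)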
