Machine Learning Project in Practice: Predicting Student Admission

  • Modules
    • The sigmoid function
    • The prediction function
    • The cost function
    • Gradient computation
    • Gradient descent
    • Accuracy
  • Complete code
  • Appendix

Build a logistic regression model that predicts whether a student is admitted, based on two exam scores.
Data: LogiReg_data.txt

Modules

  • sigmoid: maps values to probabilities
  • model: returns the predicted value (the prediction function)
  • cost: computes the loss for the given parameters
  • gradient: computes the gradient direction for each parameter
  • descent: gradient descent, updating the parameters
  • accuracy: computes the accuracy


The sigmoid function

$g(z) = \frac{1}{1 + e^{-z}}$

import numpy as np

def sigmoid(z):
    """Sigmoid function: squashes z into the interval (0, 1)."""
    return 1 / (1 + np.exp(-z))
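A quick sanity check: sigmoid(0) should be exactly 0.5, and large inputs should saturate toward 0 or 1 (illustrative values):

print(sigmoid(0))                       # 0.5
print(sigmoid(np.array([-10, 0, 10])))  # [4.54e-05  0.5  0.99995...]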


The prediction function

$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$

def model(X, theta):
    """Prediction function: returns P(y=1 | x; θ) for each sample."""
    # X: (n, features), theta: (1, features) -> result: (n, 1)
    return sigmoid(np.dot(X, theta.T))
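For example, with all-zero parameters every sample gets probability 0.5; a minimal sketch (the rows of `X_demo` are made-up exam scores with a leading bias column, matching the data layout built below):

X_demo = np.array([[1.0, 34.6, 78.0],
                   [1.0, 30.3, 43.9]])
theta_demo = np.zeros([1, 3])
print(model(X_demo, theta_demo))  # [[0.5] [0.5]]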


The cost function

The cost is the log-likelihood with the sign flipped (see the appendix), computed per sample:

$D(h_\theta(x), y) = -y\log(h_\theta(x)) - (1-y)\log(1-h_\theta(x))$

then averaged over all samples:

$J(\theta) = \frac{1}{n}\sum\limits_{i=1}^{n} D(h_\theta(x^i), y^i)$

def cost(X, y, theta):
    """Cost function: average negative log-likelihood."""
    left = np.multiply(-y, np.log(model(X, theta)))
    right = np.multiply(1 - y, np.log(1 - model(X, theta)))
    return np.sum(left - right) / (len(X))
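With all-zero parameters every prediction is 0.5, so the initial cost is $-\log(0.5) \approx 0.693$ regardless of the labels; a quick check reusing the made-up `X_demo` from above:

y_demo = np.array([[1], [0]])
print(cost(X_demo, y_demo, np.zeros([1, 3])))  # 0.6931...  (= -log 0.5)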


Gradient computation

That is, the partial derivative of $J$ with respect to each parameter:

$\frac{\partial J}{\partial \theta_j} = - \frac{1}{n}\sum\limits_{i=1}^{n} (y^i-h_\theta(x^i))x^i_j$

def gradient(X, y, theta):
    """Gradient function."""
    # one gradient entry per θ parameter
    grad = np.zeros(theta.shape)
    error = (model(X, theta) - y).ravel()
    for j in range(len(theta.ravel())):
        term = np.multiply(error, X[:, j])  # both 1-D arrays
        grad[0, j] = np.sum(term) / len(X)

    return grad
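The per-parameter loop can also be collapsed into one matrix product; a vectorized equivalent (an alternative sketch, not the version used in the rest of this post):

def gradient_vec(X, y, theta):
    """Vectorized gradient: same result as gradient(), without the loop."""
    error = model(X, theta) - y       # shape (n, 1)
    return (X.T @ error).T / len(X)   # shape (1, features), matching theta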


Gradient descent

import numpy.random
import time


def shuffleData(data):
    """Shuffle the rows of the data, then split into X and y."""
    np.random.shuffle(data)
    cols = data.shape[1]
    X = data[:, 0:cols-1]
    y = data[:, cols-1:]
    return X, y

def descent(data, batchSize, stopType, thresh, alpha):
    """Solve for θ by gradient descent."""

    # initialization
    init_time = time.time()            # start time
    i = 0                              # iteration count
    k = 0                              # batch position
    n = data.shape[0]                  # number of samples
    X, y = shuffleData(data)           # shuffle
    theta = np.zeros([1, X.shape[1]])  # θ parameters
    costs = [cost(X, y, theta)]        # loss history

    while True:
        grad = gradient(X[k: k+batchSize], y[k: k+batchSize], theta)
        k += batchSize
        if k >= n:
            k = 0
            X, y = shuffleData(data)  # reshuffle
        theta = theta - alpha * grad  # parameter update
        costs.append(cost(X, y, theta))  # compute the new loss
        i += 1

        # three stopping strategies
        if (
            (stopType == 'iter' and i > thresh) or
            (stopType == 'cost' and abs(costs[-1] - costs[-2]) < thresh) or
            (stopType == 'grad' and np.linalg.norm(grad) < thresh)
        ):
            break

    return theta, i-1, costs, grad, time.time()-init_time
def runExpe(data, batchSize, stopType, thresh, alpha):
    """Run one experiment, then print and plot the result."""
    theta, n_iter, costs, grad, dur = descent(data, batchSize, stopType, thresh, alpha)

    # visualization below
    # raw exam scores exceed 2; scaled features do not
    name = "Original" if (data[:, 1] > 2).sum() > 0 else "Scaled"
    name += " data - learning rate: {} - ".format(alpha)

    if batchSize == n:  # n is the global sample count set below
        strDescType = "Gradient"
    elif batchSize == 1:
        strDescType = "Stochastic"
    else:
        strDescType = "Mini-batch ({})".format(batchSize)
    name += strDescType + " descent - Stop: "

    if stopType == 'iter':
        strStop = "{} iterations".format(thresh)
    elif stopType == 'cost':
        strStop = "costs change < {}".format(thresh)
    else:
        strStop = "gradient norm < {}".format(thresh)
    name += strStop
    print("***{}\nTheta: {} - Iter: {} - Last cost: {:03.2f} - Duration: {:03.2f}s".format(
        name, theta, n_iter, costs[-1], dur))

    fig, ax = plt.subplots(figsize=(12, 4))
    ax.plot(np.arange(len(costs)), costs, 'r')
    ax.set_xlabel('iterations')
    ax.set_ylabel('Cost')
    ax.set_title(name.upper() + ' - Error vs. iterations')
    plt.show()
    return theta


Accuracy

def predict(X, theta):
    """Predicted labels."""
    # classify with a threshold of 0.5 on the predicted probability
    return [1 if x >= 0.5 else 0 for x in model(X, theta)]
# data preparation
path = 'LogiReg_data.txt'
pdData = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
# insert a column of ones (the bias term)
pdData.insert(0, 'Ones', 1)
# convert the DataFrame to an array
orig_data = pdData.values
# number of samples
n = orig_data.shape[0]

# standardize the features
from sklearn import preprocessing as pp
scaled_data = orig_data.copy()
scaled_data[:, 1:3] = pp.scale(orig_data[:, 1:3])

# run logistic regression (full batch, 5000 iterations, learning rate 0.01)
theta = runExpe(scaled_data, n, 'iter', 5000, 0.01)
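The other stopping strategies and batch sizes are exercised the same way (illustrative calls; these particular thresholds and learning rates are plausible choices, not values from the text):

# full batch, stop when the cost changes by less than 1e-6 between iterations
runExpe(scaled_data, n, 'cost', 0.000001, alpha=0.001)
# one sample per update (SGD), stop when the gradient norm drops below 0.05
runExpe(scaled_data, 1, 'grad', 0.05, alpha=0.001)
# mini-batches of 16 samples, fixed number of iterations
runExpe(scaled_data, 16, 'iter', 15000, alpha=0.001)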

scaled_X = scaled_data[:, :3]
y = scaled_data[:, 3]
# predicted labels
predictions = predict(scaled_X, theta)
# compute the accuracy
correct = 0
for num, i in enumerate(predictions):
    if i == y[num]:
        correct += 1
print('accuracy = {:.0f}%'.format(100 * correct / len(predictions)))
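The counting loop collapses to a single numpy expression (an equivalent sketch):

accuracy = np.mean(np.array(predictions) == y)
print('accuracy = {:.0f}%'.format(100 * accuracy))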

Complete code

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import numpy.random
import time


def show_data():
    """Plot the raw data."""
    # pdData.head()  # show the first 5 rows

    positive = pdData[pdData['Admitted'] == 1]
    negative = pdData[pdData['Admitted'] == 0]

    fig, ax = plt.subplots(figsize=(10, 5))
    ax.scatter(positive['Exam 1'], positive['Exam 2'], s=30, c='b', marker='o', label='Admitted')
    ax.scatter(negative['Exam 1'], negative['Exam 2'], s=30, c='r', marker='x', label='Not Admitted')
    ax.legend()
    ax.set_xlabel('Exam 1 Score')
    ax.set_ylabel('Exam 2 Score')
    plt.show()


def sigmoid(z):
    """Sigmoid function: squashes z into the interval (0, 1)."""
    return 1 / (1 + np.exp(-z))


def model(X, theta):
    """Prediction function: returns P(y=1 | x; θ) for each sample."""
    return sigmoid(np.dot(X, theta.T))


def cost(X, y, theta):
    """Cost function: average negative log-likelihood."""
    left = np.multiply(-y, np.log(model(X, theta)))
    right = np.multiply(1 - y, np.log(1 - model(X, theta)))
    return np.sum(left - right) / (len(X))


def gradient(X, y, theta):
    """Gradient function."""
    # one gradient entry per θ parameter
    grad = np.zeros(theta.shape)
    error = (model(X, theta) - y).ravel()
    for j in range(len(theta.ravel())):
        term = np.multiply(error, X[:, j])  # both 1-D arrays
        grad[0, j] = np.sum(term) / len(X)

    return grad


def shuffleData(data):
    """Shuffle the rows of the data, then split into X and y."""
    np.random.shuffle(data)
    cols = data.shape[1]
    X = data[:, 0:cols-1]
    y = data[:, cols-1:]
    return X, y


def descent(data, batchSize, stopType, thresh, alpha):
    """Solve for θ by gradient descent."""

    # initialization
    init_time = time.time()            # start time
    i = 0                              # iteration count
    k = 0                              # batch position
    n = data.shape[0]                  # number of samples
    X, y = shuffleData(data)           # shuffle
    theta = np.zeros([1, X.shape[1]])  # θ parameters
    costs = [cost(X, y, theta)]        # loss history

    while True:
        grad = gradient(X[k: k+batchSize], y[k: k+batchSize], theta)
        k += batchSize
        if k >= n:
            k = 0
            X, y = shuffleData(data)  # reshuffle
        theta = theta - alpha * grad  # parameter update
        costs.append(cost(X, y, theta))  # compute the new loss
        i += 1

        # three stopping strategies
        if (
            (stopType == 'iter' and i > thresh) or
            (stopType == 'cost' and abs(costs[-1] - costs[-2]) < thresh) or
            (stopType == 'grad' and np.linalg.norm(grad) < thresh)
        ):
            break

    return theta, i-1, costs, grad, time.time()-init_time


def runExpe(data, batchSize, stopType, thresh, alpha):
    """Run one experiment, then print and plot the result."""
    theta, n_iter, costs, grad, dur = descent(data, batchSize, stopType, thresh, alpha)

    # visualization
    # raw exam scores exceed 2; scaled features do not
    name = "Original" if (data[:, 1] > 2).sum() > 0 else "Scaled"
    name += " data - learning rate: {} - ".format(alpha)

    if batchSize == n:  # n is the global sample count set below
        strDescType = "Gradient"
    elif batchSize == 1:
        strDescType = "Stochastic"
    else:
        strDescType = "Mini-batch ({})".format(batchSize)
    name += strDescType + " descent - Stop: "

    if stopType == 'iter':
        strStop = "{} iterations".format(thresh)
    elif stopType == 'cost':
        strStop = "costs change < {}".format(thresh)
    else:
        strStop = "gradient norm < {}".format(thresh)
    name += strStop
    print("***{}\nTheta: {} - Iter: {} - Last cost: {:03.2f} - Duration: {:03.2f}s".format(
        name, theta, n_iter, costs[-1], dur))

    fig, ax = plt.subplots(figsize=(12, 4))
    ax.plot(np.arange(len(costs)), costs, 'r')
    ax.set_xlabel('iterations')
    ax.set_ylabel('Cost')
    ax.set_title(name.upper() + ' - Error vs. iterations')
    plt.show()
    return theta


def predict(X, theta):
    """Predicted labels."""
    # classify with a threshold of 0.5 on the predicted probability
    return [1 if x >= 0.5 else 0 for x in model(X, theta)]


path = 'LogiReg_data.txt'
pdData = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
# insert a column of ones (the bias term)
pdData.insert(0, 'Ones', 1)
# convert the DataFrame to an array
orig_data = pdData.values
# number of samples
n = orig_data.shape[0]

# standardize the features
from sklearn import preprocessing as pp
scaled_data = orig_data.copy()
scaled_data[:, 1:3] = pp.scale(orig_data[:, 1:3])

# run logistic regression (full batch, 5000 iterations, learning rate 0.01)
theta = runExpe(scaled_data, n, 'iter', 5000, 0.01)

scaled_X = scaled_data[:, :3]
y = scaled_data[:, 3]
# predicted labels
predictions = predict(scaled_X, theta)
# compute the accuracy
correct = 0
for num, i in enumerate(predictions):
    if i == y[num]:
        correct += 1
print('accuracy = {:.0f}%'.format(100 * correct / len(predictions)))

Appendix

Comparing the three gradient descent variants

Objective function: $J(\theta) = \frac{1}{2m}\sum\limits_{i=1}^{m} (y^i - h_\theta(x^i))^2$

  • Batch gradient descent: $\frac{\partial}{\partial\theta_j} J(\theta) = -\frac{1}{m} \sum\limits_{i=1}^{m} (y^i - h_\theta(x^i))x^{i}_{j}$

(Reliably converges toward the optimum because every step uses all samples, but each step is slow.)

  • Stochastic gradient descent: $\theta_j' = \theta_j + \alpha\,(y^i - h_\theta(x^i)) x^i_j$

(One sample per update: each iteration is fast, but individual steps do not always move toward convergence.)

  • Mini-batch gradient descent: $\theta_j := \theta_j - \alpha \frac{1}{10} \sum\limits_{k=i}^{i+9} (h_\theta(x^{(k)}) - y^{(k)}) x^{(k)}_j$

(Each update uses only a small batch of samples. Practical! See the sketch below.)
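The three variants differ only in how many rows feed each update; a minimal sketch in numpy (assuming `X`, `y`, `theta`, and the `gradient` function defined earlier):

def gd_step(X, y, theta, alpha, batch_idx=None):
    """One parameter update; batch_idx selects the rows used for the gradient.

    batch_idx = None        -> batch gradient descent (all samples)
    batch_idx = [i]         -> stochastic gradient descent (one sample)
    batch_idx = range(i, j) -> mini-batch gradient descent
    """
    Xb, yb = (X, y) if batch_idx is None else (X[batch_idx], y[batch_idx])
    return theta - alpha * gradient(Xb, yb, theta)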


Likelihood and log-likelihood

Likelihood function: $L(\theta) = \prod_{i=1}^{m}P(y^i \mid x^i; \theta) = \prod_{i=1}^{m}(h_\theta(x^i))^{y^i}(1-h_\theta(x^i))^{1-y^i}$

Log-likelihood function: $l(\theta) = \log L(\theta) = \sum\limits_{i=1}^m \left( y^i \log h_\theta(x^i) + (1-y^i) \log(1-h_\theta(x^i)) \right)$
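Negating $l(\theta)$ and averaging over the samples gives exactly the `cost` function implemented above; a quick check (a sketch reusing `model`):

def log_likelihood(X, y, theta):
    """l(θ): equals -len(X) * cost(X, y, theta)."""
    h = model(X, theta)
    return np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))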
