The maximum entropy model can be applied to multi-class classification, so here we directly use the original ten-class handwritten digit MNIST data, i.e. mnist.csv from https://github.com/phdsky/ML/tree/master/data.
The formulas in the book looked a bit cluttered at first, so I planned to briefly walk through the derivation of the maximum entropy model here. Having re-read it, though, I was wrong: the book's derivation and proofs are actually written very clearly.
Important concepts in the book's derivation and proofs:
The goal of maximum entropy learning is to use the maximum entropy principle to select the best model from the training data; the principle tends to treat events whose probabilities are not otherwise determined as equally probable.
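As a quick illustration of that preference (a toy example of my own, not from the book): among distributions over four outcomes with no constraint other than summing to 1, the uniform one has the highest entropy.

```python
# Toy illustration (my own): the uniform distribution maximizes entropy
# when nothing else is constrained.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))  # epsilon guards against log(0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.386 = log(4), the maximum
print(entropy([0.70, 0.10, 0.10, 0.10]))  # ~0.940, strictly smaller
```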
The feature (indicator) function indicates a relationship between the input $x$ and the output $y$; here $x$ and $y$ are in one-to-one correspondence. What differs from other classifiers is that the $x$ in the maximum entropy model's $f(x, y)$ is a single feature, not an $n$-dimensional feature vector. We therefore need to attach a distinguishing tag to each feature dimension, e.g. rewrite $X=(x_0, x_1, x_2, ..., x_n)$ as $X=(0\text{-}x_0, 1\text{-}x_1, 2\text{-}x_2, ..., n\text{-}x_n)$, which expresses a relationship between the value in each dimension of $x$ and the output $y$. In this example $X$ denotes the 784-dimensional data of a single sample. A small sketch of this tagging follows the definition below.

$$f(x, y) = \begin{cases} 1, & x \text{ and } y \text{ satisfy some relation} \\ 0, & \text{otherwise} \end{cases}$$
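A minimal sketch of the dimension tagging described above (toy values of my own; it mirrors the rebuid_features helper in the code later on):

```python
# Toy sketch: prepend the dimension index so each (feature, label) pair
# can index a single indicator function f(x, y).
sample = [0, 255, 128]  # a toy 3-pixel "image"
tagged = [str(i) + '_' + str(v) for i, v in enumerate(sample)]
print(tagged)           # ['0_0', '1_255', '2_128']
# With label y = 7, f('1_255', 7) = 1 fires exactly when a sample has
# value 255 in dimension 1 and its label is 7.
```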
Next, two important expectations are introduced; the relationship between them forms the constraint conditions of the maximum entropy model:
The expectation of the feature function $f(x, y)$ with respect to the empirical distribution $\tilde{P}(X, Y)$, denoted $E_{\tilde{p}}(f_i)$:

$$E_{\tilde{p}}(f_i) = \sum_{x,y}\tilde{P}(x,y)f(x,y)$$

The expectation of the feature function $f(x, y)$ with respect to the model $P(Y|X)$ and the empirical distribution $\tilde{P}(X)$, denoted $E_p(f_i)$:

$$E_p(f_i) = \sum_{x,y}\tilde{P}(x)P(y|x)f(x,y)$$
If the model is able to learn from the training data, we can assume that these two expectations are equal; this gives one constraint condition for the model, and with multiple feature functions there are multiple constraint conditions. A small sketch of the empirical expectation follows.
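Because $f$ is an indicator, the empirical expectation $E_{\tilde{p}}(f_i)$ reduces to simple counting; here is a minimal sketch on toy data of my own (the real code does the same thing in cal_Vxy and cal_Pxy):

```python
# Toy sketch: E_p~(f_i) = count(x, y) / N for the (feature, label) pair
# that the i-th indicator function picks out.
from collections import defaultdict

samples = [(['0_1', '1_0'], 0), (['0_1', '1_1'], 1), (['0_0', '1_1'], 1)]
N = len(samples)

Vxy = defaultdict(int)
for features, label in samples:
    for feature in features:
        Vxy[(feature, label)] += 1

Ep_tilde = {xy: count / N for xy, count in Vxy.items()}
print(Ep_tilde[('0_1', 0)])  # 1/3: feature '0_1' co-occurs with label 0 once
```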
The maximum entropy model is then defined as follows: among the models that satisfy the constraint set above,

$$C \equiv \left\{P \in \mathcal{P} \mid E_p(f_i) = E_{\tilde{p}}(f_i),\ i=1,2,...,n \right\}$$

the model that maximizes the conditional entropy defined on the conditional probability distribution $P(Y|X)$ is called the maximum entropy model:

$$H(P) = -\sum_{x,y}\tilde{P}(x)P(y|x)\log P(y|x)$$
The above can be restated as the following optimization problem:

$$\max_{P \in C} \quad H(P) = -\sum_{x,y}\tilde{P}(x)P(y|x)\log P(y|x)$$

$$\begin{aligned} \text{s.t.}\quad &E_p(f_i) = E_{\tilde{p}}(f_i),\ i=1,2,...,n \\ &\sum_y P(y|x) = 1 \end{aligned}$$
With the problem stated this way, we can construct the Lagrangian and solve it through its dual (6.14 ~ 6.25); it can also be proved (6.26 ~ 6.27) that maximizing the dual function in maximum entropy learning is equivalent to maximum likelihood estimation of the maximum entropy model.
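For reference, restating the Lagrangian from the definitions above (my own paraphrase of that construction), the constrained maximization becomes an unconstrained problem over

$$L(P, w) = -H(P) + w_0\Big(1 - \sum_y P(y|x)\Big) + \sum_{i=1}^n w_i\big(E_{\tilde{p}}(f_i) - E_p(f_i)\big)$$

and minimizing $L(P, w)$ over $P$ for fixed $w$ yields the exponential form of the model given below.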
Since the learning problem for maximum entropy has thus been turned into maximizing the log-likelihood function, or equivalently the dual function, the previous statement lets us further cast learning as maximum likelihood estimation (or regularized maximum likelihood estimation) of the model. The maximum entropy model can then be written in the following more general form:
The maximum entropy model is:

$$\begin{aligned} P(y|x) &= \frac{1}{Z_w(x)}\exp\left(\sum_{i=1}^n w_i f_i(x,y)\right) \\ &= \frac{\exp\left(\sum\limits_{i=1}^n w_i f_i(x,y)\right)}{\sum\limits_y \exp\left(\sum\limits_{i=1}^n w_i f_i(x,y)\right)} \end{aligned}$$

where:

$$Z_w(x) = \sum_y \exp\left(\sum_{i=1}^n w_i f_i(x,y)\right)$$
The log-likelihood function is:

$$L(w) = \sum_{x,y}\tilde{P}(x,y)\sum_{i=1}^n w_i f_i(x,y) - \sum_x\tilde{P}(x)\log Z_w(x)$$
The model parameters are learned by maximum likelihood estimation: find the maximizer $\hat{w}$ of the log-likelihood function and substitute it into the model expression to obtain the maximum entropy model. A small sketch of evaluating the model for a given $w$ follows.
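To make the exponential form concrete, here is a minimal sketch (toy weights of my own) of evaluating $P_w(y|x)$ for one sample: sum the weights of the (feature, label) pairs that fire for each candidate label, exponentiate, and normalize by $Z_w(x)$.

```python
# Toy sketch of P_w(y|x): unnormalized score per label, then normalize.
import numpy as np

weights = {('0_1', 0): 0.5, ('0_1', 1): -0.2, ('1_0', 0): 0.1}
labels = [0, 1]
sample = ['0_1', '1_0']

scores = {y: np.exp(sum(weights.get((f, y), 0.0) for f in sample)) for y in labels}
Zx = sum(scores.values())
Pyx = {y: s / Zx for y, s in scores.items()}
print(Pyx)  # label 0 gets the larger probability here (~0.69 vs ~0.31)
```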
Learning for both the logistic regression model and the maximum entropy model reduces to an optimization problem whose objective is the likelihood function, usually solved by an iterative algorithm. From the optimization point of view the objective here has very good properties: it is a smooth convex function, so a global optimum is guaranteed. The expression above can be solved with many methods; common ones are improved iterative scaling (IIS), gradient descent, Newton's method, and quasi-Newton methods, with Newton and quasi-Newton methods generally converging faster.
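For the gradient-based methods, the gradient of the log-likelihood follows directly from the definitions above (my own restatement, not quoted from the book):

$$\frac{\partial L(w)}{\partial w_i} = \sum_{x,y}\tilde{P}(x,y)f_i(x,y) - \sum_{x}\tilde{P}(x)\sum_y P(y|x)f_i(x,y) = E_{\tilde{p}}(f_i) - E_p(f_i)$$

so the gradient vanishes exactly when the constraints $E_p(f_i) = E_{\tilde{p}}(f_i)$ are satisfied.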
The idea behind improved iterative scaling is fairly simple:
By introducing $f^\sharp(x,y) = \sum_i f_i(x,y) = M$, the book shows that the increase of the likelihood at each parameter update has a tight lower bound, so the optimum of the function can certainly be reached; the detailed proof is in (6.30 ~ 6.33).
The figure below shows the flow of the improved iterative scaling (IIS) algorithm, which is also the algorithm implemented later:
Even after understanding the model derivation and proofs, the IIS steps above were still a bit confusing the first time through, so before coding I listed the variables the algorithm needs and the expressions to implement (the draft is a little scribbly); in the figure below, the left side shows the variables to compute and the right side the expressions to implement:
The formulas used in the implementation are:
$$\begin{aligned} \delta_i &= \frac{1}{M}\log\frac{E_{\tilde{p}}(f_i)}{E_p(f_i)} \\ &= \frac{1}{M}\log\frac{\sum_{x,y}\tilde{P}(x,y)f(x,y)}{\sum_{x,y}\tilde{P}(x)P(y|x)f(x,y)} \end{aligned} \tag{1}$$

$$\left\{ \begin{aligned} E_{\tilde{p}}(f_i) &= \sum_{x,y}\tilde{P}(x,y)f(x,y) \\ E_p(f_i) &= \sum_{x,y}\tilde{P}(x)P(y|x)f(x,y) \end{aligned} \right. \tag{2}$$

$$\begin{aligned} P(y|x) &= \frac{1}{Z_w(x)}\exp\left(\sum_{i=1}^n w_i f_i(x,y)\right) \\ &= \frac{\exp\left(\sum\limits_{i=1}^n w_i f_i(x,y)\right)}{\sum\limits_y \exp\left(\sum\limits_{i=1}^n w_i f_i(x,y)\right)} \end{aligned} \tag{3}$$

$$Z_w(x) = \sum_y \exp\left(\sum_{i=1}^n w_i f_i(x,y)\right) \tag{4}$$
```python
# @Author: phd
# @Date: 2019/8/19
# @Site: github.com/phdsky
# @Description: NULL

import time
import logging
import numpy as np
import pandas as pd

from collections import defaultdict
from sklearn.model_selection import train_test_split


def log(func):
    def wrapper(*args, **kwargs):
        start_time = time.time()
        ret = func(*args, **kwargs)
        end_time = time.time()
        logging.debug('%s() cost %s seconds' % (func.__name__, end_time - start_time))
        return ret
    return wrapper


def calc_accuracy(y_pred, y_truth):
    assert len(y_pred) == len(y_truth)
    n = len(y_pred)
    hit_count = 0
    for i in range(0, n):
        if y_pred[i] == y_truth[i]:
            hit_count += 1

    print("Predicting accuracy %f" % (hit_count / n))


class maxEnt(object):
    def init_params(self, X_train, y_train):
        assert(len(X_train) == len(y_train))

        self.labels = set()
        self.cal_Vxy(X_train, y_train)

        self.N = len(X_train)  # Training set number
        self.n = len(self.Vxy)  # Feature counts
        self.M = 10000.0  # A constant value depends on training set
        self.iter = 500

        self.build_dict()
        self.cal_Pxy()  # Equals to Ep~fi

    def cal_Vxy(self, X_train, y_train):
        # defaultdict: Do not need to judge whether key is in dict or not
        self.Vxy = defaultdict(int)

        # Count the V(X=x, Y=y) feature counts in all samples
        for i in range(0, len(y_train)):
            sample = X_train[i]
            label = y_train[i]
            self.labels.add(label)

            for feature in sample:
                self.Vxy[(feature, label)] += 1

    def build_dict(self):
        # self.Vxy: key: (x, y) <----> value: feature counts
        # Use id key to index
        self.id2xy = {}
        self.xy2id = {}

        for id, xy in enumerate(self.Vxy):
            self.id2xy[id] = xy
            self.xy2id[xy] = id

    def cal_Pxy(self):
        self.Pxy = np.full((self.n, 1), 0.0, dtype=float)
        for id in range(0, self.n):
            xy = self.id2xy[id]
            feature_counts = self.Vxy[xy]
            self.Pxy[id] = feature_counts / float(self.N)

    def cal_Zx(self, sample):
        Zx = defaultdict(float)
        for label in self.labels:
            weights = 0.0
            for feature in sample:
                xy = (feature, label)
                if xy in self.xy2id:
                    id = self.xy2id[xy]
                    weights += self.weight[id]

            Zx[label] = np.exp(weights)

        return Zx

    def cal_Pyx(self, sample):
        Pyx = defaultdict(float)
        Zx = self.cal_Zx(sample)
        Zwx = sum(Zx.values())

        for key in Zx.keys():
            Pyx[key] = Zx[key] / Zwx

        return Pyx

    def cal_Epfi(self, X_train):
        Epfi = np.full((self.n, 1), 0.0, dtype=float)
        for sample in X_train:
            Pyx = self.cal_Pyx(sample)

            for feature in sample:
                for label in Pyx.keys():
                    xy = (feature, label)
                    if xy in self.xy2id:
                        id = self.xy2id[xy]
                        # Calculate P(y|x)*P~(x)f(x, y)
                        # += means every time calculate one to empirical distribution
                        Epfi[id] += Pyx[label] * (1 / self.N)

        return Epfi

    @log
    def train(self, X_train, y_train):
        self.init_params(X_train, y_train)
        self.weight = np.full((self.n, 1), 0.0, dtype=float)

        for it in range(0, self.iter):
            print("Iteration number: %d" % it)
            Epfi = self.cal_Epfi(X_train)
            delta = 1 / self.M * np.log(self.Pxy / Epfi)
            self.weight += delta

    @log
    def predict(self, X_test):
        n = len(X_test)
        predict_label = np.full(n, -1)

        for i in range(0, n):
            to_predict = X_test[i]
            Pyx = self.cal_Pyx(to_predict)
            max_prob = max(zip(Pyx.values(), Pyx.keys()))
            predict_label[i] = max_prob[-1]

        return predict_label


def rebuid_features(subsets):
    features = []
    for sample in subsets:
        feature = []
        for index, value in enumerate(sample):
            feature.append(str(index) + '_' + str(value))

        features.append(feature)

    return features


if __name__ == "__main__":
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    mnist_data = pd.read_csv("../data/mnist.csv")
    mnist_values = mnist_data.values

    sample_num = 5000
    images = mnist_values[:sample_num, 1::]
    labels = mnist_values[:sample_num, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.33, random_state=42
    )

    X_train = rebuid_features(subsets=X_train)
    X_test = rebuid_features(subsets=X_test)

    max_ent = maxEnt()

    print("Training max entropy model...")
    max_ent.train(X_train=X_train, y_train=y_train)
    print("Training done...")

    print("Testing on %d samples..." % len(X_test))
    y_predicted = max_ent.predict(X_test=X_test)

    calc_accuracy(y_pred=y_predicted, y_truth=y_test)
```
Code output:

```
/Users/phd/Softwares/anaconda3/bin/python /Users/phd/Desktop/ML/maxEnt/maxEnt.py
Training max entropy model...
Iteration number: 0
Iteration number: 1
Iteration number: 2
Iteration number: 3
Iteration number: 4
... (intermediate iterations omitted)
Iteration number: 499
Training done...
Testing on 1650 samples...
DEBUG:root:train() cost 53459.26920700073 seconds
DEBUG:root:predict() cost 19.44344210624695 seconds
Predicting accuracy 0.822424

Process finished with exit code 0
```
From the results we can see that training on only 5000 × 0.66 samples took nearly 15 hours... The algorithm's accuracy is acceptable, but it is far too time-consuming, which suggests the maximum entropy model is not very practical for high-dimensional data. Also, write the computation in vectorized form wherever possible, otherwise it runs even slower; a rough sketch of what that could look like follows.
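As an illustration of that last point (a reformulation of my own with assumed shapes, not a drop-in replacement for the class above): if the tagged features are given integer ids and the weights are stored as a (n_feature_ids, n_labels) matrix, then $P(y|x)$ for every sample and $E_p(f)$ for every weight become two matrix products per iteration.

```python
# Rough vectorization sketch (my own reformulation, shapes are assumptions):
# F is a sparse 0/1 indicator matrix of shape (n_samples, n_feature_ids),
# W is a dense weight matrix of shape (n_feature_ids, n_labels).
import numpy as np
from scipy.sparse import csr_matrix

def softmax(scores):
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def cal_Epfi_vectorized(F, W):
    Pyx = softmax(F @ W)             # (n_samples, n_labels), rows are P(y|x)
    return (F.T @ Pyx) / F.shape[0]  # E_p(f) for every (feature_id, label) pair

# Toy usage: 3 samples, 4 tagged feature ids, 2 labels
F = csr_matrix(np.array([[1, 0, 1, 0],
                         [0, 1, 1, 0],
                         [1, 0, 0, 1]]))
W = np.zeros((4, 2))
print(cal_Epfi_vectorized(F, W))     # with zero weights, P(y|x) is uniform
```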
Basically everything is covered in the model derivation section above.