Machine Learning: The Principle and Derivation of the EM Algorithm (the Three-Coin Model), with a Python Implementation

A Brief Introduction to the EM Algorithm
The EM algorithm consists of two steps, an E step and an M step, and is one of the most commonly used iterative algorithms.

This article mainly follows Li Hang's *Statistical Learning Methods*,
supplementing it with a full derivation of the three-coin model based on the EM principle.

1. The Principle of the EM Algorithm

1.1 Starting from an Example

The three-coin model
Suppose there are three coins, labeled A, B, and C, whose probabilities of landing heads are $\pi$, $p$, and $q$, respectively. Conduct the following coin-tossing experiment:
1. First toss coin A and use its result to select a coin: heads selects coin B, tails selects coin C;
2. Then toss the selected coin and record the outcome: heads as 1, tails as 0;
3. Independently repeat the trial $n$ times (here, $n=10$); the observations are:
$$1,\ 1,\ 0,\ 1,\ 0,\ 0,\ 1,\ 0,\ 1,\ 1$$
Suppose we can observe only the final result of each trial, not the tossing process. How do we estimate the probability of heads for each of the three coins, i.e., the parameters of the three-coin model?
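Before estimating anything, it helps to see how such data arise. Below is a minimal simulation sketch of the experiment; the values `pi_true`, `p_true`, and `q_true` are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" parameters, for illustration only
pi_true, p_true, q_true = 0.4, 0.6, 0.5

def toss_once():
    # Step 1: toss coin A (heads with probability pi_true)
    use_b = rng.random() < pi_true
    # Step 2: toss the selected coin; record heads as 1, tails as 0
    heads_prob = p_true if use_b else q_true
    return int(rng.random() < heads_prob)

y_sim = [toss_once() for _ in range(10)]  # Step 3: n = 10 independent trials
print(y_sim)  # only these 0/1 outcomes are observed; the A tosses stay hidden
```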

For a single observation, the three-coin model can be written as:
$$\begin{aligned} p(y_j|\theta) &= \sum_z p(y_j, z|\theta) \\ &= \sum_z p(z|\theta)\, p(y_j|z,\theta) \\ &= \pi p^{y_j} (1-p)^{1-y_j} + (1-\pi)\, q^{y_j} (1-q)^{1-y_j} \end{aligned}$$

Here $y_j$ is the $j$-th observation (1 or 0), the random variable $z$ is a latent variable representing the unobserved result of tossing coin A, and $\theta=(\pi,p,q)$ are the model parameters.
For convenience, write the observed data as $y=(y_1,y_2,\dots,y_n)^T$ and the latent data as $z=(z_1,z_2,\dots,z_n)^T$. The likelihood of the observed data is:
$$p(y|\theta) = \sum_z p(z|\theta)\, p(y|z,\theta)$$

that is:

$$p(y|\theta) = \prod_{j=1}^n \left[\pi p^{y_j} (1-p)^{1-y_j} + (1-\pi)\, q^{y_j} (1-q)^{1-y_j}\right] \tag{1}$$

This problem has no closed-form analytic solution; it can only be solved iteratively. Below we first derive the EM algorithm, then return to this problem.

1.2 Derivation of the EM Algorithm
1.2.1 Jensen's Inequality

First, what is a convex function?
Roughly speaking, a function whose second derivative is positive is convex; an upward-opening quadratic is the typical example.

Given a convex function $f(x)$, points $\{x_i\}$ in its domain, and corresponding weights $\{\lambda_i\}$ satisfying $\lambda_i > 0$ and $\sum_i \lambda_i = 1$,
Jensen's inequality states:

$$f\Big(\sum_i \lambda_i x_i\Big) \le \sum_i \lambda_i f(x_i)$$

For $x>0$, the second derivative of $-\log(x)$ is $\frac{1}{x^2} > 0$, so Jensen's inequality applies to $-\log(x)$.
When applying it to $\log(x)$, which is concave, the direction of the inequality is reversed.
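As a quick numerical sanity check of the inequality, here is a sketch with arbitrarily chosen points and weights:

```python
import numpy as np

x = np.array([0.5, 2.0, 4.0])      # points in the domain of -log
lam = np.array([0.2, 0.3, 0.5])    # weights: positive, summing to 1

f = lambda t: -np.log(t)           # convex for t > 0
lhs = f(np.dot(lam, x))            # f(sum_i lam_i * x_i)
rhs = np.dot(lam, f(x))            # sum_i lam_i * f(x_i)
print(lhs <= rhs)                  # True: Jensen for the convex -log
print(-lhs >= -rhs)                # equivalently, reversed for the concave log
```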

1.2.2 Deriving the EM Algorithm

To solve a problem of the form (1), we work with the log-likelihood, i.e., we maximize:
$$L(\theta) = \log p(y|\theta) = \log\Big(\sum_z p(y|z,\theta)\, p(z|\theta)\Big)$$

We attack this iteratively. Suppose the estimate of $\theta$ after the $i$-th iteration is $\theta^{(i)}$. Since we want to increase the log-likelihood, we seek $\theta$ with $L(\theta) > L(\theta^{(i)})$ and try to make the improvement as large as possible:

$$L(\theta) - L(\theta^{(i)}) = \log\Big(\sum_z p(y|z,\theta)\, p(z|\theta)\Big) - \log p(y|\theta^{(i)})$$

Applying Jensen's inequality yields a lower bound:

$$\begin{aligned} L(\theta) - L(\theta^{(i)}) &= \log \Big( \sum_z p(y|z,\theta)\, p(z|\theta) \Big) - \log p(y|\theta^{(i)}) \\ &= \log \Big( \sum_z p(z|y,\theta^{(i)}) \, \frac{p(y|z,\theta)\, p(z|\theta)}{p(z|y,\theta^{(i)})} \Big) - \log p(y|\theta^{(i)}) \\ &\ge \sum_z p(z|y,\theta^{(i)}) \log \frac{p(y|z,\theta)\, p(z|\theta)}{p(z|y,\theta^{(i)})} - \log p(y|\theta^{(i)}) \\ &= \sum_z p(z|y,\theta^{(i)}) \log \frac{p(y|z,\theta)\, p(z|\theta)}{p(z|y,\theta^{(i)})} - \sum_z p(z|y,\theta^{(i)}) \log p(y|\theta^{(i)}) \\ &= \sum_z p(z|y,\theta^{(i)}) \log \frac{p(y|z,\theta)\, p(z|\theta)}{p(z|y,\theta^{(i)})\, p(y|\theta^{(i)})} \end{aligned}$$

J ( θ ) = L ( θ ) − L ( θ i ) J(\theta) =L(\theta) - L(\theta^i) J(θ)=L(θ)L(θi),则求取的是:

θ ( i + 1 ) = arg ⁡ max ⁡ θ J ( θ ) \theta^{(i+1)} = \arg \max_\theta J(\theta) θ(i+1)=argθmaxJ(θ)

J ( θ ) = { ∑ z p ( z ∣ y , θ i ) log ⁡ [ p ( y ∣ z , θ ) p ( z ∣ θ ) ] } − { ∑ z p ( z ∣ y , θ i ) log ⁡ [ p ( z ∣ y , θ i ) p ( y ∣ θ i ) ] } J(\theta) = \{ \sum_z p(z|y,\theta^i) \log [{p(y|z,\theta) p(z|\theta)}] \}- \{ \sum_z p(z|y,\theta^i) \log [{p(z|y,\theta^i) p(y|\theta^i)}] \} J(θ)={zp(zy,θi)log[p(yz,θ)p(zθ)]}{zp(zy,θi)log[p(zy,θi)p(yθi)]}

Since the second term contains no $\theta$:

$$\theta^{(i+1)} = \arg \max_\theta \sum_z p(z|y,\theta^{(i)}) \log \big[ p(y|z,\theta)\, p(z|\theta) \big]$$

Because:

$$p(y|z,\theta)\, p(z|\theta) = p(y,z|\theta)$$

we define:

$$Q(\theta, \theta^{(i)}) = \sum_z p(z|y,\theta^{(i)}) \log p(y,z|\theta) \tag{2}$$

so that:

$$\theta^{(i+1)} = \arg \max_\theta Q(\theta, \theta^{(i)}) \tag{3}$$

Summary of the EM algorithm:
E step (find the latent-variable posterior $p(z|y,\theta^{(i)})$): given the observed data $y$ and the current parameter estimate $\theta^{(i)}$, compute the conditional distribution of the latent variable $z$;
M step: treating that distribution as known, find the $\theta$ that maximizes $Q(\theta,\theta^{(i)})$.
Repeat the E step and M step until convergence, as in the generic loop sketched below.
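In code form, the iteration is a minimal generic loop. The following is a sketch, not library code; `e_step`, `m_step`, and `log_likelihood` are hypothetical callables that a concrete model would supply:

```python
def em(y, theta, e_step, m_step, log_likelihood, tol=1e-8, max_iter=100):
    """Generic EM loop sketch: alternate E and M steps until the
    log-likelihood stops improving."""
    ll_old = log_likelihood(y, theta)
    for _ in range(max_iter):
        posterior = e_step(y, theta)      # E step: p(z | y, theta^(i))
        theta = m_step(y, posterior)      # M step: argmax_theta Q(theta, theta^(i))
        ll_new = log_likelihood(y, theta)
        if abs(ll_new - ll_old) < tol:    # converged
            break
        ll_old = ll_new
    return theta
```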

1.3 Returning to the Three-Coin Model
1.3.1 The Latent Variable and the Complete-Data Log-Likelihood

We already know:

$$p(y|\theta) = \prod_{j=1}^n \left[\pi p^{y_j} (1-p)^{1-y_j} + (1-\pi)\, q^{y_j} (1-q)^{1-y_j}\right]$$

y j y_j yj来自掷硬币B的概率为 μ j \mu_j μj, 则来自C的概率为 1 − μ j 1-\mu_j 1μj,且 μ j ∈ { 0 , 1 } , j = 1 , 2 , . . . , n \mu_j \in \{0,1\},j=1,2,...,n μj{0,1},j=1,2,...,n。即参数 μ \mu μ为模型的隐变量。

The complete-data likelihood can then be written as:

$$p(y,\mu|\theta) = \prod_{j=1}^n \big[ \pi p^{y_j} (1-p)^{1-y_j} \big]^{\mu_j} \big[ (1-\pi)\, q^{y_j} (1-q)^{1-y_j} \big]^{1-\mu_j}$$

The corresponding log-likelihood is:

$$\log p(y,\mu|\theta) = \sum_{j=1}^n \Big\{ \mu_j \big[\log \pi + y_j \log p + (1-y_j) \log (1-p)\big] + (1-\mu_j) \big[\log(1-\pi) + y_j \log q + (1-y_j)\log(1-q)\big] \Big\}$$

1.3.2 E Step: Determining the $Q$ Function

Since EM is iterative, write the parameter estimate at the $i$-th iteration as $\theta^{(i)}=(\pi^{(i)}, p^{(i)}, q^{(i)})$. Because the latent variable $\mu_j$ marks whether observation $j$ came from coin B, its posterior expectation at iteration $i+1$ is:

$$\mu_{j}^{(i+1)} = \frac {\pi^{(i)} (p^{(i)})^{y_j} (1-p^{(i)})^{1-y_j}} {\pi^{(i)} (p^{(i)})^{y_j} (1-p^{(i)})^{1-y_j} + (1- \pi^{(i)}) (q^{(i)})^{y_j} (1-q^{(i)})^{1-y_j}}$$
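In NumPy this update is a single vectorized pass. A sketch, assuming `y` is a 0/1 NumPy array and `pi_i`, `p_i`, `q_i` are the current estimates:

```python
import numpy as np

def e_step(y, pi_i, p_i, q_i):
    """Posterior probability that each observation came from coin B."""
    b = pi_i * p_i ** y * (1 - p_i) ** (1 - y)        # path through coin B
    c = (1 - pi_i) * q_i ** y * (1 - q_i) ** (1 - y)  # path through coin C
    return b / (b + c)                                # mu_j^(i+1), elementwise
```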

Now form $Q$:

$$Q(\theta, \theta^{(i)}) = \sum_z p(z|y,\theta^{(i)}) \log p(y,z|\theta) = E_z\big[\log p(y,z|\theta) \,\big|\, y, \theta^{(i)}\big]$$

μ j ( i + 1 ) \mu_{j}^{(i+1)} μj(i+1)带入则可以得到:

$$Q(\theta, \theta^{(i)}) = \sum_{j=1}^n \Big\{ \mu_{j}^{(i+1)} \big[\log \pi + y_j \log p + (1-y_j) \log (1-p)\big] + (1-\mu_{j}^{(i+1)}) \big[\log(1-\pi) + y_j \log q + (1-y_j)\log(1-q)\big] \Big\}$$
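To make the formula concrete, here is a small hypothetical helper that evaluates $Q$ with the responsibilities `mu` held fixed; it is mainly useful for verifying numerically that each M step really increases $Q$:

```python
import numpy as np

def q_value(y, mu, pi, p, q):
    """Evaluate Q(theta, theta^(i)) with responsibilities mu held fixed (a sketch)."""
    term_b = mu * (np.log(pi) + y * np.log(p) + (1 - y) * np.log(1 - p))
    term_c = (1 - mu) * (np.log(1 - pi) + y * np.log(q) + (1 - y) * np.log(1 - q))
    return np.sum(term_b + term_c)
```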

1.3.3 M Step

With the $Q$ function in hand, we maximize over the parameters:

$$\theta^{(i+1)} = \arg \max_\theta Q(\theta, \theta^{(i)})$$

1. Solving for $\pi$:

$$\frac{\partial Q(\theta, \theta^{(i)})}{\partial \pi} = \sum_{j=1}^n \Big[\mu_{j}^{(i+1)} \frac{1}{\pi} - (1-\mu_{j}^{(i+1)}) \frac {1}{1-\pi}\Big]$$

Setting the right-hand side to zero to find the extremum:

$$\sum_{j=1}^n \Big[\mu_{j}^{(i+1)} \frac{1}{\pi} - (1-\mu_{j}^{(i+1)}) \frac {1}{1-\pi}\Big]=0$$

Multiplying both sides by $\pi(1-\pi)$:

$$\sum_{j=1}^n \big[\mu_{j}^{(i+1)} (1-\pi) - (1-\mu_{j}^{(i+1)}) \pi\big]=0$$

$$\sum_{j=1}^n \big(\mu_{j}^{(i+1)} - \pi\big)=0$$

$$\sum_{j=1}^n \mu_{j}^{(i+1)} - n \pi=0$$

hence:

$$\pi^{(i+1)} = \frac {1}{n}\sum_{j=1}^n \mu_{j}^{(i+1)}$$

2. Next, solving for $p$:

$$\frac{\partial Q(\theta, \theta^{(i)})}{\partial p} = \sum_{j=1}^n \mu_{j}^{(i+1)} \Big[y_j \frac{1}{p} - (1-y_j) \frac {1}{1-p}\Big]$$

Setting the right-hand side to zero:

$$\sum_{j=1}^n \mu_{j}^{(i+1)} \Big[y_j \frac{1}{p} - (1-y_j) \frac {1}{1-p}\Big] = 0$$

Multiplying both sides by $p(1-p)$:

$$\sum_{j=1}^n \mu_{j}^{(i+1)} \big[y_j (1-p) - (1-y_j) p\big] = 0$$

$$\sum_{j=1}^n \big[\mu_{j}^{(i+1)} y_j - \mu_{j}^{(i+1)} p\big] = 0$$

hence:

$$p^{(i+1)} = \frac {\sum_{j=1}^n \mu_{j}^{(i+1)} y_j}{\sum_{j=1}^n \mu_{j}^{(i+1)}}$$

3. Finally, the same procedure yields $q$:

$$q^{(i+1)} = \frac {\sum_{j=1}^n (1-\mu_{j}^{(i+1)}) y_j}{\sum_{j=1}^n (1-\mu_{j}^{(i+1)})}$$
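These three closed-form updates translate directly into NumPy. A sketch, with `mu` the responsibility vector from the E-step sketch above:

```python
import numpy as np

def m_step(y, mu):
    """Closed-form M-step updates for the three-coin model."""
    pi_new = mu.mean()                             # (1/n) * sum_j mu_j
    p_new = (mu * y).sum() / mu.sum()              # weighted heads rate for coin B
    q_new = ((1 - mu) * y).sum() / (1 - mu).sum()  # weighted heads rate for coin C
    return pi_new, p_new, q_new
```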

1.3.4 Parameter Space

1. Model parameters
$\pi$: the probability that coin A lands heads; a single float in this model
$p$: the probability that coin B lands heads; a single float in this model
$q$: the probability that coin C lands heads; a single float in this model
2. Latent variable
$\mu$: which coin, B or C, each observation ultimately came from; a one-dimensional vector
$\mu=(\mu_1, \mu_2,\dots,\mu_n)$, where $\mu_j$ is the probability that the $j$-th toss used coin B.

1.4 Convergence of the EM Algorithm

To prove that the EM algorithm converges, it suffices to show that the observed-data likelihood $p(y|\theta^{(i)})$ is monotonically non-decreasing:

$$p(y|\theta^{(i+1)}) \ge p(y|\theta^{(i)})$$

Proof:
Since, for any value of $z$:

$$p(y|\theta) = \frac {p(y,z|\theta)}{p(z|y,\theta)}$$

taking logarithms gives $\log p(y|\theta) = \log p(y,z|\theta) - \log p(z|y,\theta)$ for every $z$. Since the left-hand side does not depend on $z$, taking the expectation of both sides under $p(z|y,\theta^{(i)})$ yields:

$$\log p(y|\theta) = \underbrace{\sum_z p(z|y,\theta^{(i)}) \log p(y,z|\theta)}_{Q(\theta,\,\theta^{(i)})} - \underbrace{\sum_z p(z|y,\theta^{(i)}) \log p(z|y,\theta)}_{H(\theta,\,\theta^{(i)})}$$

Evaluating this at $\theta^{(i+1)}$ and at $\theta^{(i)}$ and subtracting:

$$\log p(y|\theta^{(i+1)}) - \log p(y|\theta^{(i)}) = \big[ Q(\theta^{(i+1)},\theta^{(i)}) - Q(\theta^{(i)},\theta^{(i)}) \big] - \big[ H(\theta^{(i+1)},\theta^{(i)}) - H(\theta^{(i)},\theta^{(i)}) \big]$$

The first bracket satisfies $Q(\theta^{(i+1)}, \theta^{(i)})- Q(\theta^{(i)}, \theta^{(i)}) \ge 0$, because $\theta^{(i+1)}$ is chosen precisely to maximize $Q(\cdot\,,\theta^{(i)})$. For the second bracket, Jensen's inequality for the concave $\log$ gives:

$$\begin{aligned} H(\theta^{(i+1)},\theta^{(i)}) - H(\theta^{(i)},\theta^{(i)}) &= \sum_z p(z|y,\theta^{(i)}) \log \frac { p(z|y,\theta^{(i+1)}) } { p(z|y,\theta^{(i)})} \\ & \le \log \sum_z p(z|y,\theta^{(i)}) \, \frac { p(z|y,\theta^{(i+1)}) } { p(z|y,\theta^{(i)})} \\ & = \log \sum_z p(z|y,\theta^{(i+1)}) = \log 1 = 0 \end{aligned}$$

That is, the second bracket is at most zero, so $\log p(y|\theta^{(i+1)}) - \log p(y|\theta^{(i)}) \ge 0$.
This completes the proof; since the likelihood sequence is non-decreasing and bounded above, it converges.
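The monotonicity is also easy to check empirically: evaluate the observed-data log-likelihood of equation (1) after every iteration and confirm it never decreases. A sketch reusing the `e_step` and `m_step` helpers from the earlier sketches:

```python
import numpy as np

def log_likelihood(y, pi, p, q):
    """Observed-data log-likelihood: the logarithm of equation (1)."""
    mix = pi * p ** y * (1 - p) ** (1 - y) + (1 - pi) * q ** y * (1 - q) ** (1 - y)
    return np.sum(np.log(mix))

y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
pi, p, q = 0.4, 0.6, 0.5                  # an arbitrary starting point
prev = log_likelihood(y, pi, p, q)
for _ in range(20):
    mu = e_step(y, pi, p, q)              # E step (sketch above)
    pi, p, q = m_step(y, mu)              # M step (sketch above)
    cur = log_likelihood(y, pi, p, q)
    assert cur >= prev - 1e-12            # monotone, up to floating-point noise
    prev = cur
```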

2 A Python Implementation of the Three-Coin Model

2.1 Model Implementation
```python
import numpy as np
np.random.seed(0)


class ThreeCoinsModel(object):
    def __init__(self, n_epoch=5):
        """
        Solve the three-coin model with the EM algorithm.
        :param n_epoch: number of iterations
        """
        self.n_epoch = n_epoch
        self.params = {'pi': None, 'p': None, 'q': None, 'mu': None}

    def __init_params(self, n):
        """
        Randomly initialize the parameters.
        :param n: number of observations
        :return:
        """
        self.params = {'pi': np.random.rand(1),
                       'p': np.random.rand(1),
                       'q': np.random.rand(1),
                       'mu': np.random.rand(n)}
        # self.params = {'pi': [0.5],
        #                'p': [0.5],
        #                'q': [0.5],
        #                'mu': np.random.rand(n)}

    def E_step(self, y, n):
        """
        E step: update the latent variable mu.
        :param y: observations
        :param n: number of observations
        :return:
        """
        pi = self.params['pi'][0]
        p = self.params['p'][0]
        q = self.params['q'][0]
        for i in range(n):
            # posterior probability that observation i came from coin B
            b = pi * pow(p, y[i]) * pow(1 - p, 1 - y[i])
            c = (1 - pi) * pow(q, y[i]) * pow(1 - q, 1 - y[i])
            self.params['mu'][i] = b / (b + c)

    def M_step(self, y, n):
        """
        M step: update the model parameters.
        :param y: observations
        :param n: number of observations
        :return:
        """
        mu = self.params['mu']
        self.params['pi'][0] = sum(mu) / n
        self.params['p'][0] = sum([mu[i] * y[i] for i in range(n)]) / sum(mu)
        self.params['q'][0] = sum([(1 - mu[i]) * y[i] for i in range(n)]) / sum([1 - mu_i for mu_i in mu])

    def fit(self, y):
        """
        Entry point: run EM for n_epoch iterations.
        :param y: observations
        :return:
        """
        n = len(y)
        self.__init_params(n)
        print(0, self.params['pi'], self.params['p'], self.params['q'])
        for i in range(self.n_epoch):
            self.E_step(y, n)
            self.M_step(y, n)
            print(i + 1, self.params['pi'], self.params['p'], self.params['q'])
```
2.2 Test Results
```python
def run_three_coins_model():
    y = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
    tcm = ThreeCoinsModel()
    tcm.fit(y)


if __name__ == '__main__':
    run_three_coins_model()
```

Running it produces:

```
0 [0.5488135] [0.71518937] [0.60276338]
1 [0.54076424] [0.65541668] [0.53474516]
2 [0.54076424] [0.65541668] [0.53474516]
3 [0.54076424] [0.65541668] [0.53474516]
4 [0.54076424] [0.65541668] [0.53474516]
5 [0.54076424] [0.65541668] [0.53474516]
```
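EM only guarantees convergence to a stationary point of the likelihood, so the answer depends on the random initialization; the commented-out block in `__init_params` hints at trying fixed starting values instead. One might probe this sensitivity as follows (a sketch):

```python
def run_with_seeds():
    y = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
    for seed in [0, 1, 2]:      # different random initializations
        np.random.seed(seed)
        print('seed =', seed)
        ThreeCoinsModel().fit(y)
```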

References:
Li Hang, *Statistical Learning Methods* (《统计学习方法》).
