Solution: probability estimate formula (4.8) is

$$P(Y = c_{k}) = \frac{\sum_{i=1}^{N}I(y_{i} = c_{k})}{N},\quad k = 1,2,\dots,K$$
and probability estimate formula (4.9) is

$$P(X^{j} = a_{jl}\mid Y = c_{k}) = \frac{\sum_{i=1}^{N}I(x_{i}^{j} = a_{jl},\, y_{i} = c_{k})}{\sum_{i=1}^{N}I(y_{i} = c_{k})},\quad j = 1,\dots,n;\ l = 1,\dots,S_{j};\ k = 1,\dots,K$$
Suppose the training data set we are given is

$$T = \{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{N},y_{N})\}$$
First, we set

$$P(Y = c_{k}) = \theta_{k}\tag{1}$$
Then from formula (1) we get

$$P(Y \ne c_{k}) = 1 - \theta_{k}\tag{2}$$
Now suppose the number of samples in the training set $T$ with class label $c_{k}$ is $n_{k}$. Then the likelihood function for maximum likelihood estimation is

$$P(y_{1},y_{2},\dots,y_{N}\mid\theta_{k}) = \prod_{i=1}^{N}P(y_{i}\mid\theta_{k}) = \theta_{k}^{\,n_{k}}(1-\theta_{k})^{N-n_{k}}\tag{3}$$
To simplify the computation, take the logarithm of formula (3) to obtain the log-likelihood function

$$\ln P(y_{1},y_{2},\dots,y_{N}\mid\theta_{k}) = \ln \theta_{k}^{\,n_{k}}(1-\theta_{k})^{N-n_{k}} = n_{k}\ln\theta_{k} + (N-n_{k})\ln(1-\theta_{k})\tag{4}$$
Differentiating formula (4) with respect to $\theta_{k}$ and setting the derivative to zero, we have

$$\frac{\partial \ln P(y_{1},y_{2},\dots,y_{N}\mid\theta_{k})}{\partial \theta_{k}} = \frac{n_{k}}{\theta_{k}} - \frac{N-n_{k}}{1-\theta_{k}} = 0\tag{5}$$
Solving equation (5) gives

$$\theta_{k} = \frac{n_{k}}{N} = \frac{\sum_{i=1}^{N}I(y_{i} = c_{k})}{N},\quad k = 1,2,\dots,K$$
This proves probability estimate formula (4.8). Strictly speaking, we should also verify that the stationary point is a maximum, but the log-likelihood clearly attains its maximum there, so I'll skip that step, folks!
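The estimate $\theta_k = n_k/N$ is just a class frequency, so it is easy to check numerically. A minimal Python sketch (the function name and toy labels below are my own, not from the text):

```python
from collections import Counter

def mle_class_priors(labels):
    """MLE estimate of P(Y = c_k): count of each class divided by N (formula 4.8)."""
    n = len(labels)
    counts = Counter(labels)
    return {c: cnt / n for c, cnt in counts.items()}

labels = [1, 1, 1, -1, -1]  # toy training labels
print(mle_class_priors(labels))  # {1: 0.6, -1: 0.4}
```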
We first convert the conditional probability into a joint probability:

$$P(X^{j} = a_{jl}\mid Y = c_{k}) = \frac{P(X^{j} = a_{jl},\, Y = c_{k})}{P(Y = c_{k})}\tag{6}$$
Since we already derived $P(Y = c_{k})$ in Section 1.1, we now only need to estimate $P(X^{j} = a_{jl},\, Y = c_{k})$.
As before, we assume

$$P(X^{j} = a_{jl},\, Y = c_{k}) = \theta\tag{7}$$
Then the probability that at least one of the equalities in $(X^{j} = a_{jl},\, Y = c_{k})$ fails to hold is $1-\theta$.
So from the training data set $T$ we get the likelihood function

$$P((x_{1}^{j},y_{1}),(x_{2}^{j},y_{2}),\dots,(x_{N}^{j},y_{N})) = \prod_{i=1}^{N}P(x_{i}^{j},y_{i}) = \theta^{n}(1-\theta)^{N-n}\tag{8}$$
where $n$ is the number of samples satisfying $X^{j} = a_{jl},\, Y = c_{k}$, that is, $n = \sum_{i=1}^{N}I(x_{i}^{j} = a_{jl},\, y_{i} = c_{k})$.
Again for convenience, take the natural logarithm of formula (8) to get the log-likelihood, differentiate with respect to $\theta$ to find the stationary point, and verify that it is the unique maximum; this gives

$$\theta = \frac{\sum_{i=1}^{N}I(x_{i}^{j} = a_{jl},\, y_{i} = c_{k})}{N}$$
Combining this with formula (4.8) then yields (4.9).
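Concretely, dividing the joint count by the class count gives (4.9). A minimal sketch (function name and toy data are my own; `xs` holds the values of a single feature column $j$):

```python
def mle_conditional(xs, ys, a, c):
    """MLE estimate of P(X^j = a | Y = c) as the ratio of the joint count
    to the class count (formula 4.9)."""
    joint = sum(1 for x, y in zip(xs, ys) if x == a and y == c)
    class_count = sum(1 for y in ys if y == c)
    return joint / class_count

xs = ['S', 'M', 'M', 'S', 'S']  # toy feature values for column j
ys = [1, 1, 1, -1, -1]          # toy class labels
print(mle_conditional(xs, ys, 'S', 1))  # 1/3
```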
Solution:
The training set is still

$$T = \{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{N},y_{N})\}$$
First, the statement of formula (4.11):

$$P_{\lambda}(Y = c_{k}) = \frac{\sum_{i=1}^{N}I(y_{i} = c_{k}) + \lambda}{N + K\lambda}\tag{4.11}$$
We begin by assuming

$$P_{\lambda}(Y = c_{k}) = \theta\tag{1}$$
Next we use Bayesian estimation to estimate the value of the parameter $\theta$.
Since a probability takes values in $[0,1]$, we assume the prior distribution of $\theta$ is uniform, with density $p(\theta) = 1$.
We want to use the data set $T$ to estimate the distribution of the random variable $\theta$. The idea of Bayesian estimation is to start from a prior distribution $p(\theta)$ and then use the training data set $T$ to update that prior.
By Bayes' rule, the update is

$$P(\theta\mid T) = \frac{P(\theta, T)}{P(T)} = \frac{P(\theta)\,P(T\mid\theta)}{P(T)}\tag{2}$$
Since $P(T)$ is a fixed constant that does not depend on $\theta$ (even though we do not know its value), we need not compute it; we will pick a concrete value of $\theta$ by maximizing the posterior probability.
So we have

$$P(\theta\mid T) \propto P(\theta)\,P(T\mid\theta) = P(T\mid\theta)\tag{3}$$
That is, with the uniform prior, maximizing the posterior reduces to maximizing the likelihood.
$$P(Y \ne c_{k}) = 1 - \theta\tag{4}$$
So, folding the $\lambda$ smoothing terms in as pseudo-counts ($\lambda$ extra observations for each of the $K$ classes; note that with a strictly uniform prior the MAP estimate would simply reproduce the MLE $n_{k}/N$),

$$P(y_{1},y_{2},\dots,y_{N}\mid\theta) = \theta^{\,n_{k}+\lambda}(1-\theta)^{N-n_{k}+(K-1)\lambda}\tag{5}$$

where $n_{k}$ is the number of training samples with class label $c_{k}$. (The two exponents must sum to the total pseudo-count $N + K\lambda$.) Maximizing the posterior probability then gives

$$\theta' = \underset{\theta}{\arg\max}\; P(y_{1},y_{2},\dots,y_{N}\mid\theta) = \underset{\theta}{\arg\max}\; \theta^{\,n_{k}+\lambda}(1-\theta)^{N-n_{k}+(K-1)\lambda}\tag{6}$$

Taking the logarithm of the last expression in formula (6) and maximizing, we obtain

$$\theta = \frac{n_{k}+\lambda}{N+K\lambda} = \frac{\sum_{i=1}^{N}I(y_{i} = c_{k}) + \lambda}{N+K\lambda}\tag{7}$$
This proves (4.11)!
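In code, (4.11) only changes the counting by adding $\lambda$ to each class count and $K\lambda$ to the denominator. A minimal sketch (function name and toy labels are my own; $\lambda = 1$ is the usual Laplace smoothing):

```python
from collections import Counter

def smoothed_class_priors(labels, lam=1.0):
    """Bayesian (smoothed) estimate of P(Y = c_k), formula (4.11):
    (n_k + lambda) / (N + K * lambda)."""
    n = len(labels)
    counts = Counter(labels)
    k = len(counts)  # number of distinct classes K
    return {c: (cnt + lam) / (n + k * lam) for c, cnt in counts.items()}

labels = [1, 1, 1, -1, -1]
print(smoothed_class_priors(labels))  # 1 -> (3+1)/(5+2) = 4/7, -1 -> 3/7
```

Note the smoothed estimates still sum to 1, since the $K$ added $\lambda$'s in the numerators match the $K\lambda$ in the denominator.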
Here we start directly from the conditional probability. Let the number of samples in $T$ with class label $c_{k}$ be $n_{c_{k}}$, and collect the samples of class $c_{k}$ into a new data set

$$T_{c_{k}} = \{(x_{1c_{k}},c_{k}),(x_{2c_{k}},c_{k}),\dots,(x_{n_{c_{k}}c_{k}},c_{k})\}$$
By the properties of conditional probability,

$$P_{\lambda}(X^{j} = a_{jl}\mid Y = c_{k}, T) = P_{\lambda}(X^{j} = a_{jl}\mid T_{c_{k}})\tag{1}$$
Assume

$$P_{\lambda}(X^{j} = a_{jl}\mid T_{c_{k}}) = \theta\tag{2}$$
We again assume the prior distribution of $\theta$ is uniform, with density $p(\theta) = 1$.
So we have:

$$P(\theta\mid T_{c_{k}}) = \frac{P(T_{c_{k}}\mid\theta)\,P(\theta)}{P(T_{c_{k}})} \propto P(T_{c_{k}}\mid\theta)\,P(\theta)\tag{3}$$
As in the proof of formula (4.11), we maximize the posterior probability to obtain a concrete value of $\theta$.
So

$$\theta' = \underset{\theta}{\arg\max}\; \theta^{\,n+\lambda}(1-\theta)^{\,n_{c_{k}}+S_{j}\lambda-n-\lambda}\tag{4}$$
where $n$ is the number of samples satisfying both $x_{i}^{j} = a_{jl}$ and $y_{i} = c_{k}$, that is

$$n = \sum_{i=1}^{N}I(x_{i}^{j} = a_{jl},\, y_{i} = c_{k})\tag{5}$$

$$n_{c_{k}} = \sum_{i=1}^{N}I(y_{i} = c_{k})\tag{6}$$
Converting formula (4) to logarithms and solving, we get

$$\theta' = \frac{n+\lambda}{n_{c_{k}}+S_{j}\lambda}\tag{7}$$
Substituting formulas (5) and (6) into formula (7), we obtain

$$P_{\lambda}(X^{j} = a_{jl}\mid Y = c_{k}, T) = \frac{\sum_{i=1}^{N}I(x_{i}^{j} = a_{jl},\, y_{i} = c_{k}) + \lambda}{\sum_{i=1}^{N}I(y_{i} = c_{k}) + S_{j}\lambda}$$
This proves (4.10)!
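The smoothed conditional estimate is again just counting, now with $\lambda$ added to the joint count and $S_j\lambda$ to the class count. A minimal sketch (function name and toy data are my own; $S_j$ must be supplied as the number of possible values of feature $j$):

```python
def smoothed_conditional(xs, ys, a, c, s_j, lam=1.0):
    """Bayesian (smoothed) estimate of P(X^j = a | Y = c), formula (4.10):
    (joint count + lambda) / (class count + S_j * lambda)."""
    joint = sum(1 for x, y in zip(xs, ys) if x == a and y == c)
    class_count = sum(1 for y in ys if y == c)
    return (joint + lam) / (class_count + s_j * lam)

xs = ['S', 'M', 'M', 'S', 'S']  # feature j takes S_j = 2 values: {'S', 'M'}
ys = [1, 1, 1, -1, -1]
print(smoothed_conditional(xs, ys, 'S', 1, s_j=2))  # (1+1)/(3+2) = 0.4
```

Unlike the MLE of (4.9), this never returns 0 for an unseen value, which is the point of the smoothing.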