Bayes' theorem is a theorem about the conditional (and marginal) probabilities of random events A and B, where $P(A \mid B)$ denotes the probability that A occurs given that B has occurred.
Mathematically, Bayes' formula follows from the multiplication rule and the law of total probability. The derivation is as follows:
From the definition of conditional probability, $P(B_i \mid A) = \frac{P(A B_i)}{P(A)}$. Applying the multiplication rule to the numerator and the law of total probability to the denominator gives:
$$P(A B_i) = P(B_i)\, P(A \mid B_i)$$
$$P(A) = \sum_{j=1}^{n} P(B_j)\, P(A \mid B_j)$$
Substituting these into the original expression yields Bayes' formula:
$$P(B_i \mid A) = \frac{P(B_i)\, P(A \mid B_i)}{\sum_{j=1}^{n} P(B_j)\, P(A \mid B_j)}$$
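As a quick sanity check of the formula, here is a minimal numerical sketch for a two-event partition; the prior and likelihood values are made up purely for illustration:

# Minimal numerical sketch of Bayes' formula; the prior and likelihood
# values below are made up purely for illustration.
priors = [0.3, 0.7]        # P(B_1), P(B_2)
likelihoods = [0.8, 0.1]   # P(A|B_1), P(A|B_2)

# Denominator: law of total probability, P(A) = sum_j P(B_j) * P(A|B_j)
p_a = sum(p * l for p, l in zip(priors, likelihoods))

# Posterior P(B_i|A) for each i
posteriors = [p * l / p_a for p, l in zip(priors, likelihoods)]
print(posteriors)          # [0.774..., 0.225...]; note they sum to 1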
Naive Bayes is a classification method based on Bayes' theorem together with a conditional independence assumption on the features.
Notation: the output class space is $y = \{c_1, c_2, \cdots, c_K\}$, and the input feature vector is $x$.
The derivation requires the conditional distribution to satisfy the following conditional independence assumption:
$$P(X = x \mid Y = c_k) = P(X^{(1)} = x^{(1)}, \cdots, X^{(n)} = x^{(n)} \mid Y = c_k) = \prod_{j=1}^{n} P(X^{(j)} = x^{(j)} \mid Y = c_k) \tag{1}$$
When classifying with naive Bayes, for an input $x$ we use the learned model to compute the posterior distribution $P(Y = c_k \mid X = x)$ and output the class with the largest posterior probability. The posterior is computed via Bayes' formula:
$$P(Y = c_k \mid X = x) = \frac{P(X = x \mid Y = c_k)\, P(Y = c_k)}{\sum_k P(X = x \mid Y = c_k)\, P(Y = c_k)} \tag{2}$$
Combining equations (1) and (2) gives:
$$P(Y = c_k \mid X = x) = \frac{P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)}{\sum_k P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)}, \quad k = 1, 2, \cdots, K$$
The classifier's final decision can therefore be expressed as:
$$y = f(x) = \arg\max_{c_k} \frac{P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)}{\sum_k P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)} \tag{3}$$
Observe that the denominator in (3) is the same for every $c_k$, so the classifier is equivalent to $y = \arg\max_{c_k} P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)$.
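A small sketch with made-up scores confirms that dropping the shared denominator cannot change which class attains the maximum:

# Illustrative (made-up) unnormalized scores
# P(Y=c_k) * prod_j P(X^(j)=x^(j) | Y=c_k) for three classes
scores = {'c1': 0.020, 'c2': 0.045, 'c3': 0.010}

denom = sum(scores.values())                        # shared denominator in (3)
posteriors = {c: s / denom for c, s in scores.items()}

# The argmax over the normalized posteriors equals the argmax over raw scores
assert max(posteriors, key=posteriors.get) == max(scores, key=scores.get)
print(max(scores, key=scores.get))                  # c2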
The derivation showing that this decision rule also minimizes the expected risk is omitted here.
In naive Bayes there are two main ways to estimate these probabilities: (1) maximum likelihood estimation and (2) Bayesian estimation. The formulas are stated here without proof.
The maximum likelihood estimate of the prior $P(Y = c_k)$ is:
$$P(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k)}{N}, \quad k = 1, 2, \cdots, K$$
Suppose the $j$-th feature $x^{(j)}$ takes values in the set $\{a_{j1}, a_{j2}, \ldots, a_{jS_j}\}$. The maximum likelihood estimate of the conditional probability $P(X^{(j)} = a_{jl} \mid Y = c_k)$ is:
$$P(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = c_k)}{\sum_{i=1}^{N} I(y_i = c_k)}$$
$$j = 1, 2, \cdots, n; \quad l = 1, 2, \cdots, S_j; \quad k = 1, 2, \cdots, K$$
Maximum likelihood estimation yields probability estimates from the observed data, but some of those estimates can be exactly 0, which then corrupts the subsequent classification computation (a single zero factor zeroes out the whole product). Bayesian estimation solves this problem by introducing $\lambda \geq 0$ into the formulas (usually $\lambda = 1$, i.e., Laplace smoothing). The Bayesian estimate of the conditional probability is:
$$P_\lambda(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = c_k) + \lambda}{\sum_{i=1}^{N} I(y_i = c_k) + S_j \lambda}$$
Similarly, the Bayesian estimate of the prior probability is:
$$P_\lambda(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k) + \lambda}{N + K \lambda}$$
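To see the problem and the fix side by side, here is a minimal sketch; the toy sample and the value-set size $S_j = 3$ are made up for illustration:

# Toy data, made up: within one class c_k, feature x^(1) takes values
# from {'a', 'b', 'c'} (so S_j = 3), but 'c' never occurs in the sample
samples = ['a', 'a', 'b', 'a', 'b']
S_j, lam = 3, 1.0

def mle(value):
    # Maximum likelihood estimate: relative frequency
    return samples.count(value) / len(samples)

def bayes_estimate(value):
    # Bayesian estimate with smoothing parameter lambda
    return (samples.count(value) + lam) / (len(samples) + S_j * lam)

print(mle('c'))             # 0.0 -> zeroes out the whole product in the classifier
print(bayes_estimate('c'))  # 0.125 -> every estimate stays strictly positive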
With these estimates in hand, the naive Bayes algorithm can be summarized as follows.
Step 1: From the input training data $T = \{(x_1, y_1), (x_2, y_2), \cdots, (x_N, y_N)\}$, compute the corresponding probability estimates (the prior and the conditional distributions):
$$P(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k)}{N}, \quad k = 1, 2, \cdots, K$$
$$P(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = c_k)}{\sum_{i=1}^{N} I(y_i = c_k)}$$
$$j = 1, 2, \cdots, n; \quad l = 1, 2, \cdots, S_j; \quad k = 1, 2, \cdots, K$$
Step 2: For the instance $x$ to be classified, compute its posterior distribution:
$$P(Y = c_k \mid X = x) = \frac{P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)}{\sum_k P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)}, \quad k = 1, 2, \cdots, K$$
Step 3: Output the class of $x$:
$$y = \arg\max_{c_k} P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)$$
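To make the three steps concrete, here is a minimal from-scratch sketch; the tiny training set is made up purely for illustration, and the $\lambda = 1$ Bayesian estimates are used so that no probability is zero:

from collections import Counter

# Toy training data, made up for illustration: each sample has two
# discrete features, and the label y is '-1' or '1'
X = [('1', 'S'), ('1', 'M'), ('1', 'M'), ('2', 'S'), ('2', 'L'), ('2', 'L')]
y = ['-1', '-1', '1', '-1', '1', '1']
lam = 1.0  # lambda = 1 (Laplace smoothing)

classes = sorted(set(y))
n_features = len(X[0])
values = [sorted({row[j] for row in X}) for j in range(n_features)]  # S_j values per feature

# Step 1: estimate the prior and the conditional probabilities
N, K = len(y), len(classes)
prior = {c: (y.count(c) + lam) / (N + K * lam) for c in classes}
cond = {}
for c in classes:
    rows = [row for row, label in zip(X, y) if label == c]
    for j in range(n_features):
        counts = Counter(row[j] for row in rows)
        for v in values[j]:
            cond[(j, v, c)] = (counts[v] + lam) / (len(rows) + len(values[j]) * lam)

# Steps 2 and 3: score each class for a new instance and output the argmax
# (assumes x contains only feature values seen in training)
def classify(x):
    scores = {c: prior[c] for c in classes}
    for c in classes:
        for j, v in enumerate(x):
            scores[c] *= cond[(j, v, c)]
    return max(scores, key=scores.get)

print(classify(('2', 'S')))  # prints '-1' for this toy data

The NLTK NaiveBayesClassifier used in the examples below implements the same decision rule, with its own internal smoothing of the estimates.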
The material above is drawn mainly from Chapter 4 of 李航's 《统计学习方法》. The entire derivation rests on the assumption that the input variables are conditionally independent; if we instead allow probabilistic dependencies among them, the model becomes a Bayesian network, which is left for future study.
Books, historical documents, social media, email, and other text-based communication all contain large amounts of information. Extracting features from a text dataset for classification is not easy, yet general-purpose text-mining methods have been developed for exactly this.
Here I make my own modest attempt, applying the powerful yet surprisingly simple naive Bayes algorithm to a few practical problems. When computing the probabilities used for classification, naive Bayes simplifies the computation by assuming the features are mutually independent.
The tool used here is Python, together with the natural language processing library NLTK and its bundled corpora.
We use NLTK's names corpus and the naive Bayes algorithm for binary gender classification. The main idea is a heuristic: the last few characters of a name indicate gender. For example, a name ending in "la" is very likely female, such as "Angela" or "Layla", while a name ending in "im" is very likely male, such as "Tim" or "Jim". Once we decide how many trailing characters to use, we can run the experiment. The following code shows how to classify gender:
import random
from nltk.corpus import names
from nltk import NaiveBayesClassifier
from nltk.classify import accuracy as nltk_accuracy

# Extract features from the input word: its last num_letters characters
def gender_features(word, num_letters=2):
    return {'feature': word[-num_letters:].lower()}

if __name__ == '__main__':
    # Build labeled names from the corpus
    labeled_names = ([(name, 'male') for name in names.words('male.txt')] +
                     [(name, 'female') for name in names.words('female.txt')])
    random.seed(7)
    random.shuffle(labeled_names)
    input_names = ['Leonardo', 'Amy', 'Sam']

    # Search the parameter space: number of trailing letters used as the feature
    for i in range(1, 5):
        print('\nNumber of letters:', i)
        featuresets = [(gender_features(n, i), gender) for (n, gender) in labeled_names]
        train_set, test_set = featuresets[500:], featuresets[:500]
        classifier = NaiveBayesClassifier.train(train_set)

        # Print the classifier's accuracy on the held-out test set
        print('Accuracy ==>', str(100 * nltk_accuracy(classifier, test_set)) + str('%'))

        # Predict the gender of the new inputs
        for name in input_names:
            print(name, '==>', classifier.classify(gender_features(name, i)))

Sample output:
Number of letters: 1
Accuracy ==> 76.2%
Leonardo ==> male
Amy ==> female
Sam ==> male
Number of letters: 2
Accuracy ==> 78.60000000000001%
Leonardo ==> male
Amy ==> female
Sam ==> male
Number of letters: 3
Accuracy ==> 76.6%
Leonardo ==> male
Amy ==> female
Sam ==> female
Number of letters: 4
Accuracy ==> 70.8%
Leonardo ==> male
Amy ==> female
Sam ==> female
Sentiment analysis is the process of determining whether a given piece of text is positive or negative. In some scenarios, "neutral" is included as a third option. Sentiment analysis is commonly used to discover how people feel about a particular topic, and to gauge user sentiment in many settings such as marketing campaigns, social media, and e-commerce.
Main idea: we again classify with NLTK's naive Bayes classifier. The feature-extraction function essentially takes all unique words; however, the NLTK classifier expects its input in dictionary form, so the features are stored as a dictionary for the classifier object to read. After splitting the data into training and test sets, we train the classifier to separate sentences into positive and negative.
Looking at the most informative words, we see, for example, that "outstanding" indicates a positive review while "insulting" indicates a negative one. This is interesting because it shows that individual words can carry sentiment.
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews

# NLTK classifiers expect features as a dict; mark every word as present
def extract_features(word_list):
    return dict([(word, True) for word in word_list])

if __name__ == '__main__':
    # Load the positive and negative movie reviews
    positive_fileids = movie_reviews.fileids('pos')
    negative_fileids = movie_reviews.fileids('neg')
    features_positive = [(extract_features(movie_reviews.words(fileids=[f])),
                          'Positive') for f in positive_fileids]
    features_negative = [(extract_features(movie_reviews.words(fileids=[f])),
                          'Negative') for f in negative_fileids]

    # Split into training and test sets (80/20)
    threshold_factor = 0.8
    threshold_positive = int(threshold_factor * len(features_positive))
    threshold_negative = int(threshold_factor * len(features_negative))
    features_train = features_positive[:threshold_positive] + features_negative[:threshold_negative]
    features_test = features_positive[threshold_positive:] + features_negative[threshold_negative:]
    print("\nNumber of training datapoints:", len(features_train))
    print("Number of test datapoints:", len(features_test))

    # Train the naive Bayes classifier
    classifier = NaiveBayesClassifier.train(features_train)
    print("\nAccuracy of the classifier:", nltk.classify.util.accuracy(classifier, features_test))

    print("\nTop 10 most informative words:")
    for item in classifier.most_informative_features()[:10]:
        print(item[0])

    # Classify a few sample reviews
    input_reviews = [
        "but it worked great. Dried my hair in about 15 minutes",
        "Excellent value for travel dryer on a budget",
        "Great hair dryer.",
        "The direction was terrible and the story was all over the place"
    ]
    print("\nPredictions:")
    for review in input_reviews:
        print("\nReview:", review)
        probdist = classifier.prob_classify(extract_features(review.split()))
        pred_sentiment = probdist.max()
        print("Predicted sentiment:", pred_sentiment)
        print("Probability:", round(probdist.prob(pred_sentiment), 2))

Sample output:
Number of training datapoints: 1600
Number of test datapoints: 400
Accuracy of the classifier: 0.735
Top 10 most informative words:
outstanding
insulting
vulnerable
ludicrous
uninvolving
avoids
astounding
fascination
symbol
animators
Predictions:
Review: but it worked great. Dried my hair in about 15 minutes
Predicted sentiment: Negative
Probability: 0.63
Review: Excellent value for travel dryer on a budget
Predicted sentiment: Negative
Probability: 0.62
Review: Great hair dryer.
Predicted sentiment: Positive
Probability: 0.54
Review: The direction was terrible and the story was all over the place
Predicted sentiment: Negative
Probability: 0.63
Throughout this learning process, I keep realizing things too late: only when programming machine learning did I discover my mathematics was lacking, and only when studying the theory did I discover I could not yet build these things from scratch. Fortunately, there is still time to start over. I hope that after the graduate entrance examination, coming back to chew through these books will bring new gains and a deeper understanding.