Reposted from: https://blog.csdn.net/kancy110/article/details/72763276
scikit-learn provides three naive Bayes classifiers: GaussianNB (Gaussian naive Bayes), MultinomialNB (multinomial naive Bayes), and BernoulliNB (Bernoulli naive Bayes).
1. Gaussian naive Bayes: sklearn.naive_bayes.GaussianNB(priors=None)
① Building a simple model with the GaussianNB class
In [1]: import numpy as np
   ...: from sklearn.naive_bayes import GaussianNB
   ...: X = np.array([[-1, -1], [-2, -2], [-3, -3], [-4, -4], [-5, -5], [1, 1], [2, 2], [3, 3]])
   ...: y = np.array([1, 1, 1, 1, 1, 2, 2, 2])
   ...: clf = GaussianNB()  # priors=None by default
   ...: clf.fit(X, y)
   ...:
Out[1]: GaussianNB(priors=None)
② After fitting on the training set, inspect the estimator's attributes
In [2]: clf.priors  # no output: priors was left as None
In [3]: clf.set_params(priors=[0.625, 0.375])  # set the priors parameter
Out[3]: GaussianNB(priors=[0.625, 0.375])
In [4]: clf.priors  # returns the list of class prior probabilities
Out[4]: [0.625, 0.375]
In [5]: clf.class_prior_
Out[5]: array([ 0.625, 0.375])
In [6]: type(clf.class_prior_)
Out[6]: numpy.ndarray
In [7]: clf.class_count_
Out[7]: array([ 5., 3.])
In [8]: clf.theta_
Out[8]:
array([[-3., -3.],
[ 2., 2.]])
In [9]: clf.sigma_
Out[9]:
array([[ 2.00000001, 2.00000001],
[ 0.66666667, 0.66666667]])
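The attributes theta_ and sigma_ are simply the per-class feature means and variances of the training data. A minimal numpy-only sketch (no sklearn) reproducing them; note sklearn also adds a tiny epsilon to the variances, which is why sigma_ above shows 2.00000001 rather than exactly 2:

```python
import numpy as np

X = np.array([[-1, -1], [-2, -2], [-3, -3], [-4, -4], [-5, -5],
              [1, 1], [2, 2], [3, 3]], dtype=float)
y = np.array([1, 1, 1, 1, 1, 2, 2, 2])

# theta_: mean of each feature per class; sigma_: variance of each feature per class
theta = np.array([X[y == c].mean(axis=0) for c in [1, 2]])
var = np.array([X[y == c].var(axis=0) for c in [1, 2]])
print(theta)  # [[-3. -3.] [ 2.  2.]]
print(var)    # [[2. 2.] [0.66666667 0.66666667]]
```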
③ Methods
get_params(deep=True): return the estimator's parameters as a dict
In [10]: clf.get_params(deep=True)
Out[10]: {'priors': [0.625, 0.375]}
In [11]: clf.get_params()
Out[11]: {'priors': [0.625, 0.375]}
set_params(**params): set the estimator's parameters (here, priors)
In [3]: clf.set_params(priors=[ 0.625, 0.375])
Out[3]: GaussianNB(priors=[0.625, 0.375])
fit(X, y, sample_weight=None): fit the model; X is the feature matrix, y the class labels, and sample_weight an optional array of per-sample weights
In [12]: clf.fit(X, y, np.array([0.05, 0.05, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2]))  # give the samples different weights
Out[12]: GaussianNB(priors=[0.625, 0.375])
In [13]: clf.theta_
Out[13]:
array([[-3.375, -3.375],
[ 2. , 2. ]])
In [14]: clf.sigma_
Out[14]:
array([[ 1.73437501, 1.73437501],
[ 0.66666667, 0.66666667]])
For the weighted samples, the mean and variance of feature 1 under class label 1 are computed as:
mean = ((-1*0.05)+(-2*0.05)+(-3*0.1)+(-4*0.1)+(-5*0.1))/(0.05+0.05+0.1+0.1+0.1) = -3.375
variance = ((-1+3.375)**2*0.05+(-2+3.375)**2*0.05+(-3+3.375)**2*0.1+(-4+3.375)**2*0.1+(-5+3.375)**2*0.1)/(0.05+0.05+0.1+0.1+0.1) = 1.734375 (plus sklearn's tiny variance epsilon, hence 1.73437501)
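The weighted mean and variance above can be checked with numpy's np.average, which takes a weights argument (a quick numpy-only sketch):

```python
import numpy as np

# feature-1 values and sample weights for class 1, taken from the session above
x = np.array([-1.0, -2.0, -3.0, -4.0, -5.0])
w = np.array([0.05, 0.05, 0.1, 0.1, 0.1])

mean = np.average(x, weights=w)                # weighted mean
var = np.average((x - mean) ** 2, weights=w)   # weighted variance
print(mean, var)  # -3.375 1.734375
```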
partial_fit(X, y, classes=None, sample_weight=None): incremental fit on a batch of samples; classes must be passed on the first call
In [18]: import numpy as np
   ...: from sklearn.naive_bayes import GaussianNB
   ...: X = np.array([[-1, -1], [-2, -2], [-3, -3], [-4, -4], [-5, -5], [1, 1], [2, 2], [3, 3]])
   ...: y = np.array([1, 1, 1, 1, 1, 2, 2, 2])
   ...: clf = GaussianNB()  # priors=None by default
   ...: clf.partial_fit(X, y, classes=[1, 2], sample_weight=np.array([0.05, 0.05, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2]))
   ...:
Out[18]: GaussianNB(priors=None)
In [19]: clf.class_prior_
Out[19]: array([ 0.4, 0.6])
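With sample weights, class_prior_ is no longer the raw class frequency (5/8, 3/8) but the normalized sum of weights per class, which explains the [0.4, 0.6] above. A numpy-only check:

```python
import numpy as np

y = np.array([1, 1, 1, 1, 1, 2, 2, 2])
sw = np.array([0.05, 0.05, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2])

# class priors = normalized weighted counts per class
totals = np.array([sw[y == c].sum() for c in [1, 2]])
priors = totals / totals.sum()
print(priors)  # [0.4 0.6]
```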
In [20]: clf.predict([[-6,-6],[4,5]])
Out[20]: array([1, 2])
In [21]: clf.predict_proba([[-6,-6],[4,5]])
Out[21]:
array([[ 1.00000000e+00, 4.21207358e-40],
[ 1.12585521e-12, 1.00000000e+00]])
In [22]: clf.predict_log_proba([[-6,-6],[4,5]])
Out[22]:
array([[ 0.00000000e+00, -9.06654487e+01],
[ -2.75124782e+01, -1.12621024e-12]])
In [23]: clf.score([[-6,-6],[-4,-2],[-3,-4],[4,5]],[1,1,2,2])
Out[23]: 0.75
In [24]: clf.score([[-6,-6],[-4,-2],[-3,-4],[4,5]], [1,1,2,2], sample_weight=[0.3,0.2,0.4,0.1])
Out[24]: 0.59999999999999998
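With sample_weight, score returns the weight-normalized accuracy. The unweighted score of 0.75 means exactly one of the four test points is misclassified; consistency with the weighted score 0.6 implies it is the weight-0.4 sample [-3, -4] (true label 2, near the class-1 mean). A numpy-only sketch, assuming that prediction vector:

```python
import numpy as np

y_true = np.array([1, 1, 2, 2])
y_pred = np.array([1, 1, 1, 2])  # assumed from the session: only [-3, -4] is wrong
w = np.array([0.3, 0.2, 0.4, 0.1])

# weighted accuracy: weights of correct predictions / total weight
score = np.average(y_true == y_pred, weights=w)
print(score)  # 0.6
```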
2. Multinomial naive Bayes: sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None). Used mainly for classification with discrete features, e.g. word counts in text classification, where each feature value is an occurrence count.
Parameters:
alpha: float, optional, default 1.0; additive (Laplace/Lidstone) smoothing parameter
fit_prior: bool, optional, default True; whether to learn class prior probabilities from the data. If False, all classes share a uniform prior
class_prior: array-like of shape (n_classes,), default None; prior probabilities of the classes
① Building a simple model with MultinomialNB
In [2]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0)
   ...: clf.fit(X,y)
   ...:
Out[2]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
② Inspecting the fitted attributes
a. If class_prior is specified, then no matter whether fit_prior is True or False, class_log_prior_ is simply the log of class_prior.
In [4]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True,class_prior=[0.3,0.1,0.3,0.2])
   ...: clf.fit(X,y)
   ...: print(clf.class_log_prior_)
   ...: print(np.log(0.3),np.log(0.1),np.log(0.3),np.log(0.2))
   ...: clf1 = MultinomialNB(alpha=2.0,fit_prior=False,class_prior=[0.3,0.1,0.3,0.2])
   ...: clf1.fit(X,y)
   ...: print(clf1.class_log_prior_)
   ...:
[-1.2039728 -2.30258509 -1.2039728 -1.60943791]
-1.20397280433 -2.30258509299 -1.20397280433 -1.60943791243
[-1.2039728 -2.30258509 -1.2039728 -1.60943791]
b. If fit_prior=False and class_prior=None, every class gets the same prior probability, 1/N, where N is the number of classes.
In [5]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=False)
   ...: clf.fit(X,y)
   ...: print(clf.class_log_prior_)
   ...: print(np.log(1/4))
   ...:
[-1.38629436 -1.38629436 -1.38629436 -1.38629436]
-1.38629436112
c. If fit_prior=True and class_prior=None, each class's prior equals its sample count divided by the total number of samples.
In [6]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.fit(X,y)
   ...: print(clf.class_log_prior_)  # output ordered by class labels 1, 2, 3, 4
   ...: print(np.log(2/6),np.log(1/6),np.log(2/6),np.log(1/6))
   ...:
[-1.09861229 -1.79175947 -1.09861229 -1.79175947]
-1.09861228867 -1.79175946923 -1.09861228867 -1.79175946923
In [7]: clf.class_log_prior_
Out[7]: array([-1.09861229, -1.79175947, -1.09861229, -1.79175947])
In [8]: clf.intercept_
Out[8]: array([-1.09861229, -1.79175947, -1.09861229, -1.79175947])
In [9]: clf.feature_log_prob_
Out[9]:
array([[-2.01490302, -1.45528723, -1.2039728 , -1.09861229],
[-1.87180218, -1.31218639, -1.178655 , -1.31218639],
[-1.74919985, -1.43074612, -1.26369204, -1.18958407],
[-1.79175947, -1.38629436, -1.23214368, -1.23214368]])
Computation of the per-feature conditional probabilities, using class 1 as the example:
In [10]: print(np.log((1+1+2)/(1+2+3+4+1+3+4+4+4*2)), np.log((2+3+2)/(1+2+3+4+1+3+4+4+4*2)), np.log((3+4+2)/(1+2+3+4+1+3+4+4+4*2)), np.log((4+4+2)/(1+2+3+4+1+3+4+4+4*2)))
-2.01490302054 -1.45528723261 -1.20397280433 -1.09861228867
Conditional probability of a feature = (count of that feature in the class + alpha) / (total count of all features in the class + n_features * alpha). (The smoothing term in the denominator is the number of features times alpha; in this example it happens to coincide with the number of classes, 4.)
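The smoothing formula can be verified for class 1 with numpy alone (class-1 rows are [1,2,3,4] and [1,3,4,4], alpha = 2, n_features = 4):

```python
import numpy as np

alpha = 2.0
# per-feature counts summed over the two class-1 samples [1,2,3,4] and [1,3,4,4]
counts = np.array([2.0, 5.0, 7.0, 8.0])
n_features = counts.size

# smoothed log conditional probabilities, as in feature_log_prob_
log_prob = np.log((counts + alpha) / (counts.sum() + alpha * n_features))
print(log_prob)  # [-2.01490302 -1.45528723 -1.2039728  -1.09861229]
```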
In [11]: clf.coef_
Out[11]:
array([[-2.01490302, -1.45528723, -1.2039728 , -1.09861229],
[-1.87180218, -1.31218639, -1.178655 , -1.31218639],
[-1.74919985, -1.43074612, -1.26369204, -1.18958407],
[-1.79175947, -1.38629436, -1.23214368, -1.23214368]])
In [12]: clf.class_count_
Out[12]: array([ 2., 1., 2., 1.])
In [13]: clf.feature_count_
Out[13]:
array([[ 2., 5., 7., 8.],
[ 2., 5., 6., 5.],
[ 6., 9., 11., 12.],
[ 2., 4., 5., 5.]])
In [14]: print([(1+1),(2+3),(3+4),(4+4)])  # class 1 as an example
[2, 5, 7, 8]
③ Methods
In [15]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.fit(X,y)
   ...:
Out[15]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
In [16]: clf.get_params(True)
Out[16]: {'alpha': 2.0, 'class_prior': None, 'fit_prior': True}
In [17]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.partial_fit(X,y)
   ...: clf.partial_fit(X,y,classes=[1,2])
   ...:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
4 y = np.array([1,1,4,2,3,3])
5 clf = MultinomialNB(alpha=2.0,fit_prior=True)
----> 6 clf.partial_fit(X,y)
7 clf.partial_fit(X,y,classes=[1,2])
ValueError: classes must be passed on the first call to partial_fit.
In [18]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.partial_fit(X,y,classes=[1,2])
   ...: clf.partial_fit(X,y)
   ...:
Out[18]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
In [19]: clf.predict([[1,3,5,6],[3,4,5,4]])
Out[19]: array([1, 1])
In [22]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.fit(X,y)
   ...:
Out[22]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
In [23]: clf.predict_log_proba([[3,4,5,4],[1,3,5,6]])
Out[23]:
array([[-1.27396027, -1.69310891, -1.04116963, -1.69668527],
[-0.78041614, -2.05601551, -1.28551649, -1.98548389]])
In [1]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
   ...: clf.fit(X,y)
   ...:
Out[1]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
In [2]: clf.predict_proba([[3,4,5,4],[1,3,5,6]])
Out[2]:
array([[ 0.27972165, 0.18394676, 0.35304151, 0.18329008],
[ 0.45821529, 0.12796282, 0.27650773, 0.13731415]])
In [3]: clf.score([[3,4,5,4],[1,3,5,6]],[1,1])
Out[3]: 0.5
In [4]: clf.set_params(alpha=1.0)
Out[4]: MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
3. Bernoulli naive Bayes: sklearn.naive_bayes.BernoulliNB(alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None). Like multinomial naive Bayes, it is mainly used for classification with discrete features. The difference: MultinomialNB uses occurrence counts as feature values, while BernoulliNB works with binary/boolean features.
Parameters:
binarize: threshold for binarizing the feature values (the other parameters are the same as in MultinomialNB)
① Building a simple model with BernoulliNB
In [5]: import numpy as np
...: from sklearn.naive_bayes import BernoulliNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5]])
...: y = np.array([1,1,2])
...: clf = BernoulliNB(alpha=2.0,binarize = 3.0,fit_prior=True)
...: clf.fit(X,y)
...:
Out[5]: BernoulliNB(alpha=2.0, binarize=3.0, class_prior=None, fit_prior=True)
After binarization with binarize=3.0, the input is equivalent to the following X:
In [7]: X = np.array([[0,0,0,1],[0,0,1,1],[0,1,1,1]])
In [8]: X
Out[8]:
array([[0, 0, 0, 1],
[0, 0, 1, 1],
[0, 1, 1, 1]])
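The binarize threshold keeps values strictly greater than it; the same transform can be written in plain numpy:

```python
import numpy as np

X = np.array([[1, 2, 3, 4], [1, 3, 4, 4], [2, 4, 5, 5]])

# values strictly above the threshold 3.0 become 1, the rest 0
Xb = (X > 3.0).astype(int)
print(Xb)  # [[0 0 0 1] [0 0 1 1] [0 1 1 1]]
```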
② Inspecting the attributes after training
In [9]: clf.class_log_prior_
Out[9]: array([-0.40546511, -1.09861229])
In [10]: clf.feature_log_prob_
Out[10]:
array([[-1.09861229, -1.09861229, -0.69314718, -0.40546511],
       [-0.91629073, -0.51082562, -0.51082562, -0.51082562]])
How the result above is computed:
Let the four features of X be A1, A2, A3, A4 and the classes y1, y2. BernoulliNB models each binarized feature as P(Ai|y) = P(Ai=1|y)*Ai + (1-P(Ai=1|y))*(1-Ai); the attribute feature_log_prob_ stores the smoothed log P(Ai=1|y) = log((number of samples in the class with Ai=1 + alpha) / (number of samples in the class + 2*alpha)).
In [11]: import numpy as np
   ...: from sklearn.naive_bayes import BernoulliNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5]])
   ...: y = np.array([1,1,2])
   ...: clf = BernoulliNB(alpha=2.0,binarize = 3.0,fit_prior=True)
   ...: clf.fit(X,y)
   ...: print(clf.feature_log_prob_)
   ...: print([np.log((2+2)/(2+2*2))*0+np.log((0+2)/(2+2*2))*1, np.log((2+2)/(2+2*2))*0+np.log((0+2)/(2+2*2))*1, np.log((1+2)/(2+2*2))*0+np.log((1+2)/(2+2*2))*1, np.log((0+2)/(2+2*2))*0+np.log((2+2)/(2+2*2))*1])
   ...:
[[-1.09861229 -1.09861229 -0.69314718 -0.40546511]
 [-0.91629073 -0.51082562 -0.51082562 -0.51082562]]
[-1.0986122886681098, -1.0986122886681098, -0.69314718055994529, -0.40546510810816444]
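The class-1 row of feature_log_prob_ can be verified directly from the binarized class-1 samples [0,0,0,1] and [0,0,1,1] (alpha = 2, two samples in the class):

```python
import numpy as np

alpha = 2.0
# per-feature counts of 1s over the two binarized class-1 rows [0,0,0,1] and [0,0,1,1]
ones = np.array([0.0, 0.0, 1.0, 2.0])
n_samples = 2

# smoothed log P(feature=1 | class 1), as stored in feature_log_prob_
log_prob = np.log((ones + alpha) / (n_samples + 2 * alpha))
print(log_prob)  # [-1.09861229 -1.09861229 -0.69314718 -0.40546511]
```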
In [12]: clf.class_count_
Out[12]: array([ 2., 1.])
In [13]: clf.feature_count_
Out[13]:
array([[ 0., 0., 1., 2.],
[ 0., 1., 1., 1.]])
③ Methods: the same as those of MultinomialNB.