Machine Learning Notes 4: Naive Bayes

Naive Bayes is a classification method based on Bayes' theorem and the assumption of conditional independence between features. Given a training data set, it first learns the joint probability distribution of the input and output under the conditional independence assumption; then, for a given input x, it uses Bayes' theorem to find the output y with the largest posterior probability. It is widely applied to text classification, such as categorizing online news and filtering spam email.

Assume $P(X,Y)$ is the joint probability distribution of $X$ and $Y$. The training data set $T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$ is generated i.i.d. from $P(X,Y)$. Naive Bayes learns the joint distribution $P(X,Y)$ from the training data. Concretely, it learns the prior probability distribution
$$P(Y=c_k),\quad k=1,2,\cdots,K$$
where $K$ is the number of class labels, and the conditional probability distribution
$$P(X=x \mid Y=c_k)=P(X^{(1)}=x^{(1)},\cdots,X^{(n)}=x^{(n)} \mid Y=c_k),\quad k=1,2,\cdots,K$$
where $n$ is the dimension of $X$.
From these two, the joint probability distribution follows. Under the conditional independence assumption (which can cost some classification accuracy, since features may in fact be correlated, e.g. the volume and weight of a watermelon), the conditional probability distribution simplifies to
$$P(X=x \mid Y=c_k)=\prod_{j=1}^{n} P(X^{(j)}=x^{(j)} \mid Y=c_k)$$
By Bayes' theorem, the posterior probability can be written as
$$P(Y=c_k \mid X=x)=\frac{P(X=x \mid Y=c_k)\,P(Y=c_k)}{\sum_k P(Y=c_k)\,P(X=x \mid Y=c_k)}=\frac{P(Y=c_k)\prod_{j=1}^{n}P(X^{(j)}=x^{(j)} \mid Y=c_k)}{\sum_k P(Y=c_k)\prod_{j=1}^{n}P(X^{(j)}=x^{(j)} \mid Y=c_k)}$$
Thus the naive Bayes classifier can be expressed as:
$$y=f(x)=\arg\max_{c_k}\frac{P(Y=c_k)\prod_{j=1}^{n}P(X^{(j)}=x^{(j)} \mid Y=c_k)}{\sum_k P(Y=c_k)\prod_{j=1}^{n}P(X^{(j)}=x^{(j)} \mid Y=c_k)}$$
Since the denominator above is the same for every $c_k$, this simplifies to
$$y=f(x)=\arg\max_{c_k} P(Y=c_k)\prod_{j=1}^{n}P(X^{(j)}=x^{(j)} \mid Y=c_k)$$
To apply this rule, the prior $P(Y=c_k)$ and the conditional probabilities $P(X=x \mid Y=c_k)$ can be estimated by maximum likelihood. The maximum likelihood estimate of the prior is
$$P(Y=c_k)=\frac{\sum_{i=1}^{N} I(y_i=c_k)}{N}$$
and the maximum likelihood estimate of the conditional probability is
$$P(X^{(j)}=a_{jl} \mid Y=c_k)=\frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},\,y_i=c_k)}{\sum_{i=1}^{N} I(y_i=c_k)}$$
where $x_i^{(j)}$ is the $j$-th feature of the $i$-th sample, and $a_{jl}$ is the $l$-th possible value of the $j$-th feature. Maximum likelihood estimation can produce probability estimates of exactly zero; the remedy is Bayesian estimation. Concretely,
$$P(X^{(j)}=a_{jl} \mid Y=c_k)=\frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},\,y_i=c_k)+\lambda}{\sum_{i=1}^{N} I(y_i=c_k)+S_j\lambda}$$
where $\lambda \ge 0$ and $S_j$ is the number of possible values of the $j$-th feature. When $\lambda=1$, this is called Laplace smoothing.
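To make the estimation and decision rule concrete, here is a minimal sketch in Python. The toy data and all variable names are made up purely for illustration; it computes the maximum likelihood prior, the Laplace-smoothed conditionals, and the argmax prediction exactly as in the formulas above:

from collections import Counter, defaultdict

# Toy training set with two discrete features and labels in {-1, 1}.
# (Both the data and the names here are invented for illustration.)
X = [(1, 'S'), (1, 'M'), (1, 'M'), (2, 'S'), (2, 'L'), (2, 'L'), (3, 'L'), (3, 'M')]
y = [-1, -1, 1, -1, 1, 1, 1, 1]

lam = 1.0                                  # lambda = 1: Laplace smoothing
classes = sorted(set(y))
N = len(y)

# Prior P(Y = c_k) by maximum likelihood: class count / N.
prior = {c: y.count(c) / N for c in classes}

# Counts for the conditional P(X^(j) = a_jl | Y = c_k).
cond = defaultdict(Counter)                # cond[(j, c)][value] -> count
for xi, yi in zip(X, y):
    for j, v in enumerate(xi):
        cond[(j, yi)][v] += 1

# S_j: number of distinct values of feature j, used in the smoothed estimate.
S = [len({xi[j] for xi in X}) for j in range(len(X[0]))]

def p_cond(j, v, c):
    # Smoothed conditional: (count + lambda) / (class count + S_j * lambda)
    return (cond[(j, c)][v] + lam) / (y.count(c) + S[j] * lam)

def predict(x):
    # argmax over c_k of P(Y = c_k) * prod_j P(X^(j) = x^(j) | Y = c_k)
    best_c, best_score = None, -1.0
    for c in classes:
        score = prior[c]
        for j, v in enumerate(x):
            score *= p_cond(j, v, c)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

print(predict((2, 'S')))                   # classify an unseen point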

When a feature is discrete, the prior and conditional probabilities can be estimated simply by counting how often each feature value appears within each class in the training samples. When a feature is continuous, a probability density function can be used instead. For discrete feature values, Python's machine learning library scikit-learn provides multinomial naive Bayes (MultinomialNB) and Bernoulli naive Bayes (BernoulliNB); for continuous feature values, it provides Gaussian naive Bayes (GaussianNB).

  • Gaussian naive Bayes (GaussianNB). When features are continuous variables, each feature is assumed to follow a Gaussian distribution within each class; the mean $\mu_y$ and variance $\sigma_y^2$ are computed from the samples, and the probability follows:
    $$P(x_i \mid y)=\frac{1}{\sqrt{2\pi\sigma_y^2}}\exp\!\left(-\frac{(x_i-\mu_y)^2}{2\sigma_y^2}\right)$$
  • Multinomial naive Bayes (MultinomialNB). Probability estimates are smoothed. With $N_{yx_i}$ the number of samples of class $y$ with feature value $x_i$, $N_y$ the number of samples of class $y$, and $n$ the number of features:
    $$P(x_i \mid y)=\frac{N_{yx_i}+\alpha}{N_y+n\alpha}$$
  • Bernoulli naive Bayes (BernoulliNB). Like MultinomialNB, it is mainly used for classification with discrete features. The difference is that MultinomialNB uses occurrence counts as feature values, while BernoulliNB uses binary/boolean features. A short usage sketch of all three variants follows this list.
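As a rough sketch of how the three variants are called (GaussianNB, MultinomialNB, and BernoulliNB are the actual scikit-learn classes; the toy arrays below are invented purely for illustration):

import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

rng = np.random.RandomState(0)
y = np.array([0, 0, 1, 1, 1])              # toy labels, for illustration only

# GaussianNB: continuous features; per-class mean and variance are fitted.
X_cont = rng.randn(5, 3)
print(GaussianNB().fit(X_cont, y).predict(X_cont[:2]))

# MultinomialNB: non-negative counts (e.g. word counts); alpha is the
# smoothing parameter from the formula above.
X_counts = rng.randint(0, 5, size=(5, 3))
print(MultinomialNB(alpha=1.0).fit(X_counts, y).predict(X_counts[:2]))

# BernoulliNB: binary/boolean features (inputs are thresholded by binarize).
X_bin = (X_counts > 0).astype(int)
print(BernoulliNB().fit(X_bin, y).predict(X_bin[:2]))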

Below is a short piece of code using MultinomialNB; the data set is the 20 newsgroups data set fetched through sklearn:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# Load the 20 newsgroups text data set (downloaded on first use).
news = fetch_20newsgroups(subset='all')
x_train, y_train = news['data'], news['target']

# Convert the raw documents into a sparse matrix of word counts.
prepro = CountVectorizer()
x_train = prepro.fit_transform(x_train)

# Fit multinomial naive Bayes with a small smoothing parameter alpha.
nBS = MultinomialNB(alpha=0.01)
nBS.fit(x_train, y_train)

# Predict back on the training data and report training accuracy.
y_train_predict = nBS.predict(x_train)
print(metrics.accuracy_score(y_train, y_train_predict))
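Note that the snippet above predicts back on the same data it was trained on, so the printed number is a training accuracy. A sketch of evaluating on the held-out test split instead, reusing the fitted vectorizer and the variable names from above:

# Evaluate on the held-out test split rather than the training data.
news_test = fetch_20newsgroups(subset='test')
x_test = prepro.transform(news_test['data'])    # reuse the fitted vocabulary
y_test_predict = nBS.predict(x_test)
print(metrics.accuracy_score(news_test['target'], y_test_predict))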

References:
李航 (Li Hang), 《统计学习方法》 (Statistical Learning Methods)
https://blog.csdn.net/kancy110/article/details/72763276
https://blog.csdn.net/qq_35044025/article/details/79322169
https://blog.csdn.net/gamer_gyt/article/details/51253445
https://www.cnblogs.com/pinard/p/6074222.html

Reposted from blog.csdn.net/zhennang1427/article/details/85115404