Chapter 9: The EM Algorithm

Starting with Chapter 9, the material being summarized changes in nature: Chapters 2-8 deal with classification problems, all of which are supervised learning, while the EM algorithm of Chapter 9 belongs to unsupervised learning. This post mainly summarizes the applications of the EM algorithm, how it handles problems, and the derivation of its underlying principle.

The EM Algorithm

The EM algorithm (expectation-maximization algorithm) is an iterative algorithm. A probability model may contain both observed variables and hidden (latent) variables. If all variables of the model are observed, then given the data the parameters can be estimated directly by maximum likelihood estimation or Bayesian estimation. When the model contains latent variables, however, the parameters can no longer be estimated so simply. For this situation, Dempster et al. proposed the EM algorithm in 1977, which alternates between an E-step (computing an expectation) and an M-step (performing a maximization).
Input: observed data $Y$, latent data $Z$, joint distribution $P(Y,Z|\theta)$, conditional distribution $P(Z|Y,\theta)$.
Output: model parameter $\theta$.

(1) Choose an initial value $\theta^{(0)}$ for the parameter and start iterating.

(2) E-step: let $\theta^{(i)}$ denote the estimate of $\theta$ at the $i$-th iteration. At the E-step of iteration $i+1$, compute
\[
Q(\theta,\theta^{(i)}) = E_Z\big[\ln P(Y,Z|\theta) \,\big|\, Y, \theta^{(i)}\big] = \sum_Z \ln P(Y,Z|\theta)\, P(Z|Y,\theta^{(i)})
\]
Here $P(Z|Y,\theta^{(i)})$ is the conditional distribution of the latent data $Z$ given the observed data $Y$ and the current parameter estimate $\theta^{(i)}$.

(3) M-step: find the $\theta$ that maximizes $Q(\theta,\theta^{(i)})$ and take it as the parameter estimate of iteration $i+1$:
\[
\theta^{(i+1)}=\mathop{\arg \max} \limits_{\theta} Q(\theta,\theta^{(i)})
\]

(4) Repeat steps (2) and (3) until convergence (convergence criterion: $\theta^{(i)}$ and $\theta^{(i+1)}$ are sufficiently close, or $Q(\theta^{(i+1)},\theta^{(i)})$ and $Q(\theta^{(i)},\theta^{(i-1)})$ are sufficiently close).

The function $Q(\theta,\theta^{(i)})$ is the core of the EM algorithm and is called the Q function.
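To make the alternation between the two steps concrete, here is a minimal Python sketch of the generic EM loop. The callables `e_step` and `m_step` are hypothetical placeholders that must be supplied for a specific model (a concrete Gaussian-mixture version appears later in this post).

```python
import numpy as np

def em(y, theta0, e_step, m_step, tol=1e-6, max_iter=1000):
    """Generic EM skeleton (theta represented as a 1-D numpy array).

    e_step(y, theta) -> posterior statistics of Z given Y and the current theta
    m_step(y, stats) -> new theta maximizing Q(theta, theta_i)
    Both callables are model-specific and must be supplied by the caller.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        stats = e_step(y, theta)                    # E-step: uses P(Z | Y, theta_i)
        theta_new = np.asarray(m_step(y, stats))    # M-step: argmax_theta Q(theta, theta_i)
        converged = np.max(np.abs(theta_new - theta)) < tol
        theta = theta_new
        if converged:                               # stop once estimates stop moving
            break
    return theta
```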

Derivation

The EM algorithm was stated above, but why does it approximately perform maximum likelihood estimation for the observed data? Below we derive the EM algorithm by approximately solving the problem of maximizing the log-likelihood of the observed data, which shows what the algorithm actually does.

A formula used in the derivation:

\[
\text{Jensen's inequality: } f\Big(\sum_i \alpha_i x_i\Big) \geqslant \sum_i \alpha_i f(x_i)
\]
where $f$ is a concave function, $\displaystyle \sum_i \alpha_i = 1$, and the $\alpha_i$ are weights with $0 \leqslant \alpha_i \leqslant 1$. The logarithm is concave, so the inequality applies to $\ln$.
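As a quick numerical sanity check (not part of the original derivation), the inequality can be verified for the logarithm with arbitrary positive points and weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=5)      # arbitrary positive points
alpha = rng.dirichlet(np.ones(5))       # weights: non-negative, summing to 1

lhs = np.log(np.sum(alpha * x))         # ln(sum_i alpha_i x_i)
rhs = np.sum(alpha * np.log(x))         # sum_i alpha_i ln(x_i)
assert lhs >= rhs                       # Jensen's inequality for the concave ln
print(lhs, rhs)
```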

First there is a parameter vector $\theta$ to be estimated, observed data $Y=(y_1,y_2,\cdots,y_N)$, and latent data $Z=(z_1,z_2,\cdots,z_N)$. When estimating $\theta$, the log-likelihood is
\[
L(\theta) = \ln P(Y|\theta) = \ln \sum_Z P(Y,Z|\theta) = \ln \Big( \sum_Z P(Z|\theta) P(Y|Z,\theta) \Big)
\]
Suppose the estimate of $\theta$ after the $i$-th iteration is $\theta^{(i)}$. We want the new estimate $\theta$ to increase $L(\theta)$, i.e. $L(\theta) > L(\theta^{(i)})$, so consider the difference:
\[
L(\theta)-L(\theta^{(i)}) = \ln \Big( \sum_Z P(Z|\theta) P(Y|Z,\theta) \Big) - \ln P(Y|\theta^{(i)})
\]
In general, $\ln P_1 P_2 \cdots P_N$ is easy to handle, but $\ln \sum P_1 P_2$ is not. To remove the summation inside the logarithm, we bound it with Jensen's inequality. For the sum over $Z$, how do we produce the weights $\alpha_i$ that Jensen's inequality requires? The natural choice is a probability distribution over $Z$, since its values sum to 1, so we insert such a distribution:
\[
\begin{aligned}
L(\theta)-L(\theta^{(i)}) &= \ln \Big( \sum_Z P(Z|\theta) P(Y|Z,\theta) \Big) - \ln P(Y|\theta^{(i)}) \\
&= \ln \Big( \sum_Z P(Z|Y,\theta^{(i)}) \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)})} \Big) - \ln P(Y|\theta^{(i)})
\end{aligned}
\]
By Jensen's inequality,
\[
\ln \Big( \sum_Z P(Z|Y,\theta^{(i)}) \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)})} \Big) \geqslant \sum_Z P(Z|Y,\theta^{(i)}) \ln \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)})}
\]
Since $\displaystyle \ln P(Y|\theta^{(i)}) = \sum_Z P(Z|Y,\theta^{(i)}) \ln P(Y|\theta^{(i)})$,
\[
\begin{aligned}
L(\theta)-L(\theta^{(i)}) &\geqslant \sum_Z P(Z|Y,\theta^{(i)}) \ln \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)})} - \sum_Z P(Z|Y,\theta^{(i)}) \ln P(Y|\theta^{(i)}) \\
&= \sum_Z P(Z|Y,\theta^{(i)}) \ln \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)}) P(Y|\theta^{(i)})}
\end{aligned}
\]
Let
\[
B(\theta,\theta^{(i)}) = L(\theta^{(i)}) + \sum_Z P(Z|Y,\theta^{(i)}) \ln \frac{P(Z|\theta)P(Y|Z,\theta)}{P(Z|Y,\theta^{(i)}) P(Y|\theta^{(i)})}
\]
Then $L(\theta) \geqslant B(\theta,\theta^{(i)})$, i.e. $B(\theta,\theta^{(i)})$ is a lower bound of $L(\theta)$; to increase $L(\theta)$ we maximize $B(\theta,\theta^{(i)})$. Dropping the terms of $B$ that do not involve $\theta$,
\[
\begin{aligned}
\theta^{(i+1)} &= \mathop{\arg \max} \limits_{\theta} B(\theta,\theta^{(i)}) = \mathop{\arg \max} \limits_{\theta} \Big( \sum_Z P(Z|Y,\theta^{(i)}) \ln P(Z|\theta)P(Y|Z,\theta) \Big) \\
&= \mathop{\arg \max} \limits_{\theta} \Big( \sum_Z P(Z|Y,\theta^{(i)}) \ln P(Y,Z|\theta) \Big)
\end{aligned}
\]
Since $\displaystyle Q(\theta, \theta^{(i)}) = \sum_Z \ln P(Y,Z|\theta)\, P(Z|Y,\theta^{(i)})$, this is exactly
\[
\theta^{(i+1)} = \mathop{\arg \max} \limits_{\theta} Q(\theta, \theta^{(i)})
\]
which is the M-step of the EM algorithm; the E-step corresponds to computing $\displaystyle \sum_Z P(Z|Y,\theta^{(i)}) \ln P(Y,Z|\theta)$. This yields the EM algorithm: the log-likelihood is maximized by repeatedly maximizing a lower bound.
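One consequence worth spelling out, which is implicit in the derivation above: since $B(\theta^{(i)},\theta^{(i)}) = L(\theta^{(i)})$ (the sum in $B$ vanishes at $\theta = \theta^{(i)}$), each iteration satisfies
\[
L(\theta^{(i+1)}) \;\geqslant\; B(\theta^{(i+1)},\theta^{(i)}) \;\geqslant\; B(\theta^{(i)},\theta^{(i)}) \;=\; L(\theta^{(i)}),
\]
so the observed-data log-likelihood never decreases from one iteration to the next.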

Application of the EM Algorithm to Gaussian Mixture Models

Gaussian mixture model

A Gaussian mixture model is a probability distribution of the form
\[
P(y|\theta)=\sum_{k=1}^K \alpha_k \phi(y|\theta_k)
\]
where the $\alpha_k$ are coefficients with $\displaystyle \alpha_k \geqslant 0, \sum_{k=1}^K \alpha_k = 1$, and $\phi(y|\theta_k)$ is a Gaussian density with $\theta_k=(\mu_k, \sigma_k^2)$:
\[
\phi(y|\theta_k)=\frac{1}{\sqrt{2 \pi}\, \sigma_k} \exp \left( -\frac{(y-\mu_k)^2}{2\sigma_k^2} \right)
\]
called the $k$-th component model.

To introduce the model, consider only a simple one-dimensional random variable $y$. A Gaussian distribution is just a normal distribution, $y \sim N(\mu, \sigma^2)$; given observations of $y$, it is easy to estimate $\mu$ and $\sigma^2$. But now $y$ does not come from a single Gaussian; instead it comes, with certain probabilities, from two different Gaussians $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$. This is a mixture of two Gaussians, and we do not know which Gaussian each $y$ comes from, so a latent variable is involved. For parameter estimation with latent variables we can proceed as follows: represent $z$ by a vector $\gamma$; if $z=1$ then $\gamma=(1,0,0,\cdots,0)$, if $z=2$ then $\gamma=(0,1,0,\cdots,0)$, and so on, just like one-hot encoding. In other words, if $z$ is the $i$-th Gaussian, the $i$-th component of $\gamma$ is 1 and all other components are 0.
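A minimal sketch of the generative story just described, assuming two one-dimensional components and made-up parameter values: first draw the latent component index, then draw $y$ from that component's Gaussian.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed (made-up) mixture parameters: alpha_k, mu_k, sigma_k.
alpha = np.array([0.3, 0.7])
mu = np.array([-2.0, 3.0])
sigma = np.array([1.0, 0.5])

# Generative process: z is latent, only y is observed.
N = 1000
z = rng.choice(len(alpha), size=N, p=alpha)   # latent component index
y = rng.normal(mu[z], sigma[z])               # observation drawn from the chosen component

# Mixture density P(y|theta) = sum_k alpha_k * phi(y|theta_k).
def mixture_pdf(y, alpha, mu, sigma):
    return sum(a * norm.pdf(y, m, s) for a, m, s in zip(alpha, mu, sigma))

print(mixture_pdf(0.0, alpha, mu, sigma))
```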

Derivation

Identify the latent variable and write down the complete-data log-likelihood

Following the EM algorithm, there is a latent variable $\gamma$ indicating which Gaussian the current $y$ comes from. For the first observation we have $\gamma_1=(\gamma_{11},\gamma_{12},\cdots,\gamma_{1K})$, where, following the book's definition of $\gamma_{jk}$:
\[
\gamma_{jk} = \left\{ \begin{aligned} 1, & \quad \text{observation } j \text{ comes from component } k \\ 0, & \quad \text{otherwise} \end{aligned}\right. \qquad j = 1,2,\cdots, N;\ k=1, 2,\cdots,K
\]
This is the distribution of the latent variable: it takes its 1st value with probability $\alpha_1$, its 2nd value with probability $\alpha_2$, ..., and its $K$-th value with probability $\alpha_K$. Once $\gamma_1$ is known, we know which Gaussian $y_1$ was drawn from.
\[
\begin{aligned}
p(\gamma_1,y_1|\theta) &= p(\gamma_1|\theta) \cdot p(y_1 | \gamma_1,\theta) \\
&= \alpha_1^{\gamma_{11}} \alpha_2^{\gamma_{12}} \cdots \alpha_K^{\gamma_{1K}} \,\phi(y_1|\theta_1)^{\gamma_{11}} \phi(y_1|\theta_2)^{\gamma_{12}} \cdots \phi(y_1|\theta_K)^{\gamma_{1K}} \\
&= \prod_{k=1}^K \big[ \alpha_k \phi(y_1 | \theta_k) \big]^{\gamma_{1k}}
\end{aligned}
\]
This is the complete-data density of the first sample point. Maximum likelihood estimation maximizes the likelihood of all sample points, which requires their joint distribution; for all the sample points, the density is
\[
P(y,\gamma|\theta)=\prod_{j=1}^N \prod_{k=1}^K \big[\alpha_k \phi(y_j | \theta_k)\big]^{\gamma_{jk}}
\]
Since $\displaystyle \prod_{j=1}^N \prod_{k=1}^K \alpha_k^{\gamma_{jk}} = \prod_{k=1}^K \alpha_k^{\sum_{j=1}^N \gamma_{jk}}$, and $\displaystyle \sum_{j=1}^N \gamma_{jk}$ counts how many of the $N$ sample points come from the $k$-th Gaussian, write this count as
\[
n_k=\sum_{j=1}^N \gamma_{jk}, \qquad n_1+n_2+\cdots+n_K=N
\]
so that $\displaystyle \prod_{j=1}^N \prod_{k=1}^K \alpha_k^{\gamma_{jk}} = \prod_{k=1}^K \alpha_k^{n_k}$. Therefore
\[
P(y, \gamma|\theta) = \prod_{k=1}^K \alpha_k^{n_k} \prod_{j=1}^N \big[\phi(y_j|\theta_k)\big]^{\gamma_{jk}} = \prod_{k=1}^K \alpha_k^{n_k} \cdot \prod_{j=1}^N \left[ \frac{1}{\sqrt{2\pi}\, \sigma_k} \exp\left(-\frac{(y_j-\mu_k)^2}{2 \sigma_k^2} \right) \right]^{\gamma_{jk}}
\]
and the complete-data log-likelihood is
\[
\ln P(y, \gamma|\theta) = \sum_{k=1}^K \left\{ n_k \ln \alpha_k + \sum_{j=1}^N \gamma_{jk} \left[\ln \frac{1}{\sqrt{2\pi}} - \ln \sigma_k - \frac{1}{2 \sigma_k^2} (y_j - \mu_k)^2\right] \right\}
\]
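As an illustrative sketch (usable with the simulated `z`, `y`, and made-up parameters from the sampling snippet above), the complete-data log-likelihood can be computed directly from its factored form:

```python
import numpy as np

def complete_data_loglik(y, z, alpha, mu, sigma):
    """ln P(y, gamma | theta), with gamma given as component indices z."""
    K = len(alpha)
    gamma = np.eye(K)[z]                      # one-hot encoding, shape (N, K)
    n_k = gamma.sum(axis=0)                   # n_k = sum_j gamma_jk
    log_phi = (np.log(1.0 / np.sqrt(2.0 * np.pi))
               - np.log(sigma)
               - (y[:, None] - mu) ** 2 / (2.0 * sigma ** 2))   # ln phi(y_j | theta_k)
    return np.sum(n_k * np.log(alpha)) + np.sum(gamma * log_phi)
```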

E-step of the EM algorithm: determine the Q function

Replace the latent quantities by their expectations; the latent quantities are $\gamma_{jk}$ and $n_k$. Since
\[
E(n_k \mid y, \theta^{(i)}) = E \Big(\sum_j \gamma_{jk} \,\Big|\, y, \theta^{(i)}\Big) = \sum_j E(\gamma_{jk} \mid y, \theta^{(i)}), \qquad E(\gamma_{jk} \mid y, \theta^{(i)}) = P(\gamma_{jk}=1 \mid y, \theta^{(i)}),
\]
and the expectation is taken given the previous estimate $\theta^{(i)}$ and all the observed $y_j$, we need the distribution $P(\gamma_{jk}=1|y,\theta^{(i)})$:
\[
\begin{aligned}
P(\gamma_{jk}=1 \mid y, \theta^{(i)}) &= \frac{P(\gamma_{jk}=1, y_j \mid \theta^{(i)})}{P(y_j \mid \theta^{(i)})} \\
&= \frac{P(\gamma_{jk}=1, y_j \mid \theta^{(i)})}{\displaystyle \sum_{k=1}^K P(\gamma_{jk}=1, y_j \mid \theta^{(i)})} \\
&= \frac{P(\gamma_{jk}=1 \mid \theta^{(i)})\, P(y_j \mid \gamma_{jk}=1, \theta^{(i)})}{\displaystyle \sum_{k=1}^K P(y_j \mid \gamma_{jk}=1, \theta^{(i)})\, P(\gamma_{jk}=1 \mid \theta^{(i)})}
\end{aligned}
\]
Since $\alpha_k = P(\gamma_{jk}=1|\theta)$ and $\phi(y_j|\theta_k) = P(y_j|\gamma_{jk}=1,\theta)$,
\[
\hat{\gamma}_{jk} := E(\gamma_{jk} \mid y, \theta^{(i)}) = P(\gamma_{jk}=1 \mid y, \theta^{(i)}) = \frac{\alpha_k^{(i)} \phi(y_j \mid \theta_k^{(i)})}{\displaystyle \sum_{k=1}^K \alpha_k^{(i)} \phi(y_j \mid \theta_k^{(i)})}, \qquad \theta^{(i)} = (\alpha_k^{(i)}, \theta_k^{(i)})
\]
Note that $\hat{\gamma}_{jk}$ depends on $y_j$ and therefore differs across samples $j$; it is the responsibility of component $k$ for observation $j$. Writing $n_k = \sum_{j=1}^N \hat{\gamma}_{jk}$ for the expected count, the Q function is
\[
Q(\theta, \theta^{(i)}) = E_Z \big[\ln P(y,\gamma \mid \theta) \,\big|\, y, \theta^{(i)}\big] = \sum_{k=1}^K \left\{ n_k \ln \alpha_k + \sum_{j=1}^N \hat{\gamma}_{jk} \left[ \ln \frac{1}{\sqrt{2\pi}} - \ln \sigma_k - \frac{1}{2 \sigma_k^2} (y_j - \mu_k)^2 \right] \right\}
\]
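A sketch of the E-step under the same assumed one-dimensional setup: compute the responsibilities $\hat{\gamma}_{jk}$ for every observation and component.

```python
import numpy as np
from scipy.stats import norm

def e_step(y, alpha, mu, sigma):
    """Responsibilities gamma_hat[j, k] = P(gamma_jk = 1 | y, theta_i)."""
    # Unnormalized: alpha_k * phi(y_j | theta_k), shape (N, K).
    weighted = alpha * norm.pdf(y[:, None], mu, sigma)
    return weighted / weighted.sum(axis=1, keepdims=True)
```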

Determine the M-step of the EM algorithm

The parameters to estimate are $\alpha_k$, $\sigma_k$, $\mu_k$; set the partial derivatives to zero:
\[
\frac{\partial Q(\theta, \theta^{(i)})}{\partial \mu_k} = 0, \qquad \frac{\partial Q(\theta, \theta^{(i)})}{\partial \sigma_k^2} = 0, \qquad \left\{ \begin{array}{l} \displaystyle \frac{\partial Q(\theta, \theta^{(i)})}{\partial \alpha_k} = 0 \\ \displaystyle \sum_k \alpha_k = 1 \end{array} \right.
\]
(the update for $\alpha_k$ is a maximization under the constraint that the coefficients sum to 1). Solving these gives
\[
\mu_k^{(i+1)} = \frac{\displaystyle \sum_{j=1}^N \hat{\gamma}_{jk}\, y_j}{\displaystyle \sum_{j=1}^N \hat{\gamma}_{jk}}, \qquad (\sigma_k^2)^{(i+1)} = \frac{\displaystyle \sum_{j=1}^N \hat{\gamma}_{jk}\, (y_j - \mu_k^{(i+1)})^2}{\displaystyle \sum_{j=1}^N \hat{\gamma}_{jk}}, \qquad \alpha_k^{(i+1)} = \frac{n_k}{N} = \frac{\displaystyle \sum_{j=1}^N \hat{\gamma}_{jk}}{N}
\]
where $\hat{\gamma}_{jk} = E(\gamma_{jk} \mid y, \theta^{(i)})$, $n_k = \sum_{j=1}^N \hat{\gamma}_{jk}$, and $k = 1, 2, \cdots, K$.
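Putting the two steps together, here is a minimal EM sketch for the one-dimensional Gaussian mixture above (the `e_step` from the previous snippet is repeated so the block runs on its own); it illustrates the updates rather than being a production implementation.

```python
import numpy as np
from scipy.stats import norm

def e_step(y, alpha, mu, sigma):
    """Responsibilities gamma_hat[j, k] = P(gamma_jk = 1 | y, theta_i)."""
    weighted = alpha * norm.pdf(y[:, None], mu, sigma)
    return weighted / weighted.sum(axis=1, keepdims=True)

def m_step(y, gamma_hat):
    """Closed-form updates from setting the partial derivatives of Q to zero."""
    n_k = gamma_hat.sum(axis=0)                                   # expected counts n_k
    alpha = n_k / len(y)                                          # alpha_k = n_k / N
    mu = (gamma_hat * y[:, None]).sum(axis=0) / n_k               # weighted means
    var = (gamma_hat * (y[:, None] - mu) ** 2).sum(axis=0) / n_k  # weighted variances
    return alpha, mu, np.sqrt(var)

def fit_gmm(y, alpha, mu, sigma, n_iter=100):
    """Alternate E- and M-steps for a fixed number of iterations."""
    for _ in range(n_iter):
        gamma_hat = e_step(y, alpha, mu, sigma)
        alpha, mu, sigma = m_step(y, gamma_hat)
    return alpha, mu, sigma
```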

Extension of the EM Algorithm

The GEM algorithm

Input: observed data, the Q function.
Output: model parameters.

(1) Initialize the parameter $\theta^{(0)}=(\theta^{(0)}_1,\theta^{(0)}_2,\cdots,\theta^{(0)}_d)$ and start iterating;

(2) At iteration $i+1$, step 1: let $\theta^{(i)}=(\theta^{(i)}_1,\theta^{(i)}_2,\cdots,\theta^{(i)}_d)$ be the estimate of the parameter $\theta=(\theta_1,\theta_2,\cdots,\theta_d)$ and compute
\[
Q(\theta,\theta^{(i)}) = E_Z\big[ \log P(Y,Z|\theta) \,\big|\, Y,\theta^{(i)} \big] = \sum_Z P(Z|Y,\theta^{(i)}) \log P(Y,Z|\theta)
\]

(3) Step 2: perform $d$ conditional maximizations. First, holding $\theta^{(i)}_2,\theta^{(i)}_3,\cdots,\theta^{(i)}_d$ fixed, find the $\theta^{(i+1)}_1$ that maximizes $Q(\theta,\theta^{(i)})$; then, under the conditions $\theta_1=\theta^{(i+1)}_1$ and $\theta_j=\theta^{(i)}_j$ for $j=3,4,\cdots,d$, find the $\theta^{(i+1)}_2$ that maximizes $Q(\theta,\theta^{(i)})$; continuing in this way, after $d$ conditional maximizations we obtain $\theta^{(i+1)}=(\theta^{(i+1)}_1,\theta^{(i+1)}_2,\cdots, \theta^{(i+1)}_d)$ such that $Q(\theta^{(i+1)},\theta^{(i)}) > Q(\theta^{(i)},\theta^{(i)})$;

(4) Repeat (2) and (3) until convergence.
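A minimal sketch of the coordinate-wise maximization in step (3), assuming a generic, user-supplied `Q(theta, theta_i)` and using a hypothetical one-dimensional numerical optimizer per coordinate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gem_m_step(Q, theta_i):
    """One GEM M-step: d conditional maximizations of Q(theta, theta_i),
    updating one coordinate at a time while holding the others fixed."""
    theta = np.array(theta_i, dtype=float)
    for k in range(len(theta)):
        def neg_q(x, k=k):
            trial = theta.copy()
            trial[k] = x
            return -Q(trial, theta_i)           # maximize Q <=> minimize -Q
        theta[k] = minimize_scalar(neg_q).x     # update coordinate k
    return theta
```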


Reposted from www.cnblogs.com/cecilia-2019/p/11537294.html