Suppose we have a collection of documents and want to compare the similarity of two of them, that is, we want to know whether the two documents talk about the same or similar things. A naive idea is to do straight word-frequency counting: first pick a common vocabulary $W$ of size $M$, then compare the two vectors
$$p(w_1|d_1),\ p(w_2|d_1),\ \cdots,\ p(w_M|d_1)$$

$$p(w_1|d_2),\ p(w_2|d_2),\ \cdots,\ p(w_M|d_2)$$
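For concreteness, a minimal sketch of this naive word-frequency comparison (the tokenization, vocabulary choice, and use of cosine similarity here are my own simplifications):

```python
import numpy as np
from collections import Counter

def word_freq_vector(tokens, vocab):
    """Approximate p(w|d) by the relative frequency of each vocabulary word."""
    counts = Counter(tokens)
    v = np.array([counts[w] for w in vocab], dtype=float)
    return v / max(v.sum(), 1.0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# vocab = [...]  # a chosen common vocabulary W of size M
# similarity = cosine(word_freq_vector(doc1_tokens, vocab),
#                     word_freq_vector(doc2_tokens, vocab))
```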
This approach is clearly not workable: documents with similar vocabulary but very different content, or very different vocabulary but similar content, are common.
A better idea is to introduce a number of latent variables whose meaning is "topic". That is, we introduce $K$ topic latent variables $Z = (z_1, z_2, \cdots, z_K)$ and instead compare the two vectors
$$p(z_1|d_1),\ p(z_2|d_1),\ \cdots,\ p(z_K|d_1)$$

$$p(z_1|d_2),\ p(z_2|d_2),\ \cdots,\ p(z_K|d_2)$$
These two vectors describe to what extent each document talks about each topic. The main problem now is to compute these conditional probabilities.
Suppose we have $N$ documents, a predefined vocabulary of $M$ words, and $K$ topic latent variables. The data we start from is a matrix $(c_{ij})$ recording how many times the $j$-th word occurs in the $i$-th document (this matrix is called the co-occurrence matrix since, as the name suggests, it records document-word co-occurrences; below we write this count as $n(d_i, w_j)$). The likelihood function of our sample is then
$$L = \prod_{i=1}^{N} \prod_{j=1}^M p(d_i, w_j)^{n(d_i, w_j)}$$
Taking the natural logarithm of the above gives
$$\ln L = \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j)\ln p(d_i, w_j)$$
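In code, this count matrix might be assembled as follows (a minimal sketch; `docs` is a list of token lists and `vocab` the list of the $M$ chosen words, both hypothetical names):

```python
import numpy as np
from collections import Counter

def cooccurrence_matrix(docs, vocab):
    """n[i, j] = number of times vocab[j] occurs in docs[i]."""
    index = {w: j for j, w in enumerate(vocab)}
    n = np.zeros((len(docs), len(vocab)))
    for i, tokens in enumerate(docs):
        for w, c in Counter(tokens).items():
            if w in index:
                n[i, index[w]] = c
    return n
```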
Under the current setup, $p(d_i, w_j)$ cannot be computed directly. Noting that
$$p(d_i, w_j) = \sum_{k=1}^K p(d_i, w_j, z_k)$$
the crux of the problem is how to evaluate this marginalization. Skipping some discussion of probabilistic graphical models, we can view it as follows: by the chain rule of conditional probability, the joint probability can be written as
$$p(d_i, w_j, z_k) = p(w_j|z_k, d_i)\, p(z_k|d_i)\, p(d_i)$$
Now make some assumptions: the topic depends on the document, while a word depends only on the topic and not on the document in which it appears. The expression then becomes
$$p(d_i, w_j, z_k) = p(w_j|z_k)\, p(z_k|d_i)\, p(d_i)$$
Substituting this into the log-likelihood, we get
$$\ln L = \sum_{i=1}^N n(d_i)\ln p(d_i) + \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \ln \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i)$$
where $n(d_i) = \sum_{j=1}^M n(d_i, w_j)$ is the total word count of document $d_i$. The first term can be computed directly from the co-occurrence matrix and is a constant with respect to the parameters involving the latent topics $Z$, so our real goal is to maximize the following objective function
$$T = \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \ln \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i)$$
s.t.
$$
\begin{aligned}
p(w_j|z_k) &\ge 0 \\
p(z_k|d_i) &\ge 0 \\
\sum_{j=1}^M p(w_j|z_k) &= 1 \\
\sum_{k=1}^K p(z_k|d_i) &= 1
\end{aligned}
$$
Finding the maximum, or even a local maximum, of this function is very hard, because there is a sum inside the logarithm.
The EM algorithm is a classical algorithm for finding local extrema of a function. Its basic idea is that if the function itself is hard to handle, we work with a lower bound of it instead: find a set of parameter values that maximizes the lower bound, then use those parameters to build a new lower bound, and repeat until convergence. For the PLSA model, the concrete procedure is as follows.
Using the concavity of the logarithm and Jensen's inequality,
$$
\begin{aligned}
&\sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \ln\sum_{k=1}^K p(w_j|z_k)p(z_k|d_i) \\
&= \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j)\ln \sum_{k=1}^K Q(z_k) \frac{p(w_j|z_k)p(z_k|d_i)}{Q(z_k)} \\
&\ge \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \sum_{k=1}^K Q(z_k) \ln\frac{p(w_j|z_k)p(z_k|d_i)}{Q(z_k)} \\
&= \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \sum_{k=1}^K Q(z_k) \ln\bigl[p(w_j|z_k)p(z_k|d_i)\bigr] + \mathrm{constant}
\end{aligned}
$$
Here $Q(z_k)$ is a weighting factor that we specify ourselves; as far as the model is concerned it is a constant, because whatever value we assign to it, it does not actually depend on any of the probabilities appearing above. (Strictly speaking, the weights may be chosen separately for each pair $(d_i, w_j)$, i.e. $Q_{ij}(z_k)$; we keep the shorter notation below.) The weighting factor is also subject to the constraint
$$\sum_{k=1}^K Q(z_k) = 1, \qquad Q(z_k) \ge 0$$
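As a quick numerical sanity check of the Jensen-based bound above, here is a tiny sketch with made-up numbers for a single $(d_i, w_j)$ pair and $K = 3$ topics:

```python
import numpy as np

p_w_given_z = np.array([0.2, 0.5, 0.1])  # p(w_j|z_k), k = 1..3 (made up)
p_z_given_d = np.array([0.3, 0.3, 0.4])  # p(z_k|d_i), k = 1..3 (made up)
Q = np.array([0.2, 0.6, 0.2])            # any distribution over the topics

x = p_w_given_z * p_z_given_d
lhs = np.log(x.sum())             # ln sum_k p(w_j|z_k) p(z_k|d_i)
rhs = np.sum(Q * np.log(x / Q))   # sum_k Q(z_k) ln [p(w_j|z_k) p(z_k|d_i) / Q(z_k)]
print(lhs >= rhs)                 # True: the lower bound holds
```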
Besides that, there are the constraints on the conditional probabilities mentioned earlier. In this model, the conditional probabilities $p(w_j|z_k)$ and $p(z_k|d_i)$ are the variables of the function, each with its own sum-to-one constraint, while $Q(z_k)$ is a given parameter, not a variable. In any case, the logarithm in the lower bound no longer contains a sum, which is much easier to work with. For a constrained extremum problem, the natural tool is the method of Lagrange multipliers. Writing the lower bound as $S$, the Lagrangian is
$$S + \sum_{k=1}^K \alpha_k\Bigl(1 - \sum_{j=1}^M p(w_j|z_k)\Bigr) + \sum_{i=1}^N \beta_i\Bigl(1 - \sum_{k=1}^K p(z_k|d_i)\Bigr)$$
Setting the partial derivatives to zero yields
$$\sum_{i=1}^N n(d_i,w_j)Q(z_k) - \alpha_k\, p(w_j|z_k) = 0$$

$$\sum_{j=1}^M n(d_i,w_j)Q(z_k) - \beta_i\, p(z_k|d_i) = 0$$
Combining the first equation with $\sum_{j=1}^M p(w_j|z_k) = 1$ gives
$$p(w_j|z_k) = \frac{\sum_{i=1}^N n(d_i,w_j)Q(z_k)}{\sum_{j=1}^M \sum_{i=1}^N n(d_i,w_j)Q(z_k)}$$
and similarly
$$p(z_k|d_i) = \frac{\sum_{j=1}^M n(d_i,w_j)Q(z_k)}{n(d_i)}$$
So, once $Q(z_k)$ is given in advance, both groups of conditional probabilities can be computed. But our goal does not stop there: we want the conditional probabilities obtained by optimizing this lower bound to yield, in turn, a new lower bound for the next iteration. A new lower bound here means a new set of $Q(z_k)$. This requires being able to compute a new $Q(z_k)$ from $p(w_j|z_k)$ and $p(z_k|d_i)$ without violating the constraints $Q(z_k)$ must satisfy: nonnegative and summing to one. Naturally, we can use a probability distribution to assign the value of $Q(z_k)$.
So we take $Q(z_k) = p(z_k|d_i, w_j)$.
Once again, since the value of $Q(z_k)$ is chosen by hand, no matter what we set it to it never becomes genuinely coupled to the two groups of conditional probabilities above, so treating it as a constant in the derivation is fine. A more intuitive reading: we use the already computed "estimates of the conditional probabilities" to update this set of "estimates of unknown constants".
Combining this with the factorization of the joint probability given earlier,
$$
\begin{aligned}
Q(z_k) &= p(z_k|d_i,w_j) \\
&= \frac{ p(z_k,d_i,w_j) }{ \sum_{k=1}^K p(z_k,d_i,w_j) } \\
&= \frac{ p(w_j|z_k)p(z_k|d_i)p(d_i) }{ \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i)p(d_i) } \\
&= \frac{ p(w_j|z_k)p(z_k|d_i) }{ \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i) }
\end{aligned}
$$
We have thus obtained the following set of iteration formulas
$$
\begin{aligned}
Q(z_k) &= \frac{ p(w_j|z_k)p(z_k|d_i) }{ \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i) } \\
p(w_j|z_k) &= \frac{\sum_{i=1}^N n(d_i,w_j)Q(z_k) }{\sum_{j=1}^M \sum_{i=1}^N n(d_i,w_j)Q(z_k) } \\
p(z_k|d_i) &= \frac{\sum_{j=1}^M n(d_i,w_j)Q(z_k) }{n(d_i)}
\end{aligned}
$$
After initializing $p(w_j|z_k)$ and $p(z_k|d_i)$, we can iterate these updates until convergence.
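Putting the updates together, here is a minimal NumPy sketch of the whole iteration, assuming the count matrix `n` (shape $N \times M$) has already been built; the function name and implementation details are my own:

```python
import numpy as np

def plsa_em(n, K, n_iter=100, seed=0):
    """EM iterations for PLSA. n[i, j] = count of word j in document i."""
    rng = np.random.default_rng(seed)
    N, M = n.shape

    # Random initialization of p(w_j|z_k) (K x M) and p(z_k|d_i) (N x K).
    p_w_z = rng.random((K, M)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((N, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: Q[i, j, k] = p(z_k | d_i, w_j).
        Q = p_z_d[:, None, :] * p_w_z.T[None, :, :]        # N x M x K
        Q /= Q.sum(axis=2, keepdims=True) + 1e-12

        # M-step: reweight the counts by Q and renormalize.
        nQ = n[:, :, None] * Q                             # N x M x K
        p_w_z = nQ.sum(axis=0).T                           # K x M
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = nQ.sum(axis=1)                             # N x K
        p_z_d /= n.sum(axis=1, keepdims=True) + 1e-12      # divide by n(d_i)
    return p_w_z, p_z_d
```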
In this way we can analyze the topic similarity of documents. Likewise, we can take a document's most probable topics and then pick the highest-probability words under those topics, which gives us keyword extraction.
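And a hedged sketch of that keyword-extraction step, assuming `p_w_z` ($K \times M$) and `p_z_d` ($N \times K$) come from the sketch above and `vocab` maps column indices to words (all names are my own):

```python
import numpy as np

def top_keywords(p_w_z, p_z_d, vocab, doc_id, n_topics=2, n_words=5):
    """Most probable topics of a document, then each topic's top words."""
    topics = np.argsort(p_z_d[doc_id])[::-1][:n_topics]
    return {int(k): [vocab[j] for j in np.argsort(p_w_z[k])[::-1][:n_words]]
            for k in topics}
```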
Texts on social networks, such as Weibo or Qzone posts, are mostly short texts: an entire document contains only a few dozen words. For such texts, the document-word co-occurrence matrix is huge relative to the text length, and the vast majority of its entries are zero. Because the documents are so short, document-word co-occurrence is not always very informative, so the model above is not very powerful on the short texts that are so common on social networks.
In this setting the document itself matters much less. What we should consider instead is the fact that different words appear within the same text, i.e., word co-occurrence conditioned on topics. What we observe now is a word $w_i$ in a short text co-occurring with a word $w_j'$ in the same text.
For a given data set, the observed likelihood is then
$$L = \prod_{i=1}^N \prod_{j=1}^M p(w_i,w_j')^{n(w_i,w_j')}$$
where $n(w_i,w_j')$ is the number of times the two words co-occur, which can be taken to be the minimum of the occurrence counts of $w_i$ and $w_j'$ within the short text.
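A minimal sketch of building these pair counts from tokenized short texts, following the "minimum of the two counts" convention just stated (function and variable names are my own):

```python
from collections import Counter
from itertools import combinations

def pair_counts(texts):
    """texts: list of token lists. Returns {(w, w'): co-occurrence count}."""
    pairs = Counter()
    for tokens in texts:
        counts = Counter(tokens)
        for w, v in combinations(sorted(counts), 2):
            pairs[(w, v)] += min(counts[w], counts[v])
    return pairs

# Example:
# pair_counts([["cat", "dog", "cat"], ["dog", "fish"]])
# -> {("cat", "dog"): 1, ("dog", "fish"): 1}
```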
Introduce the topic latent variables $Z$ and make some conditional-independence assumptions: a word's occurrence depends only on the topic, not directly on the other words. Then
$$
\begin{aligned}
p(w_i,w_j') &= \sum_{k=1}^K p(w_i,w_j',z_k) \\
&= \sum_{k=1}^K p(w_i,w_j'|z_k)\, p(z_k) \\
&= \sum_{k=1}^K p(z_k)\,p(w_i|z_k)\,p(w_j'|z_k)
\end{aligned}
$$
This gives a lower bound on the objective:
$$
\begin{aligned}
\ln L &= \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \ln p(w_i,w_j') \\
&= \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \ln \left[ \sum_{k=1}^K p(z_k)p(w_i|z_k)p(w_j'|z_k) \right] \\
&= \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \ln \left[ \sum_{k=1}^K Q(z_k)\frac{p(z_k)p(w_i|z_k)p(w_j'|z_k)}{Q(z_k)} \right] \\
&\ge \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \sum_{k=1}^K Q(z_k) \ln \left[ \frac{p(z_k)p(w_i|z_k)p(w_j'|z_k)}{Q(z_k)} \right] \\
&= \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \sum_{k=1}^K Q(z_k) \ln \left[ p(z_k)p(w_i|z_k)p(w_j'|z_k) \right] + \mathrm{constant}
\end{aligned}
$$
where $\sum_{k=1}^K Q(z_k) = 1$, $Q(z_k) \ge 0$ for all $k$, and $Q(z_k)$ is given.
Using Lagrange multipliers, define
$$
\begin{aligned}
T &= \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j') \sum_{k=1}^K Q(z_k) \ln \left[ p(z_k)p(w_i|z_k)p(w_j'|z_k) \right] \\
&\quad + \sum_{k=1}^K \alpha_k \Bigl(1 - \sum_{i=1}^N p(w_i|z_k)\Bigr) + \sum_{k=1}^K \beta_k \Bigl(1 - \sum_{j=1}^M p(w_j'|z_k)\Bigr) + \gamma \Bigl(1 - \sum_{k=1}^K p(z_k)\Bigr)
\end{aligned}
$$
Taking partial derivatives, we get
$$\frac{\partial T}{\partial p(w_i|z_k)} = \sum_{j=1}^M n(w_i,w_j')\frac{Q(z_k)}{p(w_i|z_k)} - \alpha_k = 0$$
so
$$p(w_i|z_k) = \frac{\sum_{j=1}^M n(w_i,w_j')Q(z_k)}{\alpha_k}$$
Using
$$1 = \sum_{i=1}^N p(w_i|z_k) = \sum_{i=1}^N \sum_{j=1}^M \frac{n(w_i,w_j')Q(z_k)}{\alpha_k}$$
we obtain
$$p(w_i|z_k) = \frac{\sum_{j=1}^M n(w_i,w_j')Q(z_k)}{\sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')Q(z_k)}$$
Similarly, from $\frac{\partial T}{\partial p(w_j'|z_k)} = 0$ we get
$$p(w_j'|z_k) = \frac{\sum_{i=1}^N n(w_i,w_j')Q(z_k)}{\sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')Q(z_k)}$$
and from $\frac{\partial T}{\partial p(z_k)} = 0$ we get
$$p(z_k) = \frac{Q(z_k) \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')}{\sum_{k=1}^K Q(z_k) \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')}$$
If we choose
$$
\begin{aligned}
Q(z_k) &= p(z_k|w_i,w_j') \\
&= \frac{p(w_i,w_j',z_k)}{p(w_i,w_j')} \\
&= \frac{p(z_k)p(w_i|z_k)p(w_j'|z_k)}{\sum_{k=1}^K p(z_k)p(w_i|z_k)p(w_j'|z_k)}
\end{aligned}
$$
we obtain the following set of EM iteration formulas
$$
\begin{aligned}
p(w_i|z_k) &= \frac{\sum_{j=1}^M n(w_i,w_j')Q(z_k)}{\sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')Q(z_k)} \\
p(w_j'|z_k) &= \frac{\sum_{i=1}^N n(w_i,w_j')Q(z_k)}{\sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')Q(z_k)} \\
p(z_k) &= \frac{Q(z_k) \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')}{\sum_{k=1}^K Q(z_k) \sum_{i=1}^N \sum_{j=1}^M n(w_i,w_j')} \\
Q(z_k) &= \frac{p(z_k)p(w_i|z_k)p(w_j'|z_k)}{\sum_{k=1}^K p(z_k)p(w_i|z_k)p(w_j'|z_k)}
\end{aligned}
$$
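A minimal NumPy sketch of these updates (the weights are kept per word pair, i.e. $Q_{ij}(z_k)$, which is what the assignment $Q(z_k) = p(z_k|w_i,w_j')$ amounts to); `n` is the pair-count matrix and all names are my own:

```python
import numpy as np

def word_pair_em(n, K, n_iter=100, seed=0):
    """EM for the word-pair topic model. n[i, j] = co-occurrence count of (w_i, w_j')."""
    rng = np.random.default_rng(seed)
    N, M = n.shape
    p_wi_z = rng.random((K, N)); p_wi_z /= p_wi_z.sum(axis=1, keepdims=True)
    p_wj_z = rng.random((K, M)); p_wj_z /= p_wj_z.sum(axis=1, keepdims=True)
    p_z = np.full(K, 1.0 / K)

    for _ in range(n_iter):
        # E-step: Q[i, j, k] = p(z_k | w_i, w_j').
        Q = p_z[None, None, :] * p_wi_z.T[:, None, :] * p_wj_z.T[None, :, :]
        Q /= Q.sum(axis=2, keepdims=True) + 1e-12

        # M-step: reweight the pair counts by Q and renormalize.
        nQ = n[:, :, None] * Q                               # N x M x K
        p_wi_z = nQ.sum(axis=1).T                            # K x N
        p_wi_z /= p_wi_z.sum(axis=1, keepdims=True) + 1e-12
        p_wj_z = nQ.sum(axis=0).T                            # K x M
        p_wj_z /= p_wj_z.sum(axis=1, keepdims=True) + 1e-12
        p_z = nQ.sum(axis=(0, 1))
        p_z /= p_z.sum()
    return p_z, p_wi_z, p_wj_z
```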
Hold on a moment.

(After leaving this unfinished for ages, I finally remembered there was still a hole here to fill.)
Let us go back to before the EM algorithm started: our essential goal is to maximize the following function.
$$T = \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \ln \sum_{k=1}^K p(w_j|z_k)p(z_k|d_i)$$
s.t.
$$
\begin{aligned}
p(w_j|z_k) &\ge 0 \\
p(z_k|d_i) &\ge 0 \\
\sum_{j=1}^M p(w_j|z_k) &= 1 \\
\sum_{k=1}^K p(z_k|d_i) &= 1
\end{aligned}
$$
Because of the sum inside the logarithm, the usual derivative-based tricks do not work, which is why we used the EM algorithm. But we can also look at the problem this way: we do not need a closed-form expression for the derivative; we only need to compute its value (in particular its sign), and then we can find an extremum with gradient descent (on the negative of the objective). That leaves just one issue: our function is constrained, so the variables do not range over the whole space.
The constraint issue is easy to deal with; a simple transformation suffices. Assume that documents, topics, and words are each determined by their own latent vectors, and compute the probabilities above from those vectors.
Write the latent vector of $d_i$ as $\overrightarrow{d_i'}$, that of $w_j$ as $\overrightarrow{w_j'}$, and that of $z_k$ as $\overrightarrow{z_k'}$, and compute the conditional probabilities in the expression above with the softmax function:
$$
\begin{aligned}
p(w_j|z_k) &= \frac{ \exp\bigl( \overrightarrow{w_j'}^T \overrightarrow{z_k'} \bigr) }{ \sum_{j=1}^M \exp\bigl( \overrightarrow{w_j'}^T \overrightarrow{z_k'} \bigr) } \\
p(z_k|d_i) &= \frac{ \exp\bigl( \overrightarrow{z_k'}^T \overrightarrow{d_i'} \bigr) }{ \sum_{k=1}^K \exp\bigl( \overrightarrow{z_k'}^T \overrightarrow{d_i'} \bigr) }
\end{aligned}
$$
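A minimal NumPy sketch of this parameterization; the array names `Wv`, `Zv`, `Dv` (whose rows are the latent vectors of words, topics, and documents) are my own:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)    # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def probabilities(Wv, Zv, Dv):
    """Wv: M x d, Zv: K x d, Dv: N x d latent vectors."""
    p_w_given_z = softmax(Zv @ Wv.T, axis=1)   # K x M, each row sums to 1 over words
    p_z_given_d = softmax(Dv @ Zv.T, axis=1)   # N x K, each row sums to 1 over topics
    return p_w_given_z, p_z_given_d
```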
This gives the new form of the objective function:
$$T = \sum_{i=1}^N \sum_{j=1}^M n(d_i, w_j) \ln \sum_{k=1}^K
\frac{ \exp\bigl( \overrightarrow{w_j'}^T \overrightarrow{z_k'} \bigr) }{ \sum_{j=1}^M \exp\bigl( \overrightarrow{w_j'}^T \overrightarrow{z_k'} \bigr) }
\cdot
\frac{ \exp\bigl( \overrightarrow{z_k'}^T \overrightarrow{d_i'} \bigr) }{ \sum_{k=1}^K \exp\bigl( \overrightarrow{z_k'}^T \overrightarrow{d_i'} \bigr) }$$
The constraints are now satisfied automatically, and the latent vectors range over the whole space.
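Finally, a hedged sketch of optimizing this unconstrained form with a gradient method, assuming PyTorch is available; the dimensionality, step count, and learning rate below are illustrative only:

```python
import torch

def fit_latent_plsa(n, K, dim=16, steps=500, lr=0.1, seed=0):
    """n: N x M count matrix (a torch float tensor). Maximize T over the latent vectors."""
    torch.manual_seed(seed)
    N, M = n.shape
    Dv = torch.randn(N, dim, requires_grad=True)   # document vectors
    Zv = torch.randn(K, dim, requires_grad=True)   # topic vectors
    Wv = torch.randn(M, dim, requires_grad=True)   # word vectors
    opt = torch.optim.Adam([Dv, Zv, Wv], lr=lr)

    for _ in range(steps):
        p_w_z = torch.softmax(Zv @ Wv.T, dim=1)    # K x M
        p_z_d = torch.softmax(Dv @ Zv.T, dim=1)    # N x K
        p_w_d = p_z_d @ p_w_z                      # N x M: sum_k p(w_j|z_k) p(z_k|d_i)
        loss = -(n * torch.log(p_w_d + 1e-12)).sum()   # maximizing T = minimizing -T
        opt.zero_grad()
        loss.backward()
        opt.step()
    return Dv.detach(), Zv.detach(), Wv.detach()
```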