On the KL Distance (KL Divergence)

Author: 覃含章
Link: https://www.zhihu.com/question/29980971/answer/103807952
Source: Zhihu
Copyright belongs to the author. For commercial reposting, please contact the author for authorization; for non-commercial reposting, please credit the source.

KL divergence was originally introduced from information theory, but since the asker is interested in its use in ML, I won't go into much detail. In brief: given the true distribution P and an approximating distribution Q, D(P||Q) expresses how many extra bits, on average, I need per sample drawn from P if I encode it with an optimal compression scheme built for Q, compared with using an optimal compression scheme built for P itself. (The correspondence between optimal code lengths and probabilities is the content of the Kraft–McMillan theorem.)
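A minimal numerical sketch of this "extra bits" reading (my own illustration, not from the answers; the distributions P and Q below are made up): encode samples from P with the idealized code lengths that are optimal for Q, namely -log2 Q(x), and compare with the lengths optimal for P itself. The average gap is exactly D(P||Q) in bits.

```python
import numpy as np

# Two made-up discrete distributions on the same support (illustration only).
P = np.array([0.6, 0.3, 0.1])   # "true" distribution
Q = np.array([0.2, 0.3, 0.5])   # approximating distribution

# Idealized (non-integer) optimal code lengths in bits: -log2 of the probability.
len_P = -np.log2(P)             # code tuned to P
len_Q = -np.log2(Q)             # code tuned to Q

# Expected code length per sample drawn from P, under each code.
bits_with_Q_code = np.sum(P * len_Q)
bits_with_P_code = np.sum(P * len_P)        # this is just the entropy H(P)

# The gap equals the KL divergence D(P||Q) measured in bits.
kl_P_Q = np.sum(P * np.log2(P / Q))
print(bits_with_Q_code - bits_with_P_code)  # ~0.719 bits
print(kl_P_Q)                               # same value, ~0.719 bits

# Note the asymmetry discussed below: D(Q||P) answers a different question.
print(np.sum(Q * np.log2(Q / P)))           # ~0.844 bits
```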

So it is natural to use it as a statistical distance, because of this built-in probabilistic meaning. But precisely because of that meaning, the asymmetry the asker mentions is unavoidable: D(P||Q) and D(Q||P) answer "distance" questions posed under different compression schemes.

As for statistical distances in general: indeed, there is no essential difference between them. More broadly, KL divergence can be seen as a special case of the phi-divergence family (taking phi(t) = t log t). Note that the definition below is for discrete probability distributions, but replacing the sum with an integral naturally gives the continuous version.
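For reference, a LaTeX rendering of the discrete phi-divergence definition (the original showed it as images; I am assuming the usual convention, as in the Bayraksan–Love reference cited below, with phi convex and phi(1) = 0):

```latex
% Discrete phi-divergence, for a convex phi with phi(1) = 0
\[
  D_\phi(P \,\|\, Q) \;=\; \sum_i q_i\, \phi\!\left(\frac{p_i}{q_i}\right)
\]
% Taking phi(t) = t log t recovers the KL divergence:
\[
  \sum_i q_i \cdot \frac{p_i}{q_i} \log\frac{p_i}{q_i}
  \;=\; \sum_i p_i \log\frac{p_i}{q_i}
  \;=\; D_{\mathrm{KL}}(P \,\|\, Q)
\]
```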

Using other divergences in place of KL makes no essential difference in the theory, as long as phi is convex and closed, because they all carry a similar probabilistic meaning. For example, Pinsker's inequality guarantees that the KL divergence gives a tight bound on the total variation metric; other divergence metrics should enjoy similar bounds, differing at most in the order and the constants. Moreover, the minimization problems defined with any of these divergences are convex, though their concrete computational performance can differ, which is part of why KL remains the most widely used.
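For concreteness, the usual statement of Pinsker's inequality (with the KL divergence measured in nats):

```latex
% Pinsker's inequality: total variation is controlled by the KL divergence
\[
  \delta(P, Q) \;=\; \sup_{A}\, \lvert P(A) - Q(A) \rvert
  \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)}
\]
```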
Reference: Bayraksan G, Love DK. Data-Driven Stochastic Programming Using Phi-Divergences.

Author: Zhihu user
Link: https://www.zhihu.com/question/29980971/answer/93489660
Source: Zhihu
Copyright belongs to the author. For commercial reposting, please contact the author for authorization; for non-commercial reposting, please credit the source.

KL divergence KL(p||q), in the context of information theory, measures the amount of extra bits (or nats) needed to describe samples from the distribution p when the coding is based on q instead of p itself. From the Kraft–McMillan theorem, we know that a coding scheme for values from a set X can be represented as a distribution q(x_i) = 2^(-l_i) over X, where l_i is the length, in bits, of the code for x_i.
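A small sketch of that correspondence (again my own illustration, with made-up distributions): round the ideal lengths -log2 q(x_i) up to integer code lengths for a code built on q, check the Kraft inequality, and observe that the average overhead over the entropy of p is the KL divergence plus at most one bit of rounding slack.

```python
import numpy as np

# Made-up distributions over a 4-symbol alphabet (illustration only).
p = np.array([0.5, 0.25, 0.15, 0.10])   # true source distribution
q = np.array([0.4, 0.3, 0.2, 0.1])      # model we (wrongly) code against

# Integer code lengths of a prefix code built for q (Shannon coding).
l = np.ceil(-np.log2(q)).astype(int)

# Kraft-McMillan: these lengths are realizable by a prefix code iff sum(2^-l) <= 1.
assert np.sum(2.0 ** -l) <= 1.0

entropy_p = -np.sum(p * np.log2(p))      # best possible average length for p
kl_p_q = np.sum(p * np.log2(p / q))      # D(p||q) in bits
expected_len = np.sum(p * l)             # what we actually pay per symbol

# The overhead over the entropy is D(p||q), up to at most 1 bit of rounding slack.
overhead = expected_len - entropy_p
print(f"H(p) = {entropy_p:.3f} bits, D(p||q) = {kl_p_q:.3f} bits, "
      f"overhead = {overhead:.3f} bits")
assert kl_p_q <= overhead < kl_p_q + 1.0
```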

We know that KL divergence is also the relative entropy between two distributions, and that gives some intuition as to why it's used in variational methods. Variational methods use functionals as the measures in their objective functions (e.g., the entropy of a distribution takes in a distribution and returns a scalar quantity). KL divergence is interpreted as the "loss of information" when one distribution is used to approximate another, which is desirable in machine learning because, in models that perform dimensionality reduction, we would like to preserve as much information about the original input as possible. This is most obvious in VAEs, which use the KL divergence between the posterior q and the prior p over the latent variable z. Likewise, you can refer to EM, where we decompose

ln p(X) = L(q) + KL(q||p)

Here L(q) is a lower bound on ln p(X); maximizing it is equivalent to minimizing the KL divergence, which becomes 0 when q(Z) = p(Z|X). However, in many cases we wish to restrict the family of distributions and parameterize q(Z) with a set of parameters w, so that we can optimize with respect to w.
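Spelled out (following the standard variational decomposition, e.g. as presented in Bishop's PRML), the two terms are:

```latex
% Decomposition of the log marginal likelihood into a lower bound and a KL term
\[
  \ln p(X) \;=\; \mathcal{L}(q) \;+\; \mathrm{KL}\bigl(q \,\|\, p(Z \mid X)\bigr)
\]
\[
  \mathcal{L}(q) \;=\; \sum_{Z} q(Z) \ln\frac{p(X, Z)}{q(Z)},
  \qquad
  \mathrm{KL}\bigl(q \,\|\, p\bigr) \;=\; -\sum_{Z} q(Z) \ln\frac{p(Z \mid X)}{q(Z)}
\]
```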

Note that KL(p||q) = - \sum p(Z) ln (q(Z) / p(Z)), and so KL(p||q) is different from KL(q||p). This asymmetry, however, can be exploited: in cases where we wish to learn the parameters of a distribution q that over-compensates for (covers all the mass of) p, we can minimize KL(p||q). Conversely, when we wish q to capture just the main components of p, we can minimize KL(q||p). The example below from the Bishop book illustrates this well.
[Figures from Bishop's PRML illustrating the forward vs. reverse KL approximations.]
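In the same spirit as that figure, here is a minimal numerical sketch (my own, with a made-up bimodal target, not taken from the book): fit a single Gaussian q to a two-mode mixture p on a grid, minimizing either the forward KL(p||q) or the reverse KL(q||p). The forward fit spreads out to cover both modes (mass-covering); the reverse fit collapses onto one mode (mode-seeking).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Grid discretization of the real line (illustration only).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Bimodal target p: equal mixture of two well-separated Gaussians.
p = 0.5 * norm.pdf(x, -3.0, 1.0) + 0.5 * norm.pdf(x, 3.0, 1.0)
p /= p.sum() * dx

def kl(a, b):
    """Grid approximation of KL(a||b) in nats, for densities a, b on x."""
    return np.sum(a * np.log(a / b)) * dx

def fit(direction):
    """Fit a single Gaussian q(mu, sigma) by minimizing the chosen KL direction."""
    def objective(theta):
        mu, log_sigma = theta
        q = norm.pdf(x, mu, np.exp(log_sigma))
        q = q / (q.sum() * dx) + 1e-300      # normalize on the grid, avoid log(0)
        return kl(p, q) if direction == "forward" else kl(q, p)
    return minimize(objective, x0=[0.5, 0.0], method="Nelder-Mead").x

mu_f, ls_f = fit("forward")   # minimizes KL(p||q): mass-covering
mu_r, ls_r = fit("reverse")   # minimizes KL(q||p): mode-seeking
print(f"forward: mu={mu_f:.2f}, sigma={np.exp(ls_f):.2f}")  # wide q straddling both modes
print(f"reverse: mu={mu_r:.2f}, sigma={np.exp(ls_r):.2f}")  # narrow q sitting on one mode
```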
KL divergence belongs to the alpha family of divergences, in which the forward and reverse KL are recovered as separate limits of the parameter alpha. When alpha = 0, the divergence becomes symmetric and is linearly related to the squared Hellinger distance. There are other symmetric divergences, such as the Cauchy–Schwarz divergence, but in machine learning settings, where the goal is to learn simpler, tractable parameterizations of distributions that approximate a target, they might not be as useful as KL.
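For reference, one common parameterization of the alpha-divergence (the one used in Bishop's PRML; KL(p||q) and KL(q||p) are recovered in the limits alpha → 1 and alpha → −1, and alpha = 0 gives a quantity proportional to the squared Hellinger distance):

```latex
% Alpha-divergence (Bishop's parameterization)
\[
  D_\alpha(p \,\|\, q) \;=\; \frac{4}{1 - \alpha^2}
  \left( 1 - \int p(x)^{(1+\alpha)/2}\, q(x)^{(1-\alpha)/2}\, \mathrm{d}x \right)
\]
```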
