Maximum Likelihood Estimation - 七月算法 (julyedu.com) April Machine Learning Algorithm Class Study Notes

  • Maximum likelihood estimation from a Bayesian viewpoint
  • Maximum likelihood estimation
  • Power-law distribution

The following content is excerpted from the lecture notes of the 七月算法 (julyedu.com) April machine learning algorithm class.

Maximum Likelihood Estimation from a Bayesian Viewpoint

By Bayes' theorem,
$$\max P(A_i \mid D) = \max \frac{P(D \mid A_i)\,P(A_i)}{P(D)}$$
Here $P(D)$ is fixed by the observed data and can be treated as a constant, so the objective becomes
$$\max P(D \mid A_i)\,P(A_i)$$
If we further assume the $A_i$ follow a uniform distribution, so that $P(A_i)$ is the same for every $i$, this reduces to
$$\max P(D \mid A_i)$$

From the derivation above we obtain
$$\max P(A_i \mid D) \approx \max P(D \mid A_i)$$
That is, our original goal was to find $\max P(A_i \mid D)$, and we can obtain it approximately by solving $\max P(D \mid A_i)$ instead.
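
A quick numerical check of this reduction, as a sketch only (the candidate hypotheses, the uniform prior, and the flip counts below are made up for illustration, not taken from the course notes): with a uniform prior over a few candidate coin biases $A_i$, the hypothesis that maximizes the posterior $P(A_i \mid D)$ is the same one that maximizes the likelihood $P(D \mid A_i)$.

```python
import numpy as np

# Hypothetical discrete hypotheses A_i: three candidate coin biases.
biases = np.array([0.3, 0.5, 0.8])
prior = np.array([1/3, 1/3, 1/3])            # uniform prior P(A_i)

# Assumed observed data D: 6 heads out of 10 flips.
heads, flips = 6, 10
likelihood = biases**heads * (1 - biases)**(flips - heads)   # P(D | A_i), up to a binomial factor

posterior = likelihood * prior               # numerator of Bayes' theorem
posterior /= posterior.sum()                 # dividing by P(D), a constant

# With a uniform prior, both criteria pick the same hypothesis.
print("argmax of P(A_i | D):", biases[np.argmax(posterior)])
print("argmax of P(D | A_i):", biases[np.argmax(likelihood)])
```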

Maximum Likelihood Estimation
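
As a minimal sketch of the general idea, maximum likelihood picks the parameter value $\theta$ that maximizes $P(D \mid \theta)$. The example below, with made-up Gaussian data and a grid of candidate means (all illustrative assumptions, not material from the notes), recovers the mean by maximizing the log-likelihood and compares it with the closed-form answer, the sample mean.

```python
import numpy as np

# Illustrative data (assumed): 500 samples from an unknown Gaussian.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)

# Log-likelihood log P(D | mu) of a Gaussian with fixed sigma,
# evaluated on a grid of candidate means (kept one-dimensional to stay minimal).
sigma = data.std()
mus = np.linspace(data.min(), data.max(), 1001)
log_lik = np.array([
    np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (data - mu)**2 / (2 * sigma**2))
    for mu in mus
])

mu_mle = mus[np.argmax(log_lik)]
print("MLE of mu  :", mu_mle)        # numerically close to ...
print("Sample mean:", data.mean())   # ... the closed-form MLE for a Gaussian mean
```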

Power-Law Distribution

The power-law distribution (power law) is also known as the long-tail distribution.

$$f(x) = ax^{k} + o(x^{k})$$


Figure: an example power-law graph, used to demonstrate ranking of popularity. To the right is the long tail, and to the left are the few that dominate (also known as the 80–20 rule).

Let $y$ be the number of sites that were visited by $x$ users.
In a power law we have $y = Cx^{a}$, which means that $\log(y) = \log(C) + a\log(x)$.
So a power law with exponent $a$ is seen as a straight line with slope $a$ on a log-log plot.
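
A small sketch of this log-log linearity, using assumed values of $C$ and $a$ (nothing below comes from the notes): generate noise-free data from $y = Cx^{a}$, take logarithms, and recover the slope $a$ and the constant $C$ with a least-squares line fit.

```python
import numpy as np

# Assumed power-law parameters, for illustration only.
C, a = 100.0, -2.0                   # y = C * x**a; a < 0 gives the long tail

x = np.arange(1, 1001, dtype=float)
y = C * x**a

# On a log-log scale the relation is linear: log(y) = log(C) + a*log(x).
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)

print("recovered slope a:", slope)                 # ~ -2.0
print("recovered C      :", np.exp(intercept))     # ~ 100.0
```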
