小马哥课堂 - Statistics - Unbiased Estimation

Introducing the definition

Take the earlier example of the weights of 200,000 apples. Weighing all 200,000 apples (or even more) is not realistic. Instead, we can randomly pick 100 apples (one sample) and compute their average weight, denoted \overline x_1. Simply taking \overline x_1 as the population mean would certainly be inaccurate, because drawing another random sample of 100 apples would give an average weight different from \overline x_1. To make the estimate more reliable, we repeat the sampling many times, compute the mean of each sample, denote these means \overline x_1, \overline x_2, \dots, \overline x_k, and then average them. As the number of repeated samples grows, this average of sample means tends to the population expectation \mu. If E[\overline x]=\mu holds, then \overline x is an unbiased estimate of the population expectation \mu.
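As a quick numerical illustration of this idea (a minimal sketch, not part of the original example: the simulated apple weights, the normal distribution, and the number of repetitions are all assumptions), the average of many sample means lands very close to the population mean:

import numpy as np

rng = np.random.default_rng(0)

# Simulated "population": 200,000 apple weights in grams (the distribution is made up).
population = rng.normal(loc=150.0, scale=20.0, size=200_000)
mu = population.mean()  # the population expectation we are trying to estimate

# Repeatedly draw samples of 100 apples and record each sample mean.
sample_means = [rng.choice(population, size=100, replace=False).mean()
                for _ in range(1_000)]

print("population mean:        ", round(float(mu), 3))
print("average of sample means:", round(float(np.mean(sample_means)), 3))
# The two numbers agree closely: each individual sample mean wanders around mu,
# but averaging over many repetitions washes the sampling error out.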

Definition

An unbiased estimate is an inference about a population parameter, made from a sample statistic, that carries no systematic error. If the mathematical expectation of an estimator equals the true value of the parameter being estimated, the estimator is called an unbiased estimator of that parameter; this property, unbiasedness, is one criterion for judging how good an estimator is. The practical meaning of unbiasedness is that, over many repetitions, the average of the estimates comes close to the true value of the parameter being estimated.

In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. Otherwise the estimator is said to be biased.
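In symbols, the definition above reads, for an estimator \hat\theta of a parameter \theta:

\begin{array}{rcl} \operatorname{bias}(\hat\theta) &=& E[\hat\theta]-\theta \\ \hat\theta \text{ is unbiased} &\Longleftrightarrow& E[\hat\theta]=\theta \end{array}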

Proof that the sample variance is unbiased

Population expectation: E[X]=\mu

Population variance: \sigma^2=E[(X-\mu)^2]

Sample mean: \overline x=\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n x_i

Sample variance: S^2=\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\overline x)^2

Why must we divide by n-1 for the sample variance to be an unbiased estimator of the population variance? Why does dividing by n give a sample variance that, on average, underestimates the population variance? And why is the correction exactly n-1, rather than n-2, n-3, or some other number?

Suppose instead the sample variance were defined as S^2=\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\overline x)^2. By the definition of an unbiased estimator, we would need E[S^2]=\sigma^2. Let us compute:

\begin{array}{rcl} E[S^2]&=&E[\frac{1}{n}\cdot \displaystyle\sum_{i=1}^n (x_i-\overline x)^2] \\ &=&E[\frac{1}{n}\cdot \displaystyle\sum_{i=1}^n [(x_i-\mu)+(\mu-\overline x)]^2] \\ &=& E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n [(x_i-\mu)^2+2(x_i-\mu)(\mu-\overline x)+(\mu-\overline x)^2]] \\&=& E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]+E[\frac{2}{n} \cdot \displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x)] + E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n(\mu-\overline x)^2] \qquad (1) \end{array}

First, evaluate the cross term in (1):

\begin{array}{rcl} \frac{2}{n}\cdot\displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x) &=&\frac{2}{n}\cdot(\mu-\overline x)\cdot\displaystyle\sum_{i=1}^n(x_i-\mu) \\ &=&2\cdot(\mu-\overline x)\cdot \frac{1}{n}\displaystyle\sum_{i=1}^n(x_i-\mu) \qquad (\overline x=\frac{1}{n} \cdot \displaystyle\sum_{i=1}^n x_i) \\&=&2\cdot(\mu-\overline x)\cdot(\overline x-\mu) \\ &=& -2\cdot(\overline x-\mu)^2 \end{array}
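The second step above uses the following identity, which follows directly from the definition \overline x=\frac{1}{n}\cdot\sum_{i=1}^n x_i:

\begin{array}{rcl} \displaystyle\sum_{i=1}^n (x_i-\mu) &=& \displaystyle\sum_{i=1}^n x_i - n\mu \\ &=& n\overline x - n\mu \\ &=& n\cdot(\overline x-\mu) \end{array}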

Substituting the cross term back into (1), and noting that the last sum in (1) is simply n identical copies of (\mu-\overline x)^2:

\begin{array}{rcl} E[S^2]&=&E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]+E[\frac{2}{n} \cdot \displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x)] + E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n(\mu-\overline x)^2] \qquad (1)\\ &=& E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]-2\cdot E[(\overline x - \mu)^2] + E[(\overline x-\mu)^2] \\ &=&E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2] - E[(\overline x-\mu)^2] \qquad (2)\end{array}

By the definition of the population variance, E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]=\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n E[(x_i-\mu)^2]=\sigma^2.
Each observation x_i is drawn from the population, which has expectation \mu and variance \sigma^2.
The sampling distribution of the sample mean \overline x has expectation \mu and variance \frac{\sigma^2}{n}, so E[(\overline x-\mu)^2]=\frac{\sigma^2}{n}. Substituting into (2) gives E[S^2]=\sigma^2-\frac{\sigma^2}{n}=\frac{n-1}{n}\cdot\sigma^2.
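For completeness, here is the standard justification of the \frac{\sigma^2}{n} used above, assuming the observations x_1,\dots,x_n are drawn independently from the population:

\begin{array}{rcl} E[(\overline x-\mu)^2] &=& \operatorname{Var}[\overline x] \;=\; \operatorname{Var}[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n x_i] \\ &=& \frac{1}{n^2}\cdot\displaystyle\sum_{i=1}^n \operatorname{Var}[x_i] \qquad \text{(independence)} \\ &=& \frac{1}{n^2}\cdot n\sigma^2 \;=\; \frac{\sigma^2}{n} \end{array}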
So when the denominator of the sample variance is n, its expectation systematically falls short of the population variance, by the factor \frac{n-1}{n}. If we replace the denominator n with n-1, the estimator appears to become unbiased. Is that really the case? Let us run through the derivation once more:

We know the sample variance is defined as S^2=\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\overline x)^2, so

\begin{array}{rcl} E[S^2]&=&E[\frac{1}{n-1}\cdot \displaystyle\sum_{i=1}^n (x_i-\overline x)^2] \\ &=&E[\frac{1}{n-1}\cdot \displaystyle\sum_{i=1}^n [(x_i-\mu)+(\mu-\overline x)]^2] \\ &=& E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n [(x_i-\mu)^2+2(x_i-\mu)(\mu-\overline x)+(\mu-\overline x)^2]] \\&=& E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]+E[\frac{2}{n-1} \cdot \displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x)] + E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n(\mu-\overline x)^2] \qquad (3) \end{array}

As before, evaluate the cross term first:

\begin{array}{rcl} \frac{2}{n-1}\cdot\displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x) &=&\frac{2}{n-1}\cdot(\mu-\overline x)\cdot\displaystyle\sum_{i=1}^n(x_i-\mu) \\ &=&\frac{2n}{n-1}\cdot(\mu-\overline x)\cdot \frac{1}{n}\displaystyle\sum_{i=1}^n(x_i-\mu) \qquad (\overline x=\frac{1}{n} \cdot \displaystyle\sum_{i=1}^n x_i) \\&=&\frac{2n}{n-1}\cdot(\mu-\overline x)\cdot(\overline x-\mu) \\ &=& -\frac{2n}{n-1}\cdot(\overline x-\mu)^2 \end{array}

Substituting back into (3), and again using the fact that the last sum in (3) is n identical copies of (\mu-\overline x)^2:

\begin{array}{rcl} E[S^2]&=&E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]+E[\frac{2}{n-1} \cdot \displaystyle\sum_{i=1}^n(x_i-\mu)(\mu-\overline x)] + E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n(\mu-\overline x)^2] \qquad (3)\\ &=& E[\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2]-\frac{2n}{n-1}\cdot E[(\overline x - \mu)^2] + \frac{n}{n-1}\cdot E[(\overline x-\mu)^2] \\ &=&\frac{n}{n-1}\cdot E[\frac{1}{n}\cdot\displaystyle\sum_{i=1}^n (x_i-\mu)^2] -\frac{n}{n-1}\cdot E[(\overline x-\mu)^2] \\&=&\frac{n}{n-1}\cdot\sigma^2-\frac{n}{n-1}\cdot\frac{\sigma^2}{n} \\ &=&\frac{n-1}{n-1}\cdot\sigma^2 \;=\; \sigma^2\end{array}

So E[S^2]=\sigma^2: the sample variance S^2=\frac{1}{n-1}\cdot\displaystyle\sum_{i=1}^n (x_i-\overline x)^2 is indeed an unbiased estimator of the population variance \sigma^2.
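A quick simulation makes the result concrete (a minimal sketch; the population parameters, sample size, and number of repetitions below are arbitrary assumptions): the divide-by-n variance undershoots \sigma^2 by roughly the factor \frac{n-1}{n}, while the divide-by-(n-1) variance does not.

import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 10.0, 3.0      # population parameters, chosen arbitrarily for the demo
n, repeats = 10, 100_000   # small samples make the bias easy to see

biased, unbiased = [], []
for _ in range(repeats):
    x = rng.normal(mu, sigma, size=n)
    biased.append(x.var(ddof=0))    # denominator n
    unbiased.append(x.var(ddof=1))  # denominator n - 1 (Bessel's correction)

print("true variance sigma^2      :", sigma**2)                           # 9.0
print("mean of divide-by-n S^2    :", round(float(np.mean(biased)), 3))   # close to (n-1)/n * 9 = 8.1
print("mean of divide-by-(n-1) S^2:", round(float(np.mean(unbiased)), 3)) # close to 9.0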

Bessel's correction

The factor \frac{n}{n-1}, which turns the divide-by-n sample variance into the divide-by-(n-1) one, is usually called the Bessel correction factor. Some references instead use the term Bessel's correction for the substitution of n-1 for n in the denominator itself.

In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation.
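In NumPy terms, applying the factor n/(n-1) to the divide-by-n variance gives exactly the divide-by-(n-1) variance (a small sketch with made-up data):

import numpy as np

x = np.array([4.0, 7.0, 13.0, 16.0])   # made-up sample
n = len(x)

biased = x.var(ddof=0)                  # divides by n
corrected = biased * n / (n - 1)        # multiply by the Bessel correction factor n/(n-1)

print(corrected)        # 30.0
print(x.var(ddof=1))    # 30.0 -- identical to computing with n-1 in the denominator directly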
