Proof of the Linearity of Expectation and Variance

An attempt to imitate gbl's style of writing (not really).
Note: if a random variable has infinitely many possible values, this method may not be rigorous.

Suppose we have two mutually independent random variables:
Variable 1: $P(A=a_i)=x_i\ (i=1,2,\dots,n)$; the expectation of $A$ is $E(A)$ and its variance is $D(A)$.
Variable 2: $P(B=b_j)=y_j\ (j=1,2,\dots,m)$; the expectation of $B$ is $E(B)$ and its variance is $D(B)$.
Let $C=A+B$.
To prove: $E(C)=E(A)+E(B)$ and $D(C)=D(A)+D(B)$.

Analysis: by the basic properties of a probability distribution, $\sum\limits_{i=1}^{n}x_i=1$ and $\sum\limits_{j=1}^{m}y_j=1$.
By the definition of expectation, $E(A)=\sum\limits_{i=1}^{n}x_ia_i$ and $E(B)=\sum\limits_{j=1}^{m}y_jb_j$.
Since the two variables are independent, once $A$ and $B$ are both fixed (say $A=a_i$ and $B=b_j$), the probability of $C$ taking its value in this case can be computed: it is $x_iy_j$.
So we can enumerate all pairs $(i,j)$ and compute each pair's contribution to the total expectation. After expanding, we can factor out common terms and use the four identities above to simplify the expression.
Note that if two distinct pairs $(i,j)$ and $(i',j')$ satisfy $a_i+b_j=a_{i'}+b_{j'}$, we may still compute their contributions separately.
This is because the definition of expectation lets us factor out $a_i+b_j$ and add the probabilities together, and "adding the probabilities" is exactly the same as listing every distinct value of $C$, computing the probability of each value, and then taking the expectation.
So computing the contributions pair by pair gives the same result as listing all possible values of $C$ and then taking the expectation.

(I wrote the derivation in my math textbook but forgot to bring the book home, so I have to redo it.)
Solution:
$$E(C)=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j(a_i+b_j)$$
$$=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_ia_iy_j+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_jb_j$$
$$=\sum\limits_{i=1}^{n}x_ia_i\sum\limits_{j=1}^{m}y_j+\sum\limits_{i=1}^{n}x_i\sum\limits_{j=1}^{m}y_jb_j$$
$$=\sum\limits_{i=1}^{n}x_ia_i+\sum\limits_{i=1}^{n}x_iE(B)$$
$$=E(A)+E(B)$$
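
A quick numerical sanity check of both the grouping argument from the analysis and the result itself. This is a minimal Python sketch with two small hypothetical distributions (the numbers are made up for illustration), not part of the proof:

```python
# Minimal sketch: compare "sum contributions over all pairs (i, j)" against
# "build the distribution of C = A + B first, then take its expectation".
from collections import defaultdict

# Hypothetical independent discrete distributions, value -> probability.
A = {0: 0.2, 1: 0.5, 3: 0.3}   # P(A = a_i) = x_i
B = {1: 0.6, 2: 0.4}           # P(B = b_j) = y_j

# Method 1: sum x_i * y_j * (a_i + b_j) over every pair (i, j).
e_pairs = sum(x * y * (a + b) for a, x in A.items() for b, y in B.items())

# Method 2: collect the distribution of C (equal sums merge, probabilities add),
# then apply the definition of expectation to C.
C = defaultdict(float)
for a, x in A.items():
    for b, y in B.items():
        C[a + b] += x * y
e_grouped = sum(p * c for c, p in C.items())

e_a = sum(x * a for a, x in A.items())
e_b = sum(y * b for b, y in B.items())
print(e_pairs, e_grouped, e_a + e_b)   # all three agree: 2.8
```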

Expectation is still pretty friendly; the real headache is variance…
Analysis: same idea, except that we now also need the two defining formulas
$$D(A)=\sum\limits_{i=1}^{n}x_i(a_i-E(A))^2,\quad D(B)=\sum\limits_{j=1}^{m}y_j(b_j-E(B))^2$$
Here we can again enumerate all pairs $(i,j)$ and compute each pair's contribution to the total variance. When two distinct pairs $(i,j)$ give the same $a_i+b_j$, since $E(C)$ is a fixed number, we can again factor out the common term and add the probabilities.
But the variance involves a square, so the computation gets messy.

Solution: we have already shown $E(C)=E(A)+E(B)$ above. Hence
$$D(C)=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j(a_i+b_j-E(A)-E(B))^2$$
$$=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[(a_i-E(A))+(b_j-E(B))]^2$$
$$=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)]^2+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[b_j-E(B)]^2+2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)][b_j-E(B)]$$
$$=\sum\limits_{i=1}^{n}x_i[a_i-E(A)]^2\sum\limits_{j=1}^{m}y_j+\sum\limits_{i=1}^{n}x_i\sum\limits_{j=1}^{m}y_j[b_j-E(B)]^2+2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)][b_j-E(B)]$$
$$=\sum\limits_{i=1}^{n}x_i[a_i-E(A)]^2+\sum\limits_{i=1}^{n}x_iD(B)+2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)][b_j-E(B)]$$
$$=D(A)+D(B)+2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)][b_j-E(B)]$$

Now consider the cross term:
$$\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_j[a_i-E(A)][b_j-E(B)]$$
$$=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_ia_iy_jb_j-\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_ja_iE(B)-\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_jb_jE(A)+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}x_iy_jE(A)E(B)$$
$$=\sum\limits_{i=1}^{n}x_ia_i\sum\limits_{j=1}^{m}y_jb_j-E(B)\sum\limits_{i=1}^{n}x_ia_i\sum\limits_{j=1}^{m}y_j-E(A)\sum\limits_{i=1}^{n}x_i\sum\limits_{j=1}^{m}y_jb_j+E(A)E(B)\sum\limits_{i=1}^{n}x_i\sum\limits_{j=1}^{m}y_j$$
$$=\sum\limits_{i=1}^{n}x_ia_iE(B)-E(B)\sum\limits_{i=1}^{n}x_ia_i-E(A)\sum\limits_{i=1}^{n}x_iE(B)+E(A)E(B)\sum\limits_{i=1}^{n}x_i$$
$$=E(A)E(B)-E(A)E(B)-E(A)E(B)+E(A)E(B)$$
$$=0$$

$$D(C)=D(A)+D(B)+0=D(A)+D(B)$$
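
Another minimal sketch (same hypothetical distributions as before) that checks both the vanishing cross term and the final identity numerically; a sanity check, not a proof:

```python
# Minimal sketch: for independent A and B, the cross term vanishes and
# D(A + B) = D(A) + D(B).
from collections import defaultdict

A = {0: 0.2, 1: 0.5, 3: 0.3}   # hypothetical distribution of A
B = {1: 0.6, 2: 0.4}           # hypothetical distribution of B

def expectation(dist):
    return sum(p * v for v, p in dist.items())

def variance(dist):
    e = expectation(dist)
    return sum(p * (v - e) ** 2 for v, p in dist.items())

# Distribution of C = A + B under independence.
C = defaultdict(float)
for a, x in A.items():
    for b, y in B.items():
        C[a + b] += x * y

ea, eb = expectation(A), expectation(B)
# The cross term 2 * sum_{i,j} x_i y_j (a_i - E(A)) (b_j - E(B)).
cross = 2 * sum(x * y * (a - ea) * (b - eb)
                for a, x in A.items() for b, y in B.items())

print(cross)                                    # ~0, up to floating-point error
print(variance(C), variance(A) + variance(B))   # both ≈ 1.48
```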

Afterword:
This approach is pretty brute-force… surely there is a slicker proof?
Before working this out I only knew that expectation is linear; I didn't expect variance to behave the same way (for independent variables).
The textbook only gives the expectation and variance of the binomial distribution; the variance is stated without proof, and the expectation is proved via a special identity of binomial coefficients.
I don't think the textbook's proof is that great either: it only applies to the binomial distribution, and the proof doesn't actually need binomial-coefficient identities.
But since my ability is limited and I can't come up with a cleverer approach, I look forward to pointers from the experts.
For reference, the expectation and variance of the binomial distribution $X\sim B(n,p)$:
$$E(X)=np$$
$$D(X)=np(1-p)$$
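
A quick check of these two formulas by exact enumeration, with arbitrary hypothetical parameters (again just an illustration, not a proof):

```python
# Enumerate a binomial distribution X ~ B(n, p) exactly and compare the
# definitional E(X), D(X) with np and np(1 - p).
from math import comb

n, p = 10, 0.3   # hypothetical parameters
dist = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

e = sum(q * k for k, q in dist.items())
d = sum(q * (k - e) ** 2 for k, q in dist.items())

print(e, n * p)            # ≈ 3.0 vs 3.0
print(d, n * p * (1 - p))  # ≈ 2.1 vs 2.1
```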

Post-afterword (?):
The expert mhy shared a neat formula taught by jcl: $D(X)=E(X^2)-E(X)^2$.
Proof (writing $P(X=x_i)=p_i$, $i=1,2,\dots,n$):
$$D(X)=\sum\limits_{i=1}^{n}p_i(x_i-E(X))^2$$
$$=\sum\limits_{i=1}^{n}p_ix_i^2-\sum\limits_{i=1}^{n}2p_ix_iE(X)+\sum\limits_{i=1}^{n}p_iE(X)^2$$
$$=E(X^2)-2E(X)\sum\limits_{i=1}^{n}p_ix_i+E(X)^2\sum\limits_{i=1}^{n}p_i$$
$$=E(X^2)-2E(X)^2+E(X)^2$$
$$=E(X^2)-E(X)^2$$
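
A one-off numerical check of this identity (minimal sketch, made-up distribution):

```python
# Check D(X) = E(X^2) - E(X)^2 on a small hypothetical distribution
# (value -> probability).
X = {-1: 0.25, 2: 0.5, 4: 0.25}

e_x  = sum(p * v for v, p in X.items())               # E(X)
e_x2 = sum(p * v * v for v, p in X.items())           # E(X^2)
d_x  = sum(p * (v - e_x) ** 2 for v, p in X.items())  # D(X) by definition

print(d_x, e_x2 - e_x ** 2)   # both 3.1875
```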
This gives another proof that $D(A+B)=D(A)+D(B)$. Here we also use the fact that $E(AB)=E(A)E(B)$ for independent $A$ and $B$, which is exactly the first piece of the cross-term computation above:
$$D(A+B)=E((A+B)^2)-E(A+B)^2$$
$$=E(A^2+B^2+2AB)-[E(A)+E(B)]^2$$
$$=E(A^2)+E(B^2)+2E(AB)-E(A)^2-E(B)^2-2E(A)E(B)$$
$$=[E(A^2)-E(A)^2]+[E(B^2)-E(B)^2]+2E(AB)-2E(A)E(B)$$
$$=[E(A^2)-E(A)^2]+[E(B^2)-E(B)^2]$$
$$=D(A)+D(B)$$
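
And a sketch of this second route (same hypothetical distributions as earlier): confirm $E(AB)=E(A)E(B)$ under independence, then compute $D(A+B)$ via the identity:

```python
# Second route: E(AB) = E(A)E(B) for independent A, B, so
# D(A + B) = E((A + B)^2) - E(A + B)^2 reduces to D(A) + D(B).
A = {0: 0.2, 1: 0.5, 3: 0.3}   # hypothetical distribution of A
B = {1: 0.6, 2: 0.4}           # hypothetical distribution of B

# Joint outcomes (a, b) with weight x_i * y_j under independence.
pairs = [(a, b, x * y) for a, x in A.items() for b, y in B.items()]

e_a = sum(x * a for a, x in A.items())
e_b = sum(y * b for b, y in B.items())
e_ab = sum(w * a * b for a, b, w in pairs)
print(e_ab, e_a * e_b)                                # both 1.96

e_sum  = sum(w * (a + b) for a, b, w in pairs)        # E(A + B)
e_sum2 = sum(w * (a + b) ** 2 for a, b, w in pairs)   # E((A + B)^2)
d_sum  = e_sum2 - e_sum ** 2                          # D(A + B) via the identity

d_a = sum(x * a * a for a, x in A.items()) - e_a ** 2
d_b = sum(y * b * b for b, y in B.items()) - e_b ** 2
print(d_sum, d_a + d_b)                               # both ≈ 1.48
```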
