The grey model is a powerful forecasting model: it is built on a pattern obtained by accumulating the raw data. Its strength is that it turns a disordered raw sequence into an ordered, exponentially generated sequence; its weakness is that it only suits forecasts of roughly exponential growth, so its scope is rather narrow. GM(1,1) is a first-order differential-equation model containing a single variable.
Let the number of raw samples be $n$. First define the admissible covering interval $\Theta$ as follows:
$$\Theta=\left(e^{-\frac{2}{n+1}},\ e^{\frac{2}{n+1}}\right)$$
Now suppose the raw data sample $x$ is as follows:
| 1 | 2 | 3 | 4 | 5 | $\cdots$ | $n$ |
|---|---|---|---|---|---|---|
| $x(1)$ | $x(2)$ | $x(3)$ | $x(4)$ | $x(5)$ | $\cdots$ | $x(n)$ |
Define the level ratio $\lambda(k)$ as follows:
$$\lambda(k)=\frac{x(k-1)}{x(k)},\quad k=2,3,4,\cdots,n$$
If the following condition holds:
$$\forall k,\ \lambda(k)\in\Theta,\quad k=2,3,4,\cdots,n$$
then we say that $x$ is suitable as input data for GM(1,1) grey forecasting.
If the above $x$ does not satisfy the condition, a linear shift can be applied so that it does:
$$y=x+c$$
$$\lambda_1(k)=\frac{y(k-1)}{y(k)},\quad k=2,3,4,\cdots,n$$
$$\forall k,\ \lambda_1(k)\in\Theta,\quad k=2,3,4,\cdots,n$$
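This admissibility check is easy to script. Below is a minimal MATLAB sketch of it (the variable names and the choice of the shift constant `c` are illustrative assumptions, not prescribed by the text):

```matlab
x = [101.5,102,102.3,102.2,102.0];        % raw series (example values used later in this article)
n = numel(x);
lambda = x(1:end-1) ./ x(2:end);          % level ratios lambda(k) = x(k-1)/x(k), k = 2..n
Theta  = [exp(-2/(n+1)), exp(2/(n+1))];   % admissible covering interval

if any(lambda <= Theta(1) | lambda >= Theta(2))
    c = 10*max(abs(x));                   % illustrative shift constant; any c that works is fine
    y = x + c;                            % linearly shifted series y = x + c
    lambda1 = y(1:end-1) ./ y(2:end);
    assert(all(lambda1 > Theta(1) & lambda1 < Theta(2)), ...
        'shift did not make the series admissible; try another c');
end
```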
To make the model convenient to use, the prediction formula is given directly below; readers who want the detailed derivation should consult the reference listed at the end of this article. In what follows, $x_c(k)$ denotes the predicted accumulated series and $x_0(k)$ denotes the predicted values of the original series.
$$x_c(k+1)=\left(x(1)-\frac{\hat{b}}{\hat{a}}\right)e^{-\hat{a}k}+\frac{\hat{b}}{\hat{a}},\quad k=0,1,2,\cdots,n-1,\cdots$$
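Since $k$ may also run past $n-1$, the same formula yields genuine forecasts beyond the sample. A minimal sketch, assuming the estimates $\hat a$, $\hat b$ and the first observation are already stored in the (hypothetical) variables `a_hat`, `b_hat` and `x_first`:

```matlab
% predicted accumulated value x_c(k+1); any k >= 0 is allowed, so k >= n gives forecasts
xc = @(k) (x_first - b_hat/a_hat) .* exp(-a_hat .* k) + b_hat/a_hat;
% predicted original-series value x_0(k+1) for k >= 1 (inverse accumulation, defined below)
x0 = @(k) xc(k) - xc(k-1);
```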
After the model is built, residual and level-ratio deviation tests are still required. Define the residual $\rho$ as
$$\rho(k)=x(k)-x_0(k)$$
and define the level-ratio deviation $\xi$ as:
$$\xi(k)=1-\frac{1-0.5a}{1+0.5a}\lambda(k)$$
where
$$x_0(k+1)=x_c(k+1)-x_c(k),\quad k=1,2,3,\cdots,n-1,\cdots$$
$$x_0(1)=x(1)$$
$$x^1(k)=\sum_{i=1}^{k}x(i),\quad k=1,2,3,\cdots,n$$
$$z^1(k)=0.5x^1(k)+0.5x^1(k-1),\quad k=2,3,\cdots,n$$
$$x(k)+az^1(k)=b,\quad k=2,3,\cdots,n$$
$$B=\begin{bmatrix}-z^1(2)&1\\-z^1(3)&1\\-z^1(4)&1\\\vdots&\vdots\\-z^1(n)&1\end{bmatrix},\qquad Y=[x(2),x(3),x(4),x(5),\cdots,x(n)]^T$$
$$[\hat{a},\hat{b}]^T=(B^TB)^{-1}B^TY$$
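A compact MATLAB sketch of this estimation step (assuming `x` holds the raw series as a row vector; the backslash operator is used here instead of the explicit inverse, which is numerically preferable but equivalent in exact arithmetic):

```matlab
x  = [101.5,102,102.3,102.2,102.0];  % raw series (example values used later in this article)
n  = numel(x);
x1 = cumsum(x);                      % accumulated (1-AGO) series x^1
z1 = 0.5*(x1(2:n) + x1(1:n-1));      % background values z^1(k), k = 2..n
B  = [-z1', ones(n-1,1)];            % data matrix B
Y  = x(2:n)';                        % data vector Y
ab = B \ Y;                          % least-squares solution of B*[a;b] ~= Y
a_hat = ab(1);  b_hat = ab(2);
```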
If $|\rho(k)|<0.2$ the model is considered to meet the general requirement, and if $|\rho(k)|<0.1$ it meets the higher requirement.
If $|\xi(k)|<0.2$ the model is considered to meet the general requirement, and if $|\xi(k)|<0.1$ it meets the higher requirement.
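These two tests are equally simple to script; a minimal sketch, assuming `x`, the fitted values `x0`, the level ratios `lambda` and the estimate `a_hat` are already available from the previous steps:

```matlab
rho = x - x0;                                           % residuals rho(k)
xi  = 1 - ((1 - 0.5*a_hat)/(1 + 0.5*a_hat)) .* lambda;  % level-ratio deviations xi(k), k = 2..n
meets_general = all(abs(rho) < 0.2) && all(abs(xi) < 0.2);
meets_high    = all(abs(rho) < 0.1) && all(abs(xi) < 0.1);
```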
Now suppose we have the following data set and let us use the grey model to make a forecast.
Suppose the wheat yield of a certain region in each year from 2000 to 2004 is as follows:
| Year | 2000 | 2001 | 2002 | 2003 | 2004 |
|---|---|---|---|---|---|
| Yield | 101.5 | 102 | 102.3 | 102.2 | 102.0 |
The grey-model forecasting process is as follows:
First list the raw data set $x$:
$$x=(101.5,\ 102,\ 102.3,\ 102.2,\ 102.0)$$
Compute the level ratios with the formula below:
$$\lambda(k)=\frac{x(k-1)}{x(k)},\quad k=2,3,4,\cdots,n$$
$$\lambda=(0.995098,\ 0.997067,\ 1.000978,\ 1.001961)$$
Here $n$ is taken to be 5, so $\Theta$ is determined:
$$\Theta=(0.7165,\ 1.3956)$$
Every level ratio therefore satisfies $\forall k,\ \lambda(k)\in\Theta$ $(k=2,3,4,\cdots,n)$,
which shows that $x$ can indeed be used as data for GM(1,1).
Next we need to compute $x^1$ and $B$. From the definitions above,
$$x^1(k)=\sum_{i=1}^{k}x(i),\quad k=1,2,3,\cdots,n$$
$$B=\begin{bmatrix}-z^1(2)&1\\-z^1(3)&1\\-z^1(4)&1\\\vdots&\vdots\\-z^1(n)&1\end{bmatrix}$$
so substituting the data gives $x^1$ and $B$:
$$x^1=(101.5,\ 203.5,\ 305.8,\ 408,\ 510)$$
$$B=\begin{bmatrix}-0.5[x^1(1)+x^1(2)]&1\\-0.5[x^1(2)+x^1(3)]&1\\-0.5[x^1(3)+x^1(4)]&1\\-0.5[x^1(4)+x^1(5)]&1\end{bmatrix}=\begin{bmatrix}-152.5&1\\-254.65&1\\-356.9&1\\-459&1\end{bmatrix}$$
$$Y=[x(2),x(3),x(4),x(5)]^T=\begin{bmatrix}102\\102.3\\102.2\\102.0\end{bmatrix}$$
Then:
$$[\hat{a},\hat{b}]^T=(B^TB)^{-1}B^TY=\begin{bmatrix}0.0000978234\\102.1549\end{bmatrix}$$
Finally, substitute into the prediction equation:
$$x_c(k+1)=\left(x(1)-\frac{\hat{b}}{\hat{a}}\right)e^{-\hat{a}k}+\frac{\hat{b}}{\hat{a}}=-1044177.400000786\,e^{-0.0000978234k}+1044278.900000786$$
Setting $k=0,1,2,3,4$ and comparing against the original values:
$$x_c=(101.5,\ 203.64,\ 305.77,\ 407.89,\ 510)$$
$$x_0(k+1)=x_c(k+1)-x_c(k),\quad k=1,2,3,\cdots,n-1$$
$$x_0(1)=x(1)$$
so $x_0$ is:
$$x_0=(101.5,\ 102.14,\ 102.13,\ 102.12,\ 102.11)$$
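A short MATLAB sketch of this evaluation step, using the estimates obtained above (it should reproduce $x_c$ and $x_0$ up to rounding):

```matlab
a_hat = 0.0000978234;  b_hat = 102.1549;                       % estimates from above
xc = (101.5 - b_hat/a_hat) * exp(-a_hat*(0:4)) + b_hat/a_hat;  % x_c(1..5), i.e. k = 0..4
x0 = [xc(1), xc(2:end) - xc(1:end-1)];                         % x_0(1..5) by inverse accumulation
```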
Now perform the residual and level-ratio deviation tests to decide whether the model is acceptable:
$$\rho(k)=x(k)-x_0(k)$$
$$\xi(k)=1-\frac{1-0.5a}{1+0.5a}\lambda(k)$$
The results are:

| Original data | Predicted data | Residual | Level-ratio deviation |
|---|---|---|---|
| 101.5 | 101.50 | 0 | |
| 102 | 102.14 | -0.14 | 0.0050 |
| 102.3 | 102.13 | 0.17 | 0.0030 |
| 102.2 | 102.12 | 0.08 | -0.0009 |
| 102.0 | 102.11 | -0.11 | -0.0019 |
The accuracy is clearly quite high. The corresponding MATLAB program is given below; to run it on other raw data simply replace x, but note that you must first confirm that the data is admissible for grey forecasting!
x=[101.5,102,102.3,102.2,102.0];        % raw data; replace with your own series
[p,q]=size(x);                          % q is the number of samples n

% level ratios lamuda(k-1) = x(k-1)/x(k), k = 2..n
sum1=1;
for k=2:1:q
    lamuda(sum1)=x(k-1)/x(k);
    sum1=sum1+1;
end

% bounds of the admissible covering interval (check the level ratios against these first)
thrate1=exp(-2/(q+1));
thrate2=exp(2/(q+1));

% accumulated (1-AGO) series x1
for ll=1:1:q
    sum2=0;
    for u=1:1:ll
        sum2=sum2+x(u);
    end
    x1(ll)=sum2;
end

% data matrix B: first column holds -z1(k), second column is all ones
for j=1:1:q-1
    B(j,1)=-0.5*(x1(j)+x1(j+1));
    B(j,2)=1;
end
Y=(x(2:1:q))';                          % data vector Y

% least-squares estimates a1, b1 of a and b
result=(inv(B'*B))*B'*Y;
a1=result(1);b1=result(2);

% predicted accumulated series yucesum(k) = xc(k)
canshu1=(x(1)-b1/a1);
canshu2=b1/a1;
yucesum(1)=x(1);
for g=1:1:q-1
    yucesum(g+1)=canshu1*(exp(-a1*g))+canshu2;
end

% predicted original series x0 by inverse accumulation
x0(1)=x(1);
for r=1:1:q-1
    x0(r+1)=yucesum(r+1)-yucesum(r);
end

% residuals cancha and level-ratio deviations jibipiancha
for o=1:1:q
    cancha(o)=(x(o)-x0(o));
end
for h=1:1:q-1
    jibipiancha(h+1)=1-((1-0.5*a1)/(1+0.5*a1))*lamuda(h);
end
Reference: Si Shoukui et al., 《数学建模算法与应用》 (Mathematical Modeling: Algorithms and Applications), National Defense Industry Press.