June 11–13, 2022
Relationships between different variables
The autoregressive distributed lag (ADL) model
More generally, we could include the contemporaneous value of $x_t$:
$$
\begin{gathered}
y_t = \phi_0 + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} \\
\quad + \gamma_0 x_t + \gamma_1 x_{t-1} + \gamma_2 x_{t-2} + \cdots + \gamma_q x_{t-q} + \epsilon_{yt}, \\
\text{or } \Phi(L) y_t = \phi_0 + \Psi(L) x_t + \epsilon_{yt}, \\
\text{where } \Phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p \text{ and } \Psi(L) = \gamma_0 + \gamma_1 L + \cdots + \gamma_q L^q.
\end{gathered}
$$
(1) $\{x_t\}$ is exogenous and evolves independently of $\{y_t\}$:
$$
x_t = \delta_0 + \delta_1 x_{t-1} + \cdots + \delta_r x_{t-r} + \epsilon_{xt}, \quad \text{or } D(L) x_t = \delta_0 + \epsilon_{xt},
$$
where $\{\epsilon_{xt}\}$ is independent of $\{\epsilon_{yt}\}$.
(2) $\{y_t\}$ and $\{x_t\}$ are stationary.
If all regressors are dated $t-1$ or earlier, the model can be used for forecasting.
If period-$t$ variables are included, the model can be used to estimate dynamic causal effects.
The CCF can be computed using the Yule–Walker approach.
ADL model
when $x_t$ is white noise
Estimation: OLS or MLE
Model diagnostics: the estimation residuals should behave as a white-noise process and be uncorrelated with $\{x_t, x_{t-1}, \cdots\}$.
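As a minimal numpy sketch of the estimation and diagnostic steps above (the AR and ADL parameter values are illustrative, not from the notes): simulate an exogenous $x_t$ and an ADL(1,1) for $y_t$, estimate by OLS, then check the residual autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Simulate an exogenous AR(1) x_t and an ADL(1,1) for y_t
# (illustrative parameter values)
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 1.0 + 0.4 * y[t - 1] + 0.8 * x[t] + 0.3 * x[t - 1] + rng.normal()

# OLS: regress y_t on (1, y_{t-1}, x_t, x_{t-1})
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Diagnostics: residuals should be close to white noise
resid = y[1:] - X @ beta
r1 = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print("coefficients:", beta.round(3), "residual lag-1 autocorr:", round(r1, 3))
```

With a correctly specified model, the estimated coefficients recover the simulation values and the residual lag-1 autocorrelation is near zero.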
What if $\{x_t\}$ is endogenous?
We treat both $y_t$ and $x_t$ as endogenous variables:
$$
\begin{aligned}
y_t - b_{12}^{(0)} x_t &= c_{10} + b_{11}^{(1)} y_{t-1} + b_{12}^{(1)} x_{t-1} + \epsilon_{yt} \\
x_t - b_{21}^{(0)} y_t &= c_{20} + b_{21}^{(1)} y_{t-1} + b_{22}^{(1)} x_{t-1} + \epsilon_{xt}
\end{aligned}
\qquad
\begin{bmatrix} \epsilon_{yt} \\ \epsilon_{xt} \end{bmatrix}
\overset{i.i.d.}{\sim}
N\!\left(0, \begin{bmatrix} \sigma_y^2 & 0 \\ 0 & \sigma_x^2 \end{bmatrix}\right)
$$
Equivalently, we can write
$$
\begin{bmatrix} 1 & -b_{12}^{(0)} \\ -b_{21}^{(0)} & 1 \end{bmatrix}
\begin{bmatrix} y_t \\ x_t \end{bmatrix}
=
\begin{bmatrix} c_{10} \\ c_{20} \end{bmatrix}
+
\begin{bmatrix} b_{11}^{(1)} & b_{12}^{(1)} \\ b_{21}^{(1)} & b_{22}^{(1)} \end{bmatrix}
\begin{bmatrix} y_{t-1} \\ x_{t-1} \end{bmatrix}
+
\begin{bmatrix} \epsilon_{yt} \\ \epsilon_{xt} \end{bmatrix},
\quad \text{or } B_0 z_t = c + B_1 z_{t-1} + \epsilon_t, \quad
\epsilon_t \overset{i.i.d.}{\sim} N\!\left(0, \begin{bmatrix} \sigma_y^2 & 0 \\ 0 & \sigma_x^2 \end{bmatrix}\right).
$$
OLS can no longer be used here, because of simultaneous-equation bias.
We can then pre-multiply both sides by
Adjugate of a $2 \times 2$ matrix: swap the main-diagonal entries and negate the off-diagonal entries.
$$
B_0^{-1} = \frac{1}{1 - b_{12}^{(0)} b_{21}^{(0)}}
\begin{bmatrix} 1 & b_{12}^{(0)} \\ b_{21}^{(0)} & 1 \end{bmatrix}
\quad \text{to give} \quad
z_t = B_0^{-1} c + B_0^{-1} B_1 z_{t-1} + B_0^{-1} \epsilon_t.
$$
This gives a reduced-form model:
$$
\Phi_0 \equiv B_0^{-1} c, \quad \Phi_1 \equiv B_0^{-1} B_1, \quad a_t \equiv B_0^{-1} \epsilon_t, \quad
\Sigma \equiv B_0^{-1} \begin{bmatrix} \sigma_y^2 & 0 \\ 0 & \sigma_x^2 \end{bmatrix} (B_0^{-1})'
$$
$$
z_t = \Phi_0 + \Phi_1 z_{t-1} + a_t, \quad
a_t = \begin{bmatrix} a_{1t} \\ a_{2t} \end{bmatrix} \overset{i.i.d.}{\sim} N(0, \Sigma).
$$
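The structural-to-reduced-form mapping can be sketched numerically; the structural parameter values below are illustrative, not from the notes.

```python
import numpy as np

# Hypothetical structural parameters (illustrative values only)
b12, b21 = 0.2, 0.3                 # contemporaneous coefficients b_12^(0), b_21^(0)
c = np.array([1.0, 0.5])
B1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
sig = np.diag([1.0, 0.5])           # diag(sigma_y^2, sigma_x^2)

B0 = np.array([[1.0, -b12],
               [-b21, 1.0]])
B0inv = np.linalg.inv(B0)

# Reduced-form parameters: Phi0 = B0^{-1} c, Phi1 = B0^{-1} B1,
# Sigma = B0^{-1} diag(sigma_y^2, sigma_x^2) B0^{-1}'
Phi0 = B0inv @ c
Phi1 = B0inv @ B1
Sigma = B0inv @ sig @ B0inv.T
print(Phi1.round(3), Sigma.round(3))
```

Note that although the structural shocks are uncorrelated, the reduced-form covariance $\Sigma$ has a nonzero off-diagonal element whenever $b_{12}^{(0)}$ or $b_{21}^{(0)}$ is nonzero.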
The first and second moments of $z_t$ are time-invariant.
Both
$$
E(z_t) = \begin{bmatrix} E(y_t) \\ E(x_t) \end{bmatrix} \equiv \mu
\quad \text{and} \quad
\operatorname{var}(z_t) = \begin{bmatrix} \operatorname{var}(y_t) & \operatorname{cov}(y_t, x_t) \\ \operatorname{cov}(x_t, y_t) & \operatorname{var}(x_t) \end{bmatrix}
\equiv \Gamma_0 = \begin{bmatrix} \Gamma_{11}(0) & \Gamma_{12}(0) \\ \Gamma_{21}(0) & \Gamma_{22}(0) \end{bmatrix}
$$
are time-invariant.
Lag-$k$ cross-covariance matrix of $z_t$:
$$
\Gamma_k = \begin{bmatrix} \Gamma_{11}(k) & \Gamma_{12}(k) \\ \Gamma_{21}(k) & \Gamma_{22}(k) \end{bmatrix}
\equiv \operatorname{cov}(z_t, z_{t-k})
= \begin{bmatrix} \operatorname{cov}(y_t, y_{t-k}) & \operatorname{cov}(y_t, x_{t-k}) \\ \operatorname{cov}(x_t, y_{t-k}) & \operatorname{cov}(x_t, x_{t-k}) \end{bmatrix}
$$
Let the diagonal matrix $D$ be
$$
D \equiv \begin{bmatrix} \operatorname{std}(y_t) & 0 \\ 0 & \operatorname{std}(x_t) \end{bmatrix}
= \begin{bmatrix} \sqrt{\Gamma_{11}(0)} & 0 \\ 0 & \sqrt{\Gamma_{22}(0)} \end{bmatrix}
$$
The concurrent cross-correlation matrix:
$$
\rho_0 \equiv \operatorname{corr}(z_t, z_t)
= \begin{bmatrix} 1 & \operatorname{corr}(y_t, x_t) \\ \operatorname{corr}(x_t, y_t) & 1 \end{bmatrix}
= D^{-1} \Gamma_0 D^{-1}
$$
Lag-$k$ cross-correlation matrix (CCM):
$$
\rho_k = \begin{bmatrix} \rho_{11}(k) & \rho_{12}(k) \\ \rho_{21}(k) & \rho_{22}(k) \end{bmatrix}
\equiv \operatorname{corr}(z_t, z_{t-k})
= \begin{bmatrix} \operatorname{corr}(y_t, y_{t-k}) & \operatorname{corr}(y_t, x_{t-k}) \\ \operatorname{corr}(x_t, y_{t-k}) & \operatorname{corr}(x_t, x_{t-k}) \end{bmatrix}
= D^{-1} \Gamma_k D^{-1}
$$
Estimating the CCM: sample cross-correlation matrices — replace the population quantities in the formulas above with their sample estimates.
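A minimal numpy sketch of the sample CCM, computed directly from the definition $\rho_k = D^{-1}\Gamma_k D^{-1}$ with sample moments (the white-noise example is illustrative):

```python
import numpy as np

def sample_ccm(z, k):
    """Sample lag-k cross-correlation matrix rho_k = D^{-1} Gamma_k D^{-1}."""
    T = z.shape[0]
    zc = z - z.mean(axis=0)
    gamma_k = zc[k:].T @ zc[:T - k] / T                       # sample Gamma_k
    d_inv = np.diag(1.0 / np.sqrt(np.diag(zc.T @ zc / T)))    # sample D^{-1}
    return d_inv @ gamma_k @ d_inv

# For bivariate white noise, rho_0 is near I_2 and rho_1 is near 0
rng = np.random.default_rng(1)
z = rng.normal(size=(5000, 2))
rho0 = sample_ccm(z, 0)
rho1 = sample_ccm(z, 1)
print(rho0.round(2), rho1.round(2))
```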
Iterating backward from $z_t$ to $z_0$ yields
$$
z_t = \sum_{j=0}^{t-1} \Phi_1^j \Phi_0 + \Phi_1^t z_0 + \sum_{j=0}^{t-1} \Phi_1^j a_{t-j}
$$
where $\Phi_1^0 = I_2$, the identity matrix. Continuing to iterate backward another $n$ periods, we obtain
$$
z_t = \sum_{j=0}^{t+n-1} \Phi_1^j \Phi_0 + \Phi_1^{t+n} z_{-n} + \sum_{j=0}^{t+n-1} \Phi_1^j a_{t-j}
$$
Stability requires that $\lim_{n \to \infty} \Phi_1^n = 0$, where $\Phi_1 = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix}$.
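The condition $\Phi_1^n \to 0$ holds exactly when all eigenvalues of $\Phi_1$ lie strictly inside the unit circle; a quick numerical check, with an illustrative coefficient matrix:

```python
import numpy as np

# Illustrative coefficient matrix (not from the notes); eigenvalues are 0.7 and 0.2
Phi1 = np.array([[0.5, 0.3],
                 [0.2, 0.4]])

# Stability <=> all eigenvalues of Phi1 strictly inside the unit circle,
# which is equivalent to Phi1^n -> 0 as n -> infinity
eigvals = np.linalg.eigvals(Phi1)
stable = bool(np.all(np.abs(eigvals) < 1))
print("eigenvalues:", eigvals, "stable:", stable)
print("Phi1^50 ~", np.linalg.matrix_power(Phi1, 50))
```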
Stability, together with the process having started in the distant past (or having always followed the equilibrium equations), yields stationarity.
If $z_t$ is stationary, estimate each equation separately by OLS or seemingly unrelated regression (SUR).
Forecasting corresponds to iterating forward; solving for the moving-average representation corresponds to iterating backward.
Consider the stationary two-variable VAR(1) model
$$
z_t = \Phi_0 + \Phi_1 z_{t-1} + a_t.
$$
In Eq.(12), let $n \to \infty$; then we obtain the VMA($\infty$) representation
$$
z_t = \mu + \sum_{s=0}^{\infty} \Psi_s a_{t-s},
$$
where $\Psi_s = \Phi_1^s$ and $\mu = (I - \Phi_1)^{-1} \Phi_0$.
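A small numpy check of this representation, using illustrative stable parameters: $\mu$ computed as $(I-\Phi_1)^{-1}\Phi_0$ should agree with the (truncated) geometric sum $\sum_s \Phi_1^s \Phi_0$.

```python
import numpy as np

# Illustrative, stable VAR(1) parameters (not from the notes)
Phi0 = np.array([1.0, 0.5])
Phi1 = np.array([[0.5, 0.3],
                 [0.2, 0.4]])

# mu = (I - Phi1)^{-1} Phi0 and Psi_s = Phi1^s
mu = np.linalg.solve(np.eye(2) - Phi1, Phi0)
Psi = [np.linalg.matrix_power(Phi1, s) for s in range(25)]

# mu also equals the geometric sum sum_s Phi1^s Phi0 (up to truncation error)
mu_sum = sum(P @ Phi0 for P in Psi)
print(mu, mu_sum)
```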
Recall Eq.(12):
$$
z_t = \sum_{j=0}^{t+n-1} \Phi_1^j \Phi_0 + \Phi_1^{t+n} z_{-n} + \sum_{j=0}^{t+n-1} \Phi_1^j a_{t-j}
$$
Finding the impulse response functions: the partial derivatives of $y$ and $x$ with respect to shocks to the error terms.
Denote $\Psi_s = \begin{bmatrix} \psi_{11}(s) & \psi_{12}(s) \\ \psi_{21}(s) & \psi_{22}(s) \end{bmatrix}$, the impulse responses to the reduced-form innovations $a_t$.
If we can further identify $a_t = B_0^{-1} \epsilon_t$, where $\epsilon_t$ is defined in the structural VAR, then
$$
z_t = \mu + \sum_{s=0}^{\infty} \Phi_1^s B_0^{-1} \epsilon_{t-s}
$$
Denote
$$
\Pi_s = \Phi_1^s B_0^{-1} = \begin{bmatrix} \pi_{11}(s) & \pi_{12}(s) \\ \pi_{21}(s) & \pi_{22}(s) \end{bmatrix},
\quad \text{so that} \quad
\pi_{11}(s) = \frac{\partial y_t}{\partial \epsilon_{y,t-s}}, \;
\pi_{12}(s) = \frac{\partial y_t}{\partial \epsilon_{x,t-s}}, \;
\pi_{21}(s) = \frac{\partial x_t}{\partial \epsilon_{y,t-s}}, \;
\pi_{22}(s) = \frac{\partial x_t}{\partial \epsilon_{x,t-s}}.
$$
These are the impulse response functions of the structural shocks.
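The structural impulse responses $\Pi_s = \Phi_1^s B_0^{-1}$ can be computed directly; the parameter values below are illustrative only.

```python
import numpy as np

# Illustrative reduced-form and structural parameters (not from the notes)
Phi1 = np.array([[0.5, 0.3],
                 [0.2, 0.4]])
b12, b21 = 0.2, 0.3
B0inv = np.linalg.inv(np.array([[1.0, -b12],
                                [-b21, 1.0]]))

# Pi_s = Phi1^s B0^{-1}: responses of (y_t, x_t) to unit structural
# shocks (epsilon_y, epsilon_x) occurring s periods earlier
Pi = [np.linalg.matrix_power(Phi1, s) @ B0inv for s in range(12)]
# e.g. Pi[0][0, 1] is the impact response of y_t to a unit epsilon_x shock
print(Pi[0].round(3), Pi[1].round(3))
```

Under stability the responses die out geometrically as $s$ grows.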
Forecast error variance decomposition
Consider the $j$-step-ahead forecast using the VMA($\infty$) representation of the structural model, Eq.(13):
We could decompose the j-step-ahead forecast error variance into the proportions due to each structural shock:
The $k$-variable, $p$-lag vector autoregressive model has the form
$$
z_t = \Phi_0 + \Phi_1 z_{t-1} + \cdots + \Phi_p z_{t-p} + a_t, \quad p \ge 1
$$
where $\Phi_0$ is a $k$-dimensional vector, the $\Phi_j$ are $k \times k$ matrices, and $\{a_t\}$ is a sequence of serially uncorrelated random vectors with mean zero and covariance matrix $\Sigma$.
There are $k^2 p$ coefficients plus $k$ intercept terms.
Using the lag operator,
$$
\left(I_k - \Phi_1 L - \cdots - \Phi_p L^p\right) z_t = \Phi_0 + a_t
$$
where $I_k$ is the $k \times k$ identity matrix.
Discussion:
The variables included in a VAR model can be chosen on the basis of relevant economic or financial theory, so that they help forecast one another.
To capture the important information in the system, over-parameterization and the loss of degrees of freedom must be avoided.
If the lag length is too small, the model is misspecified; if it is too large, degrees of freedom are wasted.
$$
\begin{aligned}
AIC(p) &= \log\left(|\widehat{\Sigma}_p|\right) + \frac{2\left(k^2 p + k\right)}{T} \\
BIC(p) &= \log\left(|\widehat{\Sigma}_p|\right) + \frac{\left(k^2 p + k\right) \log(T)}{T}
\end{aligned}
$$
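A numpy sketch of lag selection with these criteria: fit a VAR($p$) by equationwise OLS for several $p$ and compute AIC/BIC from the residual covariance. The simulated VAR(1) and its parameters are illustrative.

```python
import numpy as np

def var_ols_sigma(z, p):
    """Equationwise OLS fit of a VAR(p); returns the residual covariance estimate."""
    T, k = z.shape
    Y = z[p:]
    X = np.column_stack([np.ones(T - p)] +
                        [z[p - j:T - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    return resid.T @ resid / (T - p)

def aic_bic(z, p):
    T, k = z.shape
    logdet = np.log(np.linalg.det(var_ols_sigma(z, p)))
    npar = k * k * p + k
    return logdet + 2 * npar / T, logdet + npar * np.log(T) / T

# Simulate a stable VAR(1) and compare criteria across lag lengths
rng = np.random.default_rng(2)
Phi1 = np.array([[0.5, 0.3], [0.2, 0.4]])
T = 2000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = Phi1 @ z[t - 1] + rng.normal(size=2)

scores = {p: aic_bic(z, p) for p in (1, 2, 3)}
print(scores)
```

Since $\log(T) > 2$ here, BIC penalizes extra lags harder than AIC and tends to pick the true, shorter lag length.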
One of the main uses of VAR models is forecasting.
The following intuitive notion of a variable’s forecasting ability is due to Granger (1969).
Results in practice
In the bivariate model, testing $H_0$: $z_2$ does not Granger-cause $z_1$ reduces to testing $H_0: \phi_{12}^{1} = \phi_{12}^{2} = \cdots = \phi_{12}^{p} = 0$ in the linear regression
$$
z_{1t} = \phi_{10} + \phi_{11}^{1} z_{1,t-1} + \cdots + \phi_{11}^{p} z_{1,t-p}
+ \phi_{12}^{1} z_{2,t-1} + \cdots + \phi_{12}^{p} z_{2,t-p} + \epsilon_{1t}
$$
The test statistic is a simple F-statistic or Wald statistic.
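A numpy sketch of the F-test via restricted and unrestricted OLS, on simulated data in which $x$ Granger-causes $y$ but not the reverse (all parameter values illustrative):

```python
import numpy as np

def granger_f(z1, z2, p):
    """F-statistic for H0: z2 does not Granger-cause z1, using p lags of each."""
    T = len(z1)
    y = z1[p:]
    own = [z1[p - j:T - j] for j in range(1, p + 1)]
    cross = [z2[p - j:T - j] for j in range(1, p + 1)]
    Xu = np.column_stack([np.ones(T - p)] + own + cross)   # unrestricted
    Xr = np.column_stack([np.ones(T - p)] + own)           # restricted
    ssr_u = np.sum((y - Xu @ np.linalg.lstsq(Xu, y, rcond=None)[0]) ** 2)
    ssr_r = np.sum((y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]) ** 2)
    df = T - p - Xu.shape[1]                               # residual degrees of freedom
    return ((ssr_r - ssr_u) / p) / (ssr_u / df)

# Simulate: x Granger-causes y, but not vice versa
rng = np.random.default_rng(3)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

print("F(x -> y):", granger_f(y, x, 2))   # should be large
print("F(y -> x):", granger_f(x, y, 2))   # should be near 1 under the null
```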
The block-exogeneity test is the multivariate generalization of the Granger causality test. For example, in a trivariate model with $y_t$, $x_t$ and $w_t$:
The block-exogeneity tests can be done using the Wald test or the likelihood ratio test.
Lag exclusion tests are carried out for each lag in the VAR. For example, in a bivariate VAR(5) model with $y_t$ and $x_t$:
The lag exclusion tests are done using the Wald test in EViews.
The $Q_k(m)$ statistic can be applied to the residual series to check the assumption that there are no serial or cross-correlations in the residuals (i.e., that the residuals are white noise and uncorrelated with the other series).
For a fitted VAR($p$) model, the $Q_k(m)$ statistic of the residuals is asymptotically $\chi^2\left(k^2 m - g\right)$, where $g$ is the number of estimated parameters in the VAR coefficient matrices.
Multivariate Portmanteau tests / Ljung–Box statistics $Q(m)$:
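A minimal numpy sketch of the multivariate Ljung–Box statistic, using the common form $Q_k(m) = T^2 \sum_{j=1}^{m} \frac{1}{T-j}\operatorname{tr}\!\left(\widehat{\Gamma}_j' \widehat{\Gamma}_0^{-1} \widehat{\Gamma}_j \widehat{\Gamma}_0^{-1}\right)$ applied to simulated white-noise residuals (no estimated VAR here, so the reference distribution is $\chi^2(k^2 m)$):

```python
import numpy as np

def portmanteau_q(resid, m):
    """Multivariate Ljung-Box statistic Q_k(m) for a (T x k) residual matrix."""
    T = resid.shape[0]
    rc = resid - resid.mean(axis=0)
    G0inv = np.linalg.inv(rc.T @ rc / T)
    q = 0.0
    for j in range(1, m + 1):
        Gj = rc[j:].T @ rc[:T - j] / T       # lag-j residual cross-covariance
        q += np.trace(Gj.T @ G0inv @ Gj @ G0inv) / (T - j)
    return T * T * q

# For white-noise residuals, Q_k(m) is approximately chi^2 with k^2 m df
rng = np.random.default_rng(4)
resid = rng.normal(size=(3000, 2))
q = portmanteau_q(resid, 5)
print("Q_2(5) =", round(q, 2), "~ chi2 with", 4 * 5, "df under H0")
```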