A First Look at the Vector Autoregression (VAR) Model

Table of Contents

    • Motivation
    • VAR in structural and reduced form
      • Identification logic
      • Identification Techniques
        • Necessity of adding restrictions for identification
        • Sims-Bernanke procedure
        • Blanchard-Quah procedure
          • Example
      • Impulse response
        • Logic of impulse response
        • From VAR to VMA
        • From VMA to impulse response function
        • Confidence intervals of impulse response coefficients
      • Variance decomposition
    • Reference

Motivation

In my last blog, I recognized the potential of the information-based variance decomposition method introduced by Brogaard et al. (2022, RFS) and expressed interest in applying this method to my own research.

As replicating Brogaard et al. (2022, RFS) requires some manipulation of the VAR estimation outputs, I took some time to work through the theory and estimation of the reduced-form VAR coefficients, impulse response functions (IRFs), structural IRFs, orthogonalized IRFs, and variance decomposition.

I summarize what I have learned in three blogs. In the first blog, I show the basic logic of the VAR model using the simplest 2-variable, 1-lag VAR model. In the second blog, I show how to use the var and svar commands to conveniently estimate the VAR model in Stata. In the third blog, I dig deeper, show the theoretical definitions and calculation formulas of the major outputs of a VAR model, and manually calculate them in Stata to thoroughly uncover the black box of VAR estimation.

This blog is the first in my VAR series. In it, I summarize the logic of VAR, impulse responses, and variance decomposition using the simplest 2-variable, 1-lag VAR model. The following summary is based on pp. 264–311 of Enders (2004).

VAR in structural and reduced form

Consider a bivariate structural VAR system composed of two $I(0)$ series $\{y_t\}$ and $\{z_t\}$:

$$
\begin{aligned}
y_t&=b_{10}-b_{12} z_t+\gamma_{11} y_{t-1}+\gamma_{12} z_{t-1}+\epsilon_{yt} \\
z_t&=b_{20}-b_{21} y_t+\gamma_{21} y_{t-1}+\gamma_{22} z_{t-1}+\epsilon_{zt}
\end{aligned} \tag{1}
$$

where $\{\epsilon_{yt}\}$ and $\{\epsilon_{zt}\}$ are uncorrelated white-noise disturbances.

This is not a VAR in reduced form, since $y_t$ and $z_t$ have contemporaneous effects on each other.

Represent Equation (1) in a compact form

$$
B x_t=\Gamma_0+\Gamma_1 x_{t-1}+\epsilon_t \tag{2}
$$

where

$$
B=\left[\begin{array}{cc} 1 & b_{12} \\ b_{21} & 1 \end{array}\right], \quad
x_t=\left[\begin{array}{c} y_t \\ z_t \end{array}\right], \quad
\Gamma_0=\left[\begin{array}{c} b_{10} \\ b_{20} \end{array}\right], \quad
\Gamma_1=\left[\begin{array}{cc} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{array}\right], \quad
\epsilon_t=\left[\begin{array}{c} \epsilon_{yt} \\ \epsilon_{zt} \end{array}\right]
$$

Pre-multiplying by $B^{-1}$ yields the VAR model in standard (reduced) form:

$$
x_t=A_0+A_1 x_{t-1}+e_t \tag{3}
$$

where

$$
\begin{aligned}
&A_0=B^{-1} \Gamma_0 \\
&A_1=B^{-1} \Gamma_1 \\
&e_t=B^{-1} \epsilon_t
\end{aligned}
$$

in this special case

$$
B^{-1}=\frac{1}{1-b_{12} b_{21}}\left[\begin{array}{cc} 1 & -b_{12} \\ -b_{21} & 1 \end{array}\right]
$$

and thus

$$
\left[\begin{array}{c} e_{yt} \\ e_{zt} \end{array}\right]
=B^{-1}\left[\begin{array}{c} \epsilon_{yt} \\ \epsilon_{zt} \end{array}\right]
=\left[\begin{array}{c}
\left(\epsilon_{yt}-b_{12} \epsilon_{zt}\right) /\left(1-b_{12} b_{21}\right) \\
\left(\epsilon_{zt}-b_{21} \epsilon_{yt}\right) /\left(1-b_{12} b_{21}\right)
\end{array}\right] \tag{4}
$$
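
To make the mapping in (2)–(4) concrete, here is a minimal numpy sketch. The parameter values are purely illustrative assumptions, not estimates from any dataset:

```python
import numpy as np

# Assumed structural parameters for the bivariate system (1) -- purely illustrative
b12, b21 = 0.2, -0.3
B = np.array([[1.0, b12],
              [b21, 1.0]])
Gamma0 = np.array([0.1, 0.2])                  # intercepts b10, b20
Gamma1 = np.array([[0.7, 0.1],
                   [0.4, 0.5]])                # lag coefficients gamma_ij

# Closed form of B^{-1}: (1/(1 - b12*b21)) * [[1, -b12], [-b21, 1]]
Binv = np.linalg.inv(B)

# Reduced-form parameters, as in equation (3)
A0 = Binv @ Gamma0
A1 = Binv @ Gamma1

# Equation (4): a one-unit structural shock to eps_y moves BOTH reduced-form errors
e_t = Binv @ np.array([1.0, 0.0])
print(A0, A1, e_t, sep="\n")
```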

Identification logic

  • since, in the reduced form, the right-hand side contains only predetermined variables and the error terms are assumed to be serially uncorrelated with constant variance, each equation in the reduced-form system can be estimated by OLS

    • the latter feature holds because the disturbance terms are assumed to be uncorrelated white-noise series, which gives

      $$E\, e_{1t} e_{1t-i}=E\,\frac{\left(\epsilon_{yt}-b_{12} \epsilon_{zt}\right)\left(\epsilon_{yt-i}-b_{12} \epsilon_{zt-i}\right)}{\left(1-b_{12} b_{21}\right)^2}=0, \quad \forall\, i \neq 0$$

    • the OLS estimator is consistent and asymptotically efficient for the reduced form

  • however, we care about how the innovations cause contemporaneous changes in the focal variables

  • thus, the typical logic of VAR estimation is to first estimate the reduced-form VAR by OLS, and then back out the coefficients of the structural VAR from the estimated reduced-form parameters, as in the simulation sketch below
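
As a concrete illustration, here is a minimal numpy sketch that simulates the reduced form (3) and recovers $A_0$, $A_1$, and $\Sigma$ by equation-by-equation OLS. All parameter values are assumed for illustration and carry over from the sketch above:

```python
import numpy as np

# Illustrative parameters, carried over from the previous sketch
b12, b21 = 0.2, -0.3
Binv = np.linalg.inv(np.array([[1.0, b12], [b21, 1.0]]))
A0 = Binv @ np.array([0.1, 0.2])
A1 = Binv @ np.array([[0.7, 0.1], [0.4, 0.5]])

# Simulate the reduced form (3); structural shocks are uncorrelated white noise
rng = np.random.default_rng(42)
T = 500
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A0 + A1 @ x[t - 1] + Binv @ rng.standard_normal(2)

# OLS equation by equation: regress x_t on a constant and x_{t-1}
X = np.column_stack([np.ones(T - 1), x[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
A0_hat, A1_hat = coef[0], coef[1:].T           # estimates of A0 and A1
resid = x[1:] - X @ coef
Sigma_hat = resid.T @ resid / (T - 1)          # reduced-form error covariance
```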

Identification Techniques

Necessity of adding restrictions for identification

Consider a first-order VAR model with $n$ variables (the identification procedure is invariant to the lag length).

$$
\left[\begin{array}{ccccc}
1 & b_{12} & b_{13} & \ldots & b_{1n} \\
b_{21} & 1 & b_{23} & \ldots & b_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
b_{n1} & b_{n2} & b_{n3} & \ldots & 1
\end{array}\right]
\left[\begin{array}{c} x_{1t} \\ x_{2t} \\ \vdots \\ x_{nt} \end{array}\right]
=\left[\begin{array}{c} b_{10} \\ b_{20} \\ \vdots \\ b_{n0} \end{array}\right]
+\left[\begin{array}{ccccc}
\gamma_{11} & \gamma_{12} & \gamma_{13} & \ldots & \gamma_{1n} \\
\gamma_{21} & \gamma_{22} & \gamma_{23} & \ldots & \gamma_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
\gamma_{n1} & \gamma_{n2} & \gamma_{n3} & \ldots & \gamma_{nn}
\end{array}\right]
\left[\begin{array}{c} x_{1t-1} \\ x_{2t-1} \\ \vdots \\ x_{nt-1} \end{array}\right]
+\left[\begin{array}{c} \epsilon_{1t} \\ \epsilon_{2t} \\ \vdots \\ \epsilon_{nt} \end{array}\right] \tag{5}
$$
or in compact form

$$
B x_t=\Gamma_0+\Gamma_1 x_{t-1}+\epsilon_t \tag{6}
$$

pre-multiplying (6) by $B^{-1}$ gives the reduced form

$$
x_t=B^{-1} \Gamma_0+B^{-1} \Gamma_1 x_{t-1}+B^{-1} \epsilon_t \tag{7}
$$

in practice, we use OLS to estimate each equation in the reduced-form system (7) and obtain the variance-covariance matrix $\Sigma$ of its residuals

$$
\Sigma=\left[\begin{array}{cccc}
\sigma_1^2 & \sigma_{12} & \ldots & \sigma_{1n} \\
\sigma_{21} & \sigma_2^2 & \ldots & \sigma_{2n} \\
\vdots & \vdots & & \vdots \\
\sigma_{n1} & \sigma_{n2} & \ldots & \sigma_n^2
\end{array}\right]
$$

since $\Sigma$ is symmetric, it provides only $(n^2+n)/2$ distinct equations for identification.

however, the identification of $B$ needs $n^2$ conditions.

Thus, we need to impose $n^2-(n^2+n)/2=(n^2-n)/2$ additional restrictions on the matrix $B$ to exactly identify the structural model from an estimation of the reduced-form VAR. In the bivariate case ($n=2$), for example, a single restriction is needed.

The way restrictions are added can differ across economic contexts, but two procedures are prevalent in practice.

Sims-Bernanke procedure

  • this procedure was proposed by Sims (1986) and Bernanke (1986).

  • in this procedure, all elements above the principal diagonal of $B$ are forced to be zero, which amounts to a Cholesky decomposition

    $$
    \begin{gathered}
    b_{12}=b_{13}=b_{14}=\cdots=b_{1n}=0 \\
    b_{23}=b_{24}=\cdots=b_{2n}=0 \\
    b_{34}=\cdots=b_{3n}=0 \\
    \cdots \\
    b_{n-1,n}=0
    \end{gathered}
    $$

  • by doing this, $(n^2-n)/2$ restrictions are imposed on the matrix $B$, which delivers exact identification of $B$; a minimal sketch follows
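
A minimal sketch of the Cholesky step, assuming an illustrative covariance matrix `Sigma` (e.g., the `Sigma_hat` obtained from OLS above). `np.linalg.cholesky` returns the lower-triangular factor:

```python
import numpy as np

# Assumed reduced-form error covariance (e.g., Sigma_hat from the OLS sketch)
Sigma = np.array([[1.00, 0.30],
                  [0.30, 0.64]])

# Cholesky identification: a lower-triangular P with P P' = Sigma.
# P plays the role of the contemporaneous impact matrix, so the first
# variable responds only to its own shock on impact.
P = np.linalg.cholesky(Sigma)
print(P)
print(np.allclose(P @ P.T, Sigma))             # True: exact factorization
```

Because the factor is lower triangular, the variable ordered first responds only to its own shock on impact, so the results depend on the chosen variable ordering.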

Blanchard-Quah procedure

  • this procedure was proposed by Blanchard and Quah (1989), who reconsidered the Beveridge and Nelson (1981) decomposition of real GNP into its temporary and permanent components
    • an especially useful feature of the technique is that it provides a unique decomposition of an economic time series into its temporary and permanent components
  • differences from the Sims-Bernanke procedure
    • it requires at least one variable to be nonstationary, since $I(0)$ series do not have a permanent component
    • it does not directly associate the $\{\epsilon_{1t}\}$ and $\{\epsilon_{2t}\}$ shocks with the $\{y_t\}$ and $\{z_t\}$ variables
  • the key to decomposing the $\{y_t\}$ sequence (or any other nonstationary sequence in the VAR system) into its permanent and stationary components is to assume that at least one of the shocks has only a temporary effect on the $\{y_t\}$ sequence; this assumption allows identification of the structural VAR
Example
  • to illustrate the idea, consider a bivariate VAR system in which $\{y_t\}$ is an $I(1)$ series, and write it in VMA form as follows.

    $$
    \begin{aligned}
    \Delta y_t &=\sum_{k=0}^{\infty} c_{11}(k) \epsilon_{1t-k}+\sum_{k=0}^{\infty} c_{12}(k) \epsilon_{2t-k} \\
    z_t &=\sum_{k=0}^{\infty} c_{21}(k) \epsilon_{1t-k}+\sum_{k=0}^{\infty} c_{22}(k) \epsilon_{2t-k}
    \end{aligned}
    $$

  • the key assumption of the Blanchard-Quah procedure is that the cumulated effect of the shock $\{\epsilon_{1t}\}$ on the $\Delta y_t$ sequence must equal zero, for any possible realization of the $\{\epsilon_{1t}\}$ sequence

    $$\sum_{k=0}^{\infty} c_{11}(k)\, \epsilon_{1t-k}=0$$

  • this restriction, combined with the three distinct variance-covariance parameters $var(e_1)$, $var(e_2)$, and $cov(e_1,e_2)$ estimated from the reduced-form VAR, exactly identifies the $2\times2$ matrix $B$ in this bivariate VAR system; a sketch of the long-run identification appears below
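
A sketch of the long-run (Blanchard-Quah) identification for a bivariate VAR(1), with assumed values for `A1` and `Sigma`. The construction picks the impact matrix $S$ with $SS^{\prime}=\Sigma$ whose long-run multiplier $\Psi(1)S$ is lower triangular:

```python
import numpy as np

# Assumed reduced-form estimates for a bivariate VAR(1) in (dy_t, z_t)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
Sigma = np.array([[1.00, 0.30],
                  [0.30, 0.64]])

# Long-run cumulative multiplier of the reduced-form errors: Psi(1) = (I - A1)^{-1}
Psi1 = np.linalg.inv(np.eye(2) - A1)

# Pick an impact matrix S with S S' = Sigma such that Psi(1) @ S is lower
# triangular: shock 2 then has no long-run (cumulated) effect on variable 1
LR = Psi1 @ Sigma @ Psi1.T                     # long-run covariance matrix
S = np.linalg.inv(Psi1) @ np.linalg.cholesky(LR)

print(np.allclose(S @ S.T, Sigma))             # True: consistent with Sigma
print(Psi1 @ S)                                # lower triangular: upper-right ~ 0
```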

Impulse response

Logic of impulse response

  • the idea of impulse response analysis is to trace the effects of a one-unit shock in $\epsilon_{yt}$ and $\epsilon_{zt}$ on the time paths of the $\{y_t\}$ and $\{z_t\}$ sequences
  • to achieve this, it is more convenient to represent the $\{y_t\}$ and $\{z_t\}$ sequences in terms of the $\{\epsilon_{yt}\}$ and $\{\epsilon_{zt}\}$ sequences, i.e., to transform the VAR into a VMA model
  • to keep the intuition clear, the derivation of the impulse response function is again based on the bivariate VAR system

From VAR to VMA

  • start from the reduced-form VAR represented by equation (3)

    $$x_t=A_0+A_1 x_{t-1}+e_t$$

  • iterate the above equation to obtain

    $$x_t =A_0+A_1\left(A_0+A_1 x_{t-2}+e_{t-1}\right)+e_t =\left(I+A_1\right) A_0+A_1^2 x_{t-2}+A_1 e_{t-1}+e_t$$

  • after $n$ iterations

    $$x_t=\left(I+A_1+\cdots+A_1^n\right) A_0+\sum_{i=0}^n A_1^i e_{t-i}+A_1^{n+1} x_{t-n-1}$$

  • for $x_t$ to converge, the term $A_1^{n+1} x_{t-n-1}$ must vanish as $n$ approaches infinity

  • assuming the stability condition is met, we can write the VAR model in VMA form

    $$x_t=\mu+\sum_{i=0}^{\infty} A_1^i e_{t-i} \tag{8}$$

    where $\mu=\left[\begin{array}{ll}\bar{y} & \bar{z}\end{array}\right]^{\prime}$

    and

    $$
    \begin{aligned}
    &\bar{y}=\left[a_{10}\left(1-a_{22}\right)+a_{12} a_{20}\right] / \Delta, \quad \bar{z}=\left[a_{20}\left(1-a_{11}\right)+a_{21} a_{10}\right] / \Delta \\
    &\Delta=\left(1-a_{11}\right)\left(1-a_{22}\right)-a_{12} a_{21}
    \end{aligned}
    $$

    note that $\mu$ can also be computed from the geometric-series limit, which holds under the stability condition as $n \to \infty$:

    $$I+A_1+A_1^2+\cdots = \left[I-A_1\right]^{-1}$$

    a numerical sketch of the VAR-to-VMA step follows.
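
A short numpy sketch of this step under assumed (stable) reduced-form parameters: it computes $\mu=(I-A_1)^{-1}A_0$ and the VMA matrices $A_1^i$ up to a truncation horizon:

```python
import numpy as np

# Assumed reduced-form parameters (A1 stable: eigenvalues inside the unit circle)
A0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])

# Unconditional mean mu = (I - A1)^{-1} A0, the limit of (I + A1 + ... + A1^n) A0
mu = np.linalg.inv(np.eye(2) - A1) @ A0

# VMA coefficient matrices A1^i in equation (8), truncated at horizon H
H = 10
vma = [np.linalg.matrix_power(A1, i) for i in range(H)]
print(mu)
print(vma[1])                                  # effect of e_{t-1} on x_t
```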

From VMA to impulse response function

  • start from equation (8), the VMA representation of the VAR model

    $$x_t=\mu+\sum_{i=0}^{\infty} A_1^i e_{t-i}$$

  • write it out in expanded form

    $$
    \left[\begin{array}{c} y_t \\ z_t \end{array}\right]
    =\left[\begin{array}{c} \bar{y} \\ \bar{z} \end{array}\right]
    +\sum_{i=0}^{\infty}\left[\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right]^i
    \left[\begin{array}{c} e_{1t-i} \\ e_{2t-i} \end{array}\right] \tag{9}
    $$

  • recall from equation (4) the relationship between the reduced-form error terms and the structural innovations

    $$
    \left[\begin{array}{c} e_{1t} \\ e_{2t} \end{array}\right]
    =B^{-1}\left[\begin{array}{c} \epsilon_{yt} \\ \epsilon_{zt} \end{array}\right]
    =\frac{1}{1-b_{12} b_{21}}\left[\begin{array}{cc} 1 & -b_{12} \\ -b_{21} & 1 \end{array}\right]
    \left[\begin{array}{c} \epsilon_{yt} \\ \epsilon_{zt} \end{array}\right] \tag{10}
    $$

  • plugging (10) into (9) rewrites the VMA representation of the VAR model in terms of the $\{\epsilon_{yt}\}$ and $\{\epsilon_{zt}\}$ sequences; the resulting coefficients form the impulse response functions

  • use $\phi_{jk}(i)$ to denote the impulse response coefficients

    $$
    \left[\begin{array}{c} y_t \\ z_t \end{array}\right]
    =\left[\begin{array}{c} \bar{y} \\ \bar{z} \end{array}\right]
    +\sum_{i=0}^{\infty}\left[\begin{array}{cc} \phi_{11}(i) & \phi_{12}(i) \\ \phi_{21}(i) & \phi_{22}(i) \end{array}\right]
    \left[\begin{array}{c} \epsilon_{yt-i} \\ \epsilon_{zt-i} \end{array}\right]
    $$

  • in a compact format

    $$x_t=\mu+\sum_{i=0}^{\infty} \phi_i \epsilon_{t-i} \tag{11}$$

  • the accumulated effects of unit impulses in $\epsilon_{yt}$ and $\epsilon_{zt}$ can be obtained by appropriate summation of the impulse response coefficients; a computational sketch follows this list

    • for example, after $n$ periods, the cumulated effect of $\epsilon_{zt}$ on the $\{y_t\}$ sequence is

      $$\sum_{i=0}^n \phi_{12}(i)$$
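
A sketch of the impulse response computation under assumed parameters: with $e_t=B^{-1}\epsilon_t$, the structural responses in (11) are $\phi_i=A_1^i B^{-1}$. Here `Binv` comes from a Cholesky factorization for illustration:

```python
import numpy as np

# Assumed reduced-form A1 and an impact matrix Binv (here from a Cholesky
# identification; a Blanchard-Quah S could be substituted)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
Binv = np.linalg.cholesky(np.array([[1.00, 0.30],
                                    [0.30, 0.64]]))

# Structural impulse responses phi_i = A1^i @ Binv, as in equation (11)
H = 20
phi = np.array([np.linalg.matrix_power(A1, i) @ Binv for i in range(H)])

# phi[i, j, k]: response of variable j to shock k, i periods after impact
cum_12 = phi[:, 0, 1].sum()                    # cumulated effect of shock 2 on y
print(phi[0], cum_12, sep="\n")
```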

Confidence intervals of impulse response coefficients

  • draw $T$ (the sample size) random disturbances to represent the $\{\epsilon\}$ sequence, combine them with the point estimates of the reduced-form VAR to construct a simulated $\{\hat{x}\}$ series, and then re-estimate the impulse response function on that series
  • repeat this procedure 1,000 times or more and use the resulting distribution of impulse response coefficients to form bootstrap confidence intervals; a sketch of a residual bootstrap follows
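
A compact sketch of this bootstrap, assuming a bivariate VAR(1) and a Cholesky identification; the helper names `ols_var1`, `irf`, and `bootstrap_irf_ci` are my own, and percentile bands are one of several common interval choices:

```python
import numpy as np

def ols_var1(x):
    """OLS of a bivariate VAR(1): returns intercept, lag matrix, residuals."""
    T = len(x)
    X = np.column_stack([np.ones(T - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return coef[0], coef[1:].T, x[1:] - X @ coef

def irf(A1, Binv, H):
    """Structural IRFs phi_i = A1^i @ Binv for horizons 0..H-1."""
    return np.array([np.linalg.matrix_power(A1, i) @ Binv for i in range(H)])

def bootstrap_irf_ci(x, H=20, n_boot=1000, seed=0):
    """Residual-bootstrap percentile CIs for Cholesky-orthogonalized IRFs."""
    rng = np.random.default_rng(seed)
    A0, A1, resid = ols_var1(x)
    draws = np.empty((n_boot, H, 2, 2))
    for b in range(n_boot):
        # Resample residuals with replacement and rebuild an artificial sample
        e = resid[rng.integers(0, len(resid), size=len(resid))]
        xb = np.zeros_like(x)
        xb[0] = x[0]
        for t in range(1, len(x)):
            xb[t] = A0 + A1 @ xb[t - 1] + e[t - 1]
        A0b, A1b, rb = ols_var1(xb)            # re-estimate on the fake sample
        Sigmab = rb.T @ rb / len(rb)
        draws[b] = irf(A1b, np.linalg.cholesky(Sigmab), H)
    return np.percentile(draws, [2.5, 97.5], axis=0)   # 95% percentile bands
```

Calling `bootstrap_irf_ci(x)` on a $T\times2$ data array returns the 2.5% and 97.5% bands for every response at every horizon.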

Variance decomposition

  • start from equation (11), the impulse response representation

    $$x_t=\mu+\sum_{i=0}^{\infty} \phi_i \epsilon_{t-i}$$

  • suppose we now forecast $n$ periods ahead of $t$

    $$x_{t+n}=\mu+\sum_{i=0}^{\infty} \phi_i \epsilon_{t+n-i}$$

  • since both $\{\epsilon_{yt}\}$ and $\{\epsilon_{zt}\}$ are white-noise disturbances, the $n$-period forecast error is

    $$x_{t+n}-E_t x_{t+n}=\sum_{i=0}^{n-1} \phi_i \epsilon_{t+n-i} \tag{12}$$

  • taking the $\{y_t\}$ sequence as an example, the $n$-period forecast error is

    $$
    \begin{gathered}
    y_{t+n}-E_t y_{t+n}=\phi_{11}(0) \epsilon_{yt+n}+\phi_{11}(1) \epsilon_{yt+n-1}+\cdots+\phi_{11}(n-1) \epsilon_{yt+1} \\
    +\phi_{12}(0) \epsilon_{zt+n}+\phi_{12}(1) \epsilon_{zt+n-1}+\cdots+\phi_{12}(n-1) \epsilon_{zt+1}
    \end{gathered}
    $$

  • denote the $n$-step-ahead forecast error variance of $y_{t+n}$ as $\sigma_y(n)^2$

    $$\sigma_y(n)^2=\sigma_y^2\left[\phi_{11}(0)^2+\phi_{11}(1)^2+\cdots+\phi_{11}(n-1)^2\right]+\sigma_z^2\left[\phi_{12}(0)^2+\phi_{12}(1)^2+\cdots+\phi_{12}(n-1)^2\right]$$

  • thus, the $n$-step-ahead forecast error variance can be decomposed into the proportions due to innovations in $\{\epsilon_{yt}\}$ and $\{\epsilon_{zt}\}$, respectively; a numerical sketch follows the formulas

    $$
    \begin{gathered}
    \frac{\sigma_y^2\left[\phi_{11}(0)^2+\phi_{11}(1)^2+\cdots+\phi_{11}(n-1)^2\right]}{\sigma_y(n)^2} \\
    \frac{\sigma_z^2\left[\phi_{12}(0)^2+\phi_{12}(1)^2+\cdots+\phi_{12}(n-1)^2\right]}{\sigma_y(n)^2}
    \end{gathered}
    $$
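
A sketch of the decomposition under assumed parameters. With a Cholesky (unit-variance) normalization, $\sigma_y^2=\sigma_z^2=1$, so the shares are just cumulated squared IRF coefficients divided by their row totals:

```python
import numpy as np

# Assumed structural IRFs phi[i, j, k] (see the IRF sketch above); with a
# Cholesky identification the shocks have unit variance
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
Binv = np.linalg.cholesky(np.array([[1.00, 0.30],
                                    [0.30, 0.64]]))
H = 20
phi = np.array([np.linalg.matrix_power(A1, i) @ Binv for i in range(H)])

# n-step forecast error variance of variable j: sum of squared phi over i < n, k
fevd = np.cumsum(phi ** 2, axis=0)               # cumulated squared responses
shares = fevd / fevd.sum(axis=2, keepdims=True)  # shock shares; rows sum to 1

print(shares[0])   # on impact, variable 1 is 100% own-shock under Cholesky
print(shares[9])   # 10-step-ahead decomposition
```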

Reference

  1. Enders, Walter. Applied Econometric Time Series. 2nd ed. New York: Wiley, 2004.
