PcGive Volume II: Modelling Dynamic Systems

These reference chapters have been taken from Volume II, and use the same chapter and section numbering as the printed version.

Table of contents
  Part 6 The Statistical Output of Multiple Equation Models
15 Unrestricted System
  15.1 Introduction
  15.2 System formulation
  15.3 System estimation
  15.4 System output
  15.4.1 Equation output
  15.4.2 Summary statistics
  15.4.3 F-tests
  15.4.4 Correlations
  15.4.5 1-step (ex post) forecast analysis
  15.4.6 *Information criteria
  15.4.7 *Correlation matrix of regressors
  15.4.8 *Covariance matrix of estimated parameters
  15.4.9 *Static (1-step) forecasts
  15.5 Graphic analysis
  15.6 Recursive graphics
  15.7 Dynamic forecasting
  15.8 Dynamic simulation and impulse responses
  15.9 Dynamic analysis
  15.9.1 I(1) cointegration analysis
  15.9.2 I(2) Cointegration analysis
  15.10 System testing
  15.10.1 Introduction
  15.10.2 Single equation diagnostics
  15.10.3 Vector tests
  15.10.4 Testing for general restrictions
  15.11 Progress
16 Cointegrated VAR
  16.0.1 Cointegration restrictions
  16.1 Cointegrated VAR output
  16.2 Graphic analysis
  16.3 Recursive graphics
17 Simultaneous Equations Model
  17.1 Model estimation
  17.2 Model output
  17.3 Graphic analysis
  17.4 Recursive graphics
  17.5 Dynamic analysis, forecasting and simulation
  17.6 Model testing

Part 6 The Statistical Output of Multiple Equation Models

Chapter 15 Unrestricted System

15.1 Introduction

This part explains the statistics computed and reported by PcGive for dynamic systems (this chapter), cointegration tests (§15.9.1), cointegrated VAR analysis (Chapter 16), and model analysis (Chapter 17).

A brief summary of the underlying mathematics is given in this chapter. The order is similar to that in the computer program. We first briefly describe system formulation in §15.2 to establish notation, then system estimation in §15.3, followed by estimation output §15.4 and graphic evaluation in §15.5, and dynamic analysis and I(1) and I(2) cointegration tests in §15.9. Section 15.10 considers testing, both at the single equation level as well as at the system level. Sections 15.9.1--16.0.1 discuss estimating the cointegrating space, related graphs and tests of restrictions on the space. Finally §15.11 considers the progress made during system and model development.

15.2 System formulation

In PcGive, a linear system, often called the unrestricted reduced form (URF), takes the form:

y_t = ∑_{i=1}^{m} π_i y_{t-i} + ∑_{j=0}^{r} π_{m+j+1} z_{t-j} + v_t   for t=1,...,T,   (eq:15.1)

where y_t, z_t are respectively n×1 and q×1 vectors of observations at time t on the endogenous and non-modelled variables. The {π_i} are unrestricted, except perhaps for columns of zeros, which would exclude certain y_{t-i} or z_{t-j} from the system. Hence each equation in the system has the same variables on the right-hand side. The orders m and r of the lag polynomial matrices for y and z should be specified so as to ensure that {v_t} is an innovation process against the available information when the {π_i} matrices are constant over t. Given a data set {x_t}, y_t is defined as the vector of endogenous variables and (z_t ... z_{t-r}) must be set as non-modelled (so they need to be at least weakly exogenous for the {π_i}). A system in PcGive is formulated by:

  1. which variables yt,zt are involved;

  2. the orders m, r of the lag polynomials;

  3. classification of the ys into endogenous variables and identity (endogenous) variables;

  4. any non-modelled variable may be classified as restricted or as unrestricted (Constant, Seasonals and Trend are labelled as such by default). The latter variables are separately estimated in FIML and CFIML to reduce the dimensionality of the parameter space. Their coefficients are estimated from a prior regression.

A vector autoregression (VAR) arises when there are no z variables in the statistical system (eq:15.1) (q=0, but there could be a constant, seasonals or trend) and all y have the same lag length (no columns of π are zero).

Integrated systems can be transformed to equilibrium correction form, where all endogenous variables and their lags are transformed to differences, apart from the first lag:

Δy_t = ∑_{i=1}^{m-1} δ_i Δy_{t-i} + P_0 y_{t-1} + ∑_{j=0}^{r} π_{m+j+1} z_{t-j} + v_t   for t=1,...,T.   (eq:15.2)

Returning to the notation of (eq:15.1), a more compact way of writing the system is:

y_t = Π w_t + v_t,   (eq:15.3)

where w contains z, lags of z, and lags of y: w_t' = (y_{t-1}', ..., y_{t-m}', z_t', ..., z_{t-r}'). This can be further condensed by writing Y' = (y_1 y_2 ... y_T), and W', V' correspondingly:

Y = WΠ' + V,   (eq:15.4)

in which Y' is (n×T), W' is (k×T) and Π is (n×k).

15.3 System estimation

Since the {πi} are unrestricted (except perhaps for excluding elements from wt) the system (eq:15.1) can be estimated by multivariate least squares, either directly (OLS) or recursively (often denoted RLS). These estimators are straightforward multivariate extensions of the single equation methods. Analogously, estimation of (eq:15.1) requires vt~IDn(0,Ω), where Ω is constant over time. However, Ω may be singular owing to identities linking elements of xt, and these are handled by estimating only the subset of equations corresponding to stochastic endogenous variables. If vt~INn[0,Ω], OLS coincides with MLE; for notation, we note that the estimated coefficients are:

Π̂' = (W'W)^{-1} W'Y,

with residuals:

V̂'=Y'-Π̂ W'

and estimated covariance matrix:

V̂[vec Π̂'] = Ω̃ ⊗ (W'W)^{-1},


Ω̃ = V̂'V̂ / (T-k).

In the likelihood-based statistics, we shall scale by T:

Ω̂ = V̂'V̂ / T.
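As an illustrative sketch (not PcGive's own code), the estimator, residual and covariance formulas above can be reproduced in a few lines of Python/numpy; the simulated data, dimensions and variable names below are all hypothetical:

```python
# Sketch of multivariate least-squares estimation of the URF on simulated data.
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 100, 2, 3                       # observations, equations, regressors

W = rng.standard_normal((T, k))           # regressor matrix W (T x k)
Pi_true = rng.standard_normal((k, n))     # hypothetical true coefficients
Y = W @ Pi_true + 0.1 * rng.standard_normal((T, n))

# Pi-hat' = (W'W)^{-1} W'Y, via a linear solve for numerical stability
Pi_hat = np.linalg.solve(W.T @ W, W.T @ Y)     # (k x n), i.e. Pi-hat'
V_hat = Y - W @ Pi_hat                          # residuals V-hat

Omega_tilde = V_hat.T @ V_hat / (T - k)         # degrees-of-freedom scaling
Omega_hat = V_hat.T @ V_hat / T                 # likelihood (ML) scaling

# coefficient covariance: Omega-tilde kron (W'W)^{-1}
cov = np.kron(Omega_tilde, np.linalg.inv(W.T @ W))
```

With a constant Π over the sample, `Pi_hat` recovers `Pi_true` up to estimation noise, and the two residual covariance estimates differ only in the T versus T-k scaling.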

15.4 System output

A listing of the system output now follows. Items marked with a * are only printed on request, either automatically through settings in the Options dialog, or by using Further Output.

15.4.1 Equation output

  1. URF coefficients and standard errors

    The coefficients Π̂ and their standard errors √(V̂[vec Π̂']_ii). Any variables marked as unrestricted appear here too.

  2. t-value and t-probability

    These statistics are conventionally calculated to determine whether individual coefficients are significantly different from zero:

    t-value = π̂_ij / SE[π̂_ij],

    where the null hypothesis H0 is π_ij = 0. The null hypothesis is rejected if the probability of getting a value at least as large is less than 5% (or any other chosen significance level). This probability is given as:

    t-prob = 1 - Prob(|τ| ≤ |t-value|),

    in which τ has a Student t-distribution with T-k degrees of freedom.

    When H0 is true (and the model is otherwise correctly specified), a Student t-distribution is used since the sample size is often small, and we only have an estimate of the parameter's standard error: however, as the sample size increases, τ tends to a standard normal distribution under H0. Large values of t reject H0; but, in many situations, H0 may be of little interest to test. Also, selecting variables in a model according to their t-values implies that the usual (Neyman-Pearson) justification for testing is not valid (see Judge, Griffiths, Hill, Lütkepohl, and Lee, 1985, for example).

  3. Equation standard error (σ̃ ) and residual sum of squares (RSS)

    The square root of the residual variance for each equation:

    (Ω̃ ii)½ for i=1,...n.

    The RSS is (T-k)Ω̃_ii, that is, the diagonal elements of V̂'V̂.
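The t-value and t-prob computations described above can be sketched as follows; the coefficient, standard error and sample sizes are hypothetical, and scipy's Student t tail is used in place of PcGive's internal routine:

```python
# Sketch of the t-value / t-prob computation for one coefficient.
from scipy import stats

pi_ij, se_ij = 0.85, 0.30            # hypothetical coefficient and its SE
T, k = 100, 8                        # hypothetical sample size and regressors

t_value = pi_ij / se_ij
# t-prob = 1 - Prob(|tau| <= |t-value|), tau ~ Student t with T-k df
t_prob = 2 * stats.t.sf(abs(t_value), df=T - k)
```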

15.4.2 Summary statistics

The log-likelihood value is (including the constant Kc):

l̂ = -T/2 log|Ω̂| - Tn/2 (1 + log 2π).   (eq:15.13)

Then l̂ constitutes the highest attainable likelihood value in the class (eq:15.4) (unless either the set of variables or the lag structure is altered), and hence is the statistical baseline against which simplifications can be tested. In textbook econometrics, (eq:15.4) is called the unrestricted reduced form (URF) and is usually derived from a structural representation. Here, the process is reversed: the statistical system (eq:15.4) is first specified and tested for being a congruent representation; only then is a structural (parsimonious) interpretation sought. If, for example, (eq:15.4) is not congruent, then (eq:15.13) is not a valid baseline, and subsequent tests will not have appropriate distributions. In particular, any just-identified structural representation has the same likelihood value as (eq:15.4), and hence will be invalid if (eq:15.4) is invalid: the `validity' of imposing further restrictions via a model is then hardly of interest.

Define Ỹ as Y after removing the effects of the unrestricted variables, and let:

Ω̂_0 = Ỹ'Ỹ / T.

PcGive reports:

  1. the log-likelihood (eq:15.13);

  2. - T/2 log | Ω̂ | ;

  3. | Ω̂ | ;

  4. T, the number of observations used in the estimation, and nk, the number of parameters in all equations;

  5. log |Ỹ'Ỹ/T| = log |Ω̂_0|.

Various measures of the goodness of fit of a system can be calculated. The two reported by PcGive are:

  1. R2(LR)

    Reports R²r = 1 - |Ω̂|/|Ω̂_0|, which is an R² based on the likelihood-ratio principle. For a single-equation system this statistic is identical to the conventional R².

  2. R2(LM)

    Reports R²m = 1 - (1/n) tr(Ω̂ Ω̂_0^{-1}), which derives from the Lagrange Multiplier principle.

Note that these are relative to the unrestricted variables. Both measures coincide with the traditional R² in a single equation, provided that the constant is the only unrestricted variable.
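A minimal numerical sketch of the two goodness-of-fit measures, assuming Ω̂ and Ω̂_0 have already been computed; the matrices below are hypothetical values, not output from any real estimation:

```python
# Sketch of the system goodness-of-fit measures R^2(LR) and R^2(LM).
import numpy as np

n = 2
Omega_hat = np.array([[1.0, 0.2], [0.2, 0.8]])     # hypothetical Omega-hat
Omega0_hat = np.array([[2.5, 0.4], [0.4, 2.0]])    # hypothetical Omega-hat_0

# likelihood-ratio based measure: 1 - |Omega-hat| / |Omega-hat_0|
R2_LR = 1 - np.linalg.det(Omega_hat) / np.linalg.det(Omega0_hat)
# Lagrange-multiplier based measure: 1 - (1/n) tr(Omega-hat Omega-hat_0^{-1})
R2_LM = 1 - np.trace(Omega_hat @ np.linalg.inv(Omega0_hat)) / n
```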

15.4.3 F-tests

Significance at 5% is marked with a *, at 1% with **. Reported are:

  1. F-tests against unrestricted regressors

    This uses Rao's F-approximation to test the significance of Rr2, which amounts to testing the null hypothesis that all coefficients are zero, except those on the unrestricted variables. In a single-equation system, with only the constant unrestricted, this is identical to the reported F-statistic.

  2. F-tests on retained regressors

    F-tests are shown for the significance of each column of Π̂ together with their probability values (inside square brackets) under the null hypothesis that the corresponding column of coefficients is zero. So these test whether the variable at hand is significant in the system. The statistics are F( n,T-k+1-n) .

Further F-tests of general to specific system modelling are available through the progress report: see §15.11.

15.4.4 Correlations

15.4.5 1-step (ex post) forecast analysis

This is only reported when observations are withheld for static (1-step) forecasting at the time the estimation sample is selected.

The 1-step forecast errors (from T+1 to T+H) are defined as:

e_{T+i} = y_{T+i} - Π̂ w_{T+i} = (Π - Π̂) w_{T+i} + v_{T+i}

with estimated variance

Ṽ[e_{T+i}] = Ω̃ (1 + w_{T+i}'(W'W)^{-1} w_{T+i}) = Ψ̃_{T+i}.

The forecast error variance matrix for a single step-ahead forecast is made up of a term for coefficient uncertainty and a term for innovation errors. Three types of parameter constancy tests are reported, in each case as a χ2(nH) for n equations and H forecasts and an F( nH,T-k) statistic:

  1. using Ω.

    This is an index of numerical parameter constancy, ignoring both parameter uncertainty and intercorrelation between forecast errors at different time periods.

  2. using V[e].

    This test is similar to (a), but takes parameter uncertainty into account.

  3. using V[E].

    Here, V[E] is the full variance matrix of all forecast errors E, which takes both parameter uncertainty and inter-correlations between forecast errors into account.
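The constancy statistic `using V[e]' can be sketched as below; the forecast errors, the leverage term and all dimensions are simulated stand-ins (not PcGive output), chosen only to show the accumulation of the H quadratic forms into a χ²(nH) statistic:

```python
# Sketch of the 1-step forecast-error constancy test "using V[e]".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, H, T, k = 2, 4, 100, 8
Omega = np.eye(n)                           # hypothetical error covariance

chi2_stat = 0.0
for i in range(H):
    e = rng.standard_normal(n)              # stand-in 1-step forecast error
    hw = 0.05                               # stand-in for w'(W'W)^{-1}w
    Psi = Omega * (1 + hw)                  # V[e_{T+i}] with parameter term
    chi2_stat += e @ np.linalg.solve(Psi, e)

p_value = stats.chi2.sf(chi2_stat, df=n * H)   # chi^2(nH) tail probability
```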

15.4.6 *Information criteria

The four statistics reported are the Schwarz criterion (SC), the Hannan--Quinn criterion (HQ), the Final Prediction Error (FPE) and the Akaike criterion (AIC). These can be defined as:

SC = log|Ω̂| + k log(T)/T,
HQ = log|Ω̂| + 2k log(log(T))/T,
AIC = log|Ω̂| + 2k/T,
FPE = (T+k)|Ω̂| / (T-k).   (eq:15.18)

Or, in terms of the log-likelihood:

SC = (-2l̂ + k log T)/T,
HQ = (-2l̂ + 2k log log T)/T,
AIC = (-2l̂ + 2k)/T,
FPE = (T+k)/(T-k) exp(-2l̂/T).   (eq:15.19)

When using Further Output, PcGive will first report (eq:15.18) followed by (eq:15.19). In the latter, the constant is included in the likelihood, resulting in different outcomes. In all other cases, PcGive only reports the values based on (eq:15.19). For a discussion of the use of these and related scalar measures to choose between alternative models in a class, see Judge, Griffiths, Hill, Lütkepohl, and Lee (1985) or Lütkepohl (1991).
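The log-likelihood form of the criteria can be sketched as follows, with hypothetical values for l̂, T and k:

```python
# Sketch of the information criteria in log-likelihood form.
import numpy as np

loglik, T, k = -250.0, 100, 8        # hypothetical log-likelihood, T and k

SC  = (-2 * loglik + k * np.log(T)) / T
HQ  = (-2 * loglik + 2 * k * np.log(np.log(T))) / T
AIC = (-2 * loglik + 2 * k) / T
FPE = (T + k) / (T - k) * np.exp(-2 * loglik / T)
```

For T large enough that log log T > 1 and log T > 2, the parameter penalties order the criteria as AIC < HQ < SC for a given fit.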

15.4.7 *Correlation matrix of regressors

This reports the sample means and sample standard deviations of the selected variables, followed by the correlation matrix.

15.4.8 *Covariance matrix of estimated parameters

The k×k variance-covariance matrix of the estimated parameters. Along the diagonal, we have the variance of each estimated coefficient, and off the diagonal, the covariances.

15.4.9 *Static (1-step) forecasts

Reports the individual forecasts with forecast error standard errors. If the actual values are available, the forecast error and t-value are also printed.

Additional statistics are reported if more than two forecast errors are available:

  1. Mean of the forecast errors;
  2. Standard deviation of the forecast errors;
  3. Forecast tests, single χ²(·)

    These are the individual test statistics underlying ξ1 and ξ2 above, for i=1,...,H:

    using Ω:    e_{T+i}' Ω̃^{-1} e_{T+i},
    using V[e]: e_{T+i}' Ψ̃_{T+i}^{-1} e_{T+i},

    this time distributed as χ²(n). They can also be viewed graphically.

  4. Root Mean Square Error:

    RMSE = [1/H ∑_{t=1}^{H} (y_t - f_t)²]^{1/2},

    where the forecast horizon is H, yt the actual values, and ft the forecasts.

  5. Mean Absolute Percentage Error:

    MAPE = 100/H ∑_{t=1}^{H} |(y_t - f_t)/y_t|.

RMSE and MAPE are measures of forecast accuracy, see, e.g. Makridakis, Wheelwright, and Hyndman (1998, Ch. 2). Note that the MAPE can be infinite if any y_t=0, and differs when the model is reformulated in differences. For more information see Clements and Hendry (1998a).
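A short sketch of the RMSE and MAPE formulas on hypothetical actual values and forecasts:

```python
# Sketch of the RMSE and MAPE forecast-accuracy measures.
import numpy as np

y = np.array([10.0, 12.0, 11.0, 13.0])   # hypothetical actual values
f = np.array([9.5, 12.5, 10.0, 13.5])    # hypothetical forecasts
H = len(y)

rmse = np.sqrt(np.mean((y - f) ** 2))                # root mean square error
mape = 100 / H * np.sum(np.abs((y - f) / y))         # mean abs. percentage error
```

Note that `mape` would divide by zero if any actual value were 0, matching the caveat above.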

15.5 Graphic analysis

Graphic analysis focuses on graphical inspection of individual equations. Let yt, ŷt denote respectively the actual (that is, observed) values and the fitted values of the selected equation, with residuals v̂t = yt-ŷt, t=1,...,T. If H observations are retained for forecasting, then ŷT+1,...,ŷT+H are the 1-step forecasts.

Many different types of graph are available:

  1. Actual and fitted values

    This is a graph showing the fitted ( ŷt) and actual values ( yt) of the dependent variable over time, including the forecast period.

  2. Cross-plot of actual and fitted

    ŷ_t against y_t, including the forecast period.

  3. Residuals (scaled)

    ( v̂t/σ̃ ) , where σ̃ 2 is the estimated equation error variance, plotted over t=1,...,T+H.

  4. Forecasts and outcomes

    The 1-step forecasts can be plotted in a graph over time: y_t and ŷ_t, t=T+1,...,T+H, are shown with error bars of ±2SE(e_t), centred on ŷ_t (that is, an approximate 95% confidence interval for the 1-step forecast). Corresponding to (eq:15.16) the forecast errors are e_t = y_t - ŷ_t and SE[e_t] is derived from (eq:15.17). The error bars can be replaced by bands, set in Options, and the number of pre-forecast observations can be selected.

  5. Residual density and histogram

    Plots the histogram of the standardized residuals, the estimated density f̂_v(·) and a normal distribution with the same mean and variance (more details are in the OxMetrics book).

  6. Residual autocorrelations (ACF)

    This plots the series {r_j} where r_j is the correlation coefficient between v̂_t and v̂_{t-j}. The length of the correlogram is specified by the user, leading to a figure that shows (r_1, r_2, ..., r_s) plotted against (1, 2, ..., s), where for any j:

    r_j = [∑_{t=j+1}^{T} (v̂_t - v̄)(v̂_{t-j} - v̄)] / [∑_{t=1}^{T} (v̂_t - v̄)²],

    where v̄ is the sample mean of v̂_t.

  7. Residual partial autocorrelations (PACF)

    This plots the partial autocorrelation function (see the OxMetrics book).

  8. Forecasts Chow tests

    These are the Chow tests using V[ e] of (eq:15.20), available from T+1 to T+H, together with a fixed 5% critical value from χ2( n) . These are not scaled by their critical values, unlike the graphs in recursive graphics.

  9. Residuals (unscaled)

    ( v̂t) over t;

  10. Residual spectrum

    This plots the estimated spectral density (see the OxMetrics book) using v̂t as the xt variable.

  11. Residual QQ plot against N(0,1)

    Shows a QQ plot of the residuals.

  12. Residual density optionally with Histogram

    The histogram of the scaled residuals and the non-parametrically estimated density fv(.)̂ are graphed using the settings described in the OxMetrics book.

  13. Residual distribution (normal quantiles)

    Plots the distribution based on the non-parametrically estimated density.

  14. Residual cross-plots

    Let v̂it, v̂jt denote the residuals of equation i and j. This graph shows the cross-plot of v̂it against v̂jt for all marked equations (i≠j), over t=1,...,T+H.

The residuals can be saved to the database for further inspection.
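The residual correlogram of item 6 can be sketched as follows; the residual series is a white-noise stand-in, and the full-sample-mean version of the formula is used:

```python
# Sketch of the residual autocorrelation function r_j.
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(200)             # stand-in residual series v-hat

def acf(v, s):
    """Correlogram r_1, ..., r_s using deviations from the sample mean."""
    d = v - v.mean()
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[j:] * d[:-j]) / denom for j in range(1, s + 1)])

r = acf(v, 10)
```

By the Cauchy-Schwarz inequality each r_j lies in [-1, 1], and for white noise the values cluster around 0 with standard error roughly 1/√T.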

15.6 Recursive graphics

When recursive OLS (RLS) is selected, the Π matrix is estimated at each t (1≤M≤t≤T), where M is user-selected. Unlike previous versions, there is no requirement that k≤M. So OLS is used for observations 1,...,M-1, and RLS for M,...,T. The calculations proceed exactly as for the single-equation case, since the formulae for updating are unaffected by Y being a matrix rather than a vector. Indeed, the relative cost over single-equation RLS falls; but the huge number of statistics (nk(T-M+1) coefficients alone) cannot be stored in PcGive. Consequently, the graphical output omits coefficients and their t-values. Otherwise the output is similar to that for single equations, but now available for each equation in the system. In addition, system graphs are available, either of the log-likelihood or of the system Chow tests. At each t, system estimates are available, for example coefficients Π̂_t and residuals v̂_t = y_t - Π̂_t w_t. Unrestricted variables have their coefficients fixed at the full-sample values. Define V̂_t' as (v̂_1 v̂_2 ... v̂_t), and let y_t, v̂_t, w_t denote the endogenous variable, residuals and regressors of equation i at time t.

The following graphs are available for the system (the information can be printed on request):

  1. Residual sum of squares

    The residual sum of squares RSS_t for equation i is the ith diagonal element of V̂_t'V̂_t for t=M,...,T.

  2. 1-Step Residuals ±2σ̃ for equation i at each t:

    The 1-step residuals v̂_t are shown bordered by 0 ± 2σ̃_t over M,...,T. Points outside the 2-standard-error region are either outliers or are associated with coefficient changes.

  3. Log-likelihood/T

    l̂_t / t = -1/2 log|Ω̂_t| = -1/2 log|t^{-1} V̂_t'V̂_t|,   t=M,...,T.

    By definition, l̂_t ≥ l̂_{t+1}. This follows from the fact that both can be derived from a system estimated up to t+1, where l̂_t obtains from the system with a dummy for the last observation, so that l̂_{t+1} is the restricted likelihood. On the other hand, l̂_t/t ≥ l̂_{t+1}/(t+1) need not hold, as this would still require the sample-size correction as employed in l̂_t. Note that the constant is excluded from the log-likelihood here.

  4. Single equation Chow tests

    1. 1-step F-tests (1-step Chow-tests)

      1-step forecast tests are F( 1,t-k-1) under the null of constant parameters, for t=M,...,T. A typical statistic is calculated as:

      (RSS_t - RSS_{t-1})(t-k-1) / RSS_{t-1}.

      Normality of y_t is needed for this statistic to be distributed as an F.

    2. Break-point F-tests (N↓-step Chow-tests)

      Break-point F-tests are F( T-t+1,t-k-1) for t=M,...,T. These are, therefore, sequences of Chow tests and are called N↓ because the number of forecasts goes from T-M+1 to 1. When the forecast period exceeds the estimation period, this test is not necessarily optimal relative to the covariance test based on fitting the model separately to the split samples. A typical statistic is calculated as:

      (RSS_T - RSS_{t-1})(t-k-1) / (RSS_{t-1}(T-t+1)).

      This test is closely related to the CUSUMSQ statistic in Brown, Durbin, and Evans (1975).

    3. Forecast F-tests. (N↑-step Chow-tests)

      Forecast F-tests are F( t-M+1,M-k-1) for t=M,...,T, and are called N↑ as the forecast horizon increases from M to t. This tests the model over 1 to M-1 against an alternative which allows any form of change over M to T. Thus, unless M>k, blank graphs will result. A typical statistic is calculated as:

      (RSS_t - RSS_{M-1})(M-k-1) / (RSS_{M-1}(t-M+1)).

  5. System Chow tests

    1. 1-step F-tests (1-step Chow-tests)

      This uses Rao's F-approximation, with the R2 computed as:

      1- exp ( -2l̂t-1+2l̂t) ,  t=M,...,T.

    2. Break-point F-tests (N↓-step Chow-tests)

      This uses Rao's F-approximation, with the R2 computed as:

      1- exp ( -2l̂t-1+2l̂T) ,  t=M,...,T.

    3. Forecast F-tests (N↑-step Chow-tests)

      This uses Rao's F-approximation, with the R2 computed as:

      1- exp ( -2l̂M-1+2l̂t) ,  t=M,...,T.

The statistics in (4) and (5) are variants of Chow (1960) tests: they are scaled by one-off critical values from the F-distribution at any selected probability level as an adjustment for changing degrees of freedom, so that the significance values become a straight line at unity. Selecting a probability of 0 or 1 results in unscaled statistics. Note that the first and last values of (eq:15.23) respectively equal the first value of (eq:15.25) and the last value of (eq:15.24); the same relation holds for the system tests. When the system tests of (5) are computed for a single equation system, they are identical to the tests computed under (4).
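A sketch of the single-equation 1-step Chow statistic computed from a recursive RSS sequence; the RSS values, sample point and k below are hypothetical:

```python
# Sketch of the 1-step Chow statistic (RSS_t - RSS_{t-1})(t-k-1)/RSS_{t-1}.
from scipy import stats

k = 3
RSS = {59: 10.0, 60: 10.6}              # hypothetical RSS_{t-1} and RSS_t

t = 60
F_1step = (RSS[t] - RSS[t - 1]) * (t - k - 1) / RSS[t - 1]
p = stats.f.sf(F_1step, 1, t - k - 1)   # F(1, t-k-1) tail probability
```

Scaling the statistic by its critical value at a chosen probability level, as PcGive's graphs do, would make the significance line flat at unity.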

15.7 Dynamic forecasting

Dynamic (or multi-period or ex ante) system forecasts can be graphed. Commencing from period T as initial conditions:

ŷ_t = ∑_{i=1}^{m} π̂_i ŷ_{t-i} + ∑_{j=0}^{r} π̂_{j+m+1} z_{t-j}   for t=T+1,...,T+H,   (eq:15.29)
where ŷ_{t-i} = y_{t-i} for t-i ≤ T.

Such forecasts require data on ( zT+1...zT+H) for H-periods ahead (but the future values of ys are not needed), and to be meaningful also require that zt is strongly exogenous and that Π remains constant. Dynamic forecasts can be viewed with or without `error bars' (or bands) based on the equation error-variance matrix only, where the variance estimates are given by the (n ×n) top left block of:

Ṽ[e*_{T+1}] = Õ,
Ṽ[e*_{T+2}] = Õ + D̂ Õ D̂',
...
Ṽ[e*_{T+H}] = ∑_{i=0}^{H-1} D̂^i Õ D̂^{i'}.

Optionally, parameter uncertainty can be taken into account when computing the forecast error variances (but not for h-step forecasts): this is allowed only when there are no unrestricted variables. Using the companion matrix D̂, which is (nm×nm):

D̂ = ( π̂_1  π̂_2  ...  π̂_{m-1}  π̂_m )
    ( I_n   0   ...    0        0  )
    ( 0    I_n  ...    0        0  )
    ( 0     0   ...   I_n       0  )

and Õ is the (nm×nm) matrix with Ω̃ as its top-left (n×n) block and zeros elsewhere,   (eq:15.31)

Thus, uncertainty owing to the parameters being estimated is presently ignored (compare this to the parameter constancy tests based on the 1-step forecasts). If q>0, the non-modelled variables could be perturbed as in `scenario studies' on non-linear models, but here with a view to assessing the robustness or fragility of ex ante forecasts to possible changes in the conditioning variables. If such perturbations alter the characteristics of the {zt} process, super exogeneity is required.

It is also possible to graph h-step forecasts, where h≤H. This uses (eq:15.29), but with:

ŷ_{t-i} = y_{t-i}   for   t-i ≤ max(T, t-h).

To be consistent with the definition of h-step forecasts, what is graphed is the sequence of 1,...,h-1 step forecasts for T+1,...,T+h-1, followed by h-step forecasts from T+h,...,T+H. In other words: up to T+h these are dynamic forecasts, and from then on h-step forecasts up to T+H. Thus, unless there are available data not used in estimation, the h-step forecasts are just dynamic forecasts up to the value of h: such data can be reserved by using a shorter estimation sample, or by setting a non-zero number of forecasts. After h forecasts, the forecast error variance remains constant at ∑_{i=0}^{h-1} D̂^i Õ D̂^{i'}. For example, 1-step forecasts use ŷ_{t-i} = y_{t-i} for t-i ≤ max(T, t-1), and hence never use forecasted values (in this case max(T, t-1) = t-1, as t ≥ T+1). The 1-step forecast error variance used here is Ω̃, which differs from (eq:15.17) in that it ignores the parameter uncertainty. Selecting h=H yields the dynamic forecasts.
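The recursions for dynamic forecasts and their error variances can be sketched for a first-order closed system (m=1, so the companion matrix is just π̂_1); all coefficient values and the initial condition are hypothetical:

```python
# Sketch of dynamic (multi-period) forecasts and error variances for a VAR(1).
import numpy as np

n, H = 2, 5
D = np.array([[0.5, 0.1], [0.0, 0.4]])   # companion matrix D (= pi_1 when m=1)
Omega = 0.01 * np.eye(n)                 # hypothetical error covariance
y_T = np.array([1.0, 2.0])               # initial condition at time T

forecasts, variances = [], []
y, V = y_T.copy(), np.zeros((n, n))
for h in range(1, H + 1):
    y = D @ y                                        # y-hat_{T+h}
    V = Omega if h == 1 else D @ V @ D.T + Omega     # sum_{i<h} D^i Omega D^i'
    forecasts.append(y.copy())
    variances.append(V.copy())
```

The variance recursion reproduces Ṽ[e*_{T+h}] = Σ_{i=0}^{h-1} D̂^i Õ D̂^{i'}, which grows (weakly) with the horizon.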

To summarize, the following graphs are available:

  1. Dynamic forecasts over any selected horizon H for closed systems, and the available sample for open (with non-modelled variables, but no identities).

    Graphs can be without standard errors; or standard errors can be plotted as the forecast ±2SE, either error-variance based or parameter-variance based (i.e., full estimation uncertainty, if no unrestricted variables).

  2. h-step forecasts up to the end of the available sample, which is dependent on the presence or absence of non-modelled variables. In the former case, data must exist on the non-modelled variables; in the latter, the horizon H≥h is at choice.

    Graphs can be without standard errors; or error-variance based standard errors can be plotted as the forecast ±2SE.

The forecasts and standard errors can be printed. Although dynamic forecasting is not available for a system with identities, it can be obtained by mapping the system to a model of itself and specifying the identity equations.

The remaining forecast options, as discussed in Volume I, extend in a straightforward manner to multivariate models.

15.8 Dynamic simulation and impulse responses

The system can be dynamically simulated from any starting point within sample, ending at the last observation used in the estimation. Computation is as in (eq:15.29), with T+1 replaced by the chosen within-sample starting point. Given the critiques of dynamic simulation as a method of evaluating econometric systems in Hendry and Richard (1982) and Chong and Hendry (1986), this option is included to allow users to see how misleading simulation tracks can be as a guide to selecting systems. Also see Pagan (1989). Like dynamic forecasting, dynamic simulation is not available for a system with identities, but can be obtained by mapping the system to a model of itself and specifying the identity equations.

Let yt, ŷt, ŝt denote respectively the actual (that is, observed) values, the fitted values (from the estimation) and the simulated values of the selected equation, t=M,...,M+H. H is the number of simulated values, starting from M, 1≤M<T.

Four different types of graph are available:

  1. Actual and simulated values

    This is a graph showing the simulated ( ŝt) and actual values ( yt) of the dependent variable over time.

  2. Actual and simulated cross-plot

    Cross-plot of y_t against ŝ_t.

  3. Fitted and simulated values

    ŷ_t and ŝ_t against time.

  4. Simulation residuals

    Graphs ( yt-ŝt) over time.

Impulse response analysis disregards the non-modelled variables and sets the history to zero, apart from the initial values:

ŷ_t = ∑_{i=1}^{m} π̂_i ŷ_{t-i}   for t=2,...,H,

where ŷ_1 = i_1 are the initial values, and ŷ_t = 0 for t ≤ 0. This generates n² graphs, where the jth set of n graphs gives the response of the n endogenous variables to the jth initial values. These initial values i_{1,j} for the jth set of graphs can be chosen as follows:

  1. unity

    i_{1,j} = e_j: 1 for the jth variable, 0 otherwise.

  2. standard error

    i_{1,j} = σ̃_j e_j: the jth residual standard error for the jth variable, 0 otherwise.

  3. orthogonalized

    Take the Choleski decomposition of Ω̃ , Ω̃ =PP', so that P=( p1...pn) has zeros above the diagonal. The orthogonalized initial values are i1,j=pj. Thus, the outcome depends on the ordering of the variables.

Graphing is optionally of the accumulated response: ∑_{t=1}^{h} ŷ_t, h=1,...,H.
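A sketch of the impulse-response recursion for a first-order system, including the Choleski-based orthogonalized initial values; the coefficient and covariance matrices are hypothetical:

```python
# Sketch of impulse responses for a VAR(1) with two initial-value choices.
import numpy as np

n, H = 2, 8
pi1 = np.array([[0.5, 0.1], [0.2, 0.4]])      # hypothetical pi-hat_1
Omega = np.array([[1.0, 0.3], [0.3, 0.5]])    # hypothetical Omega-tilde

P = np.linalg.cholesky(Omega)     # lower triangular, Omega = P P'

def impulse(y1, H):
    """Responses y-hat_1, ..., y-hat_H from initial values y1."""
    out = [y1]
    for _ in range(H - 1):
        out.append(pi1 @ out[-1])
    return np.array(out)

resp_unit = impulse(np.array([1.0, 0.0]), H)  # unity: e_1
resp_orth = impulse(P[:, 0], H)               # orthogonalized: first column of P
```

As the text notes, the orthogonalized responses depend on the ordering of the variables, since the Choleski factor changes when rows and columns of Ω̃ are permuted.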

15.9 Dynamic analysis

After estimation, a dynamic analysis of the unrestricted reduced form (system) can be performed. Consider the system (eq:15.2), but replace the πm+j+1 by Γj:

yt=∑i=1mπiyt-i+∑j=0rΓjzt-j+vt,  vt~INn[0,Ω],

with yt (n×1) and zt ( q×1) . Use the lag operator L, defined as Lyt=yt-1, to write this as:

( I-π( L) ) yt=Γ( L) zt+vt.

So π(1) = π_1 + ... + π_m, with m the longest lag on the endogenous variable(s); and Γ(1) = Γ_0 + ... + Γ_r, with r the longest lag on the non-modelled variable(s). P̂_0 = π̂(1) - I_n can be inverted only if it is of rank p=n, in which case for q>0, y and z are fully cointegrated. If p<n, only a subset of the ys and zs are cointegrated. If P̂_0 can be inverted, we can write the estimated static long-run solution as:

ŷ = -P̂_0^{-1} Γ̂(1) z.   (eq:15.36)

If q=0, the system is closed (that is, a VAR), and (eq:15.36) is not defined. However, P0 can still be calculated, and then p characterizes the number of cointegrating vectors linking the ys. We use + to denote that the outcome is reported only if there are non-modelled variables.

If there are no identities PcGive computes:

  1. Long-run matrix Pi(1)-I = Po: π̂(1) - I_n = P̂_0;

  2. Long-run covariance: P̂_0^{-1} Ω̃ P̂_0^{-1'};

  3. Static long run: -P̂_0^{-1} Γ̂(1);+

  4. Standard errors of static long run;+

  5. Mean-lag matrix: Ψ̂ = ∑_{i=1}^{m} i π̂_i; only shown if there is more than one lag.

  6. Eigenvalues of long-run matrix: eigenvalues of π̂(1) - I_n;

  7. I(2) matrix Gamma: the long-run matrix of the system in equilibrium-correction form;

  8. Eigenvalues of companion matrix D̂, given in (eq:15.31).

Dynamic analysis is not available for a system with identities, but is available again after estimating the simultaneous equations model.
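The long-run quantities of items 1, 3 and 6 can be sketched as follows, for a hypothetical one-lag system with a single non-modelled variable:

```python
# Sketch of the long-run matrix, static long-run solution and its eigenvalues.
import numpy as np

n = 2
pi1 = np.array([[0.6, 0.1], [0.1, 0.5]])   # hypothetical pi-hat(1), m = 1
Gamma1 = np.array([[0.3], [0.2]])          # hypothetical Gamma-hat(1), q = 1

P0 = pi1 - np.eye(n)                       # long-run matrix pi-hat(1) - I_n
static_long_run = -np.linalg.solve(P0, Gamma1)   # -P0^{-1} Gamma-hat(1)
eigvals = np.linalg.eigvals(P0)            # eigenvalues of the long-run matrix
```

Here P̂_0 has full rank (both eigenvalues are away from zero), so the static long-run solution exists; a reduced-rank P̂_0 would instead point to cointegration analysis.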

15.9.1 I(1) cointegration analysis

When the system is closed in the endogenous variables, express P0 in (eq:15.2) as αβ', where α and β are ( n×p) matrices of rank p. Although vt~INn[0,Ω], and so is stationary, the n variables in yt need not all be stationary. The rank p of P0 determines how many linear combinations of yt are stationary. If p=n, all variables in yt are stationary, whereas p=0 implies that Δyt is stationary. For 0<p<n, there are p cointegrated (stationary) linear combinations of yt. The rank of P0 is estimated using the maximum likelihood method proposed by Johansen (1988), summarized here.

First, partial out from Δyt and yt-1 in (eq:15.2) the effects of the lagged differences ( Δyt-1...Δyt-m+1) and any variables classified as unrestricted (usually the Constant or Trend, but any other variable is allowed as discussed below). This yields the residuals R0t and R1t respectively. Next compute the second moments of all these residuals, denoted S00, S01 and S11 where:

Sij= 1/T ∑t=1TRitRjt' for i, j=0,1.

Now solve |λS_11 - S_10 S_00^{-1} S_01| = 0 for the eigenvalues 1 > λ̂_1 > ... > λ̂_p > ... > λ̂_n > 0 and the corresponding eigenvectors:

β̂ =( β̂ 1,...,β̂ p) normalized by β̂ 'S11β̂ =Ip.

Then, tests of the hypothesis of p cointegrating vectors can be based on the trace statistic:

η_p = -T ∑_{i=p+1}^{n} log(1 - λ̂_i).

The cointegrating combinations β'yt-1 are the I(0) linear combinations of the I(1) variables which can be used as equilibrium correction mechanisms (ECMs).
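The reduced-rank step can be sketched with a generalized eigenvalue solver; the partialled-out residuals below are random stand-ins for R_0t and R_1t, so the resulting trace statistics are illustrative only and not a substitute for PcGive's implementation:

```python
# Sketch of the eigenvalue step of the Johansen trace test.
import numpy as np
from scipy import linalg

rng = np.random.default_rng(3)
T, n = 200, 2

R0 = rng.standard_normal((T, n))    # stand-in residuals from Dy regression
R1 = rng.standard_normal((T, n))    # stand-in residuals from y_{t-1} regression

S00 = R0.T @ R0 / T
S01 = R0.T @ R1 / T
S11 = R1.T @ R1 / T

# solve |lambda S11 - S10 S00^{-1} S01| = 0 as a generalized eigenproblem
A = S01.T @ np.linalg.solve(S00, S01)          # S10 S00^{-1} S01
lam = np.sort(linalg.eigvals(A, S11).real)[::-1]   # descending eigenvalues

# trace statistics eta_p = -T sum_{i=p+1}^{n} log(1 - lambda_i)
trace = [-T * np.sum(np.log(1 - lam[p:])) for p in range(n)]
```

The eigenvalues are squared canonical correlations between R_0t and R_1t, so they lie strictly between 0 and 1, and the trace statistics decrease as the hypothesized rank p rises.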

Any non-endogenous variables zt can enter in two ways:

  1. Unrestricted: they are partialled out prior to the ML procedure: denote these qu variables by zut.

  2. Restricted: the qr variables zrt are forced to enter the cointegrating space, which can then be written as β'(yt-1:zt-1r), with β' now a (p×(n+qr)) matrix.

If lagged values zt, ...,zt-m enter, reparametrize as Δztu, ...,Δzt-m+1u,zt-1r.

Output of the I(1) cointegration analysis is:

  1. Eigenvalues λ̂ i and the log-likelihood for each rank:

    l*_c = K_c - T/2 log|S_00| - T/2 ∑_{i=1}^{p} log(1 - λ̂_i),   p=0,...,n.

  2. Sequence of Trace test statistics

    The test statistics η_p for H(rank ≤ p) are listed with p-values based on Doornik (1998); * and ** mark significance at 95% and 99%. Testing commences at H(rank = 0), and stops at the first insignificant statistic.

    The asymptotic p-values are available for the following cases:

    Hypothesis   Constant      Trend
    Hql(p)       unrestricted  unrestricted
    Hl(p)        unrestricted  restricted
    Hlc(p)       unrestricted  none
    Hc(p)        restricted    none
    Hz(p)        none          none

    Strictly speaking, new critical values should be computed for all other cases.

  3. β̂ eigenvectors, standardized on the diagonal.

  4. α̂ coefficients, corresponding to the standardized β̂.

  5. Long-run matrix P̂_0 = α̂β̂', rank n.

15.9.2 I(2) Cointegration analysis

In the I(2) analysis, which requires at least two lags of the endogenous variables, there is potentially an additional reduced rank restriction on the long-run matrix Γ of the model in first differences (equilibrium-correction form):

α_⊥'Γβ_⊥ = ξη',

where ξ and η are ((n-p)×s) matrices. In the analysis we have the parameter counts

s the number of I(1) relations,
n-p-s the number of I(2) relations,
p the rank of P0.

The test statistics are

Qp: H(rank(P0)≤p | rank(P0)≤n),
Sp,s: H(rank(P0) ≤ p and n−p−s I(2) components | rank(P0) ≤ n).

For example, when n=4 PcGive will print a table consisting of eigenvalues λ̂p+1 and μ̂p+1,s+1, test statistics Qp and Sp,s, and p-values, for p=0,...,3 and n−p−s=1,...,4, as follows:

    n−p−s=0   λ̂1     λ̂2     λ̂3     λ̂4
    Qp        Q0      Q1      Q2      Q3
    [pval]    p       p       p       p
    p = 0     μ̂1,1   μ̂1,2   μ̂1,3   μ̂1,4
    Sp,s      S0,0    S0,1    S0,2    S0,3
    [pval]    p       p       p       p
    p = 1     -       μ̂2,2   μ̂2,3   μ̂2,4
    Sp,s      -       S1,1    S1,2    S1,3
    [pval]    -       p       p       p
    p = 2     -       -       μ̂3,3   μ̂3,4
    Sp,s      -       -       S2,2    S2,3
    [pval]    -       -       p       p
    p = 3     -       -       -       μ̂4,4
    Sp,s      -       -       -       S3,3
    [pval]    -       -       -       p

Hypothesis testing normally proceeds down the columns, from top left to bottom right, stopping at the first insignificant test statistic.

15.10 System testing

15.10.1 Introduction

Many test statistics in PcGive have either a χ2 distribution or an F distribution. F-tests are usually reported as:

F(num,denom)  =  Value  [Probability]  /*/**

for example:

F(1, 155)     =  5.0088 [0.0266] *

where the test statistic has an F-distribution with one degree of freedom in the numerator and 155 in the denominator. The observed value is 5.0088, and the probability of getting a value of 5.0088 or larger under this distribution is 0.0266. This is less than 5% but more than 1%, hence the single star. Significant outcomes at the 1% level are shown by two stars.

χ2 tests are also reported with probabilities, as for example:

Normality Chi2(2)= 2.1867 [0.3351]

The 5% critical value of a χ2 with two degrees of freedom is 5.99, so here normality is not rejected (alternatively, Prob(χ2 ≥ 2.1867) = 0.3351, which is more than 5%).
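The reported probabilities can be reproduced with any routine for the relevant upper-tail probabilities; a sketch using SciPy's distribution functions (an assumption of this illustration, not the program's own algorithms) gives the same values as the two examples above:

```python
from scipy import stats

# F(1,155) = 5.0088 from the example above:
p_f = stats.f.sf(5.0088, 1, 155)    # upper-tail probability, ~0.0266

# Normality Chi^2(2) = 2.1867:
p_chi = stats.chi2.sf(2.1867, 2)    # ~0.3351
```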

The probability values for the F-test are calculated using an algorithm based on Majunder and Bhattacharjee (1973a) and Cran, Martin, and Thomas (1977).

[Note: As recommended in Cran, Martin, and Thomas (1977), the approach in Pike and Hill (1966) is used for the logarithm of the gamma function.]

Those for the χ2 are based on Shea (1988). The significance points of the F-distribution derive from Majunder and Bhattacharjee (1973b).

Some tests take the form of a likelihood ratio (LR) test. If l is the unrestricted, and l0 the restricted, log-likelihood, then under the null hypothesis that the restrictions are valid, -2(l0-l) has a χ2(s) distribution, with s the number of restrictions imposed (so model l0 is nested in l).

Many diagnostic tests are done through an auxiliary regression. In the case of single-equation tests, they take the form of TR2 for the auxiliary regression, so that they are asymptotically distributed as χ2(s) under their nulls, and hence have the usual additive property for independent χ2s. In addition, following Harvey (1990) and Kiviet (1986), F-approximations of the form:

R2/(1 − R2) · (T − k − s)/s ~ F(s, T − k − s)

are calculated because they may be better behaved in small samples.
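As a sketch, both the χ2 form and the F form follow directly from the R2 of the auxiliary regression (the values below are hypothetical):

```python
def lm_tests(R2, T, k, s):
    """TR^2 (asymptotically chi^2(s) under the null) and the F-approximation
    F = (R2/(1-R2)) * (T-k-s)/s ~ F(s, T-k-s), both computed from the R^2
    of the auxiliary regression with s added regressors."""
    chi2_stat = T * R2
    f_stat = (R2 / (1.0 - R2)) * (T - k - s) / s
    return chi2_stat, f_stat

# hypothetical auxiliary-regression R^2 of 0.05, T=100, k=4, s=2:
chi2_stat, f_stat = lm_tests(R2=0.05, T=100, k=4, s=2)
```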

Whenever the vector tests are implemented through an auxiliary multivariate regression, PcGive uses vector analogues of the χ2 and F statistics. The first is an LM test in the auxiliary system, defined as TnR²m; the second uses the F-approximation based on R²r. The vector tests reduce to the single-equation tests in a one-equation system. All tests are summarized below.

15.10.2 Single equation diagnostics

Diagnostic testing in PcGive is performed at two levels: individual equations and the system as a whole. Individual equation diagnostics take the residuals from the system, and treat them as from a single equation, ignoring that they form part of a system. Usually this means that they are only valid if the remaining equations are problem-free.

  1. Portmanteau statistic

    This is a degrees-of-freedom corrected version of the Box and Pierce (1970) statistic. It is only a valid test in a single equation with strongly exogenous variables. If s is the chosen lag length and m the lag length of the dependent variable, values ≥2(s-m) could indicate residual autocorrelation. Conversely, small values of this statistic should be treated with caution as residual autocorrelations are biased towards zero when lagged dependent variables are included in econometric equations. An appropriate test for residual autocorrelation is provided by the LM test for autocorrelated residuals. The autocorrelation coefficients rj, see (eq:15.21), are also reported.

  2. LM test for autocorrelated residuals

    This test is performed through the auxiliary regression of the residuals on the original variables and lagged residuals (missing lagged residuals at the start of the sample are replaced by zero, so no observations are lost). Unrestricted variables are included in the auxiliary regression. The null hypothesis is no autocorrelation, which would be rejected if the test statistic is too high. This LM test is valid for systems with lagged dependent variables and diagonal residual autocorrelation, whereas neither the Durbin--Watson nor the residual autocorrelations provide a valid test in that case. The χ2 and F-statistic are shown, as are the error autocorrelation coefficients, which are the coefficients of the lagged residuals in the auxiliary regression.

  3. LM test for autocorrelated squared residuals

    This is the ARCH test (AutoRegressive Conditional Heteroscedasticity: see Engle, 1982) which in the present form tests the joint significance of lagged squared residuals in the regression of squared residuals on a constant and lagged squared residuals. The χ2 and F-statistic are shown, in addition to the ARCH coefficients, which are the coefficients of the lagged squared residuals in the auxiliary regression.

  4. Test for normality

    This is the test proposed by Doornik and Hansen (1994), and amounts to testing whether the skewness and kurtosis of the residuals correspond to those of a normal distribution. Before reporting the actual test, PcGive reports the following statistics of the residuals: mean (0 for the residuals), standard deviation, skewness (0 in a normal distribution), excess kurtosis (0 in a normal distribution), minimum and maximum.

  5. Test for heteroscedasticity

    This test is based on White (1980), and involves an auxiliary regression of the squared residuals on the original regressors and all their squares. The null is unconditional homoscedasticity, and the alternative is that the variance of the error process depends on the regressors and their squares. The output comprises TR2, the F-test equivalent, and the coefficients of the auxiliary regression plus their individual t-statistics to help highlight problem variables. Unrestricted variables are excluded from the auxiliary regression, but a constant is always included. Variables that are redundant when squared or collinear are automatically removed.
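The residual summary statistics reported with the normality test (item 4 above) are simple moment computations; a minimal sketch (the program's exact conventions may differ in detail):

```python
import math

def residual_moments(res):
    """Mean, standard deviation, skewness and excess kurtosis of a residual
    series, using central moments m2, m3, m4 divided by T."""
    T = len(res)
    m = sum(res) / T
    dev = [e - m for e in res]
    m2 = sum(d * d for d in dev) / T
    m3 = sum(d ** 3 for d in dev) / T
    m4 = sum(d ** 4 for d in dev) / T
    sd = math.sqrt(m2)
    skew = m3 / sd ** 3            # 0 in a normal distribution
    exkurt = m4 / m2 ** 2 - 3.0    # 0 in a normal distribution
    return m, sd, skew, exkurt

# hypothetical symmetric residuals: zero mean and zero skewness
m, sd, skew, exkurt = residual_moments([-1.0, -0.5, 0.0, 0.5, 1.0])
```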

15.10.3 Vector tests

The present incarnation of PcGive has various formal system mis-specification tests for within-sample congruency.

  1. Vector portmanteau statistic

    This is the multivariate equivalent of the single-equation portmanteau statistic (again using a small-sample correction), and only a valid asymptotic test in a VAR.

  2. Vector error autocorrelation test

    Lagged residuals (with missing observations for lagged residuals set to zero) are partialled out from the original regressors, and the whole system is re-estimated, providing a Lagrange-multiplier test based on comparing the likelihoods for both systems.

  3. Vector normality test

    This is the multivariate equivalent of the aforementioned single-equation normality test. It checks whether the residuals are normally distributed, vt ~ INn[0, Ω], by checking their skewness and kurtosis. A χ2(2n) test for the null hypothesis of normality is reported, in addition to the transformed skewness and kurtosis of the rotated components.

  4. Vector heteroscedasticity test (using squares)

    This test amounts to a multivariate regression of all error variances and covariances on the original regressors and their squares. The test is χ2(sn(n+1)/2), where s is the number of non-redundant added regressors (collinear regressors are automatically removed). The null hypothesis is no heteroscedasticity, which would be rejected if the test statistic is too high. Note that regressors that were classified as unrestricted are excluded.

  5. Vector heteroscedasticity test (using squares and cross-products)

    This test is similar to the heteroscedasticity test, but now cross-products of regressors are added as well. Again, the null hypothesis is no heteroscedasticity (the name functional form was used in version 8 of PcGive).

15.10.4 Testing for general restrictions

Writing θ̂ = vec Π̂′, with corresponding variance-covariance matrix Ṽ[θ̂], we can test for (non-)linear restrictions of the form:

f(θ) = 0.

The null hypothesis H0: f(θ)=0 will be tested against H1: f(θ)≠0 through a Wald test:

w = f(θ̂)′ (J Ṽ[θ̂] J′)⁻¹ f(θ̂)

where J is the Jacobian matrix of the transformation: J = ∂f(θ)/∂θ′, which PcGive computes by numerical differentiation. The statistic w has a χ2(s) distribution, where s is the number of restrictions (that is, equations in f(·)). The null hypothesis is rejected if we observe a significant test statistic.
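A sketch of the Wald computation, with the Jacobian obtained by forward numerical differentiation; the restriction, θ̂ and Ṽ[θ̂] below are hypothetical:

```python
import numpy as np

def wald_test(f, theta_hat, V, eps=1e-6):
    """Wald statistic w = f(th)'(J V J')^{-1} f(th), where the Jacobian
    J = df/dtheta' is obtained by forward numerical differentiation.
    Under H0, w is asymptotically chi^2(s) with s restrictions."""
    f0 = np.atleast_1d(f(theta_hat)).astype(float)
    s, k = len(f0), len(theta_hat)
    J = np.empty((s, k))
    for j in range(k):
        th = np.array(theta_hat, dtype=float)
        th[j] += eps
        J[:, j] = (np.atleast_1d(f(th)) - f0) / eps
    w = float(f0 @ np.linalg.solve(J @ V @ J.T, f0))
    return w, s

# hypothetical example: test the single restriction theta_1 - theta_2 = 0
theta = np.array([1.2, 1.0])
V = np.array([[0.04, 0.01],
              [0.01, 0.04]])
w, s = wald_test(lambda t: np.array([t[0] - t[1]]), theta, V)
```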

Output consists of:

  1. Wald test for general restrictions, this is the statistic w with its p-value;

  2. *Restricted variance, the matrix J Ṽ[θ̂] J′.

15.11 Progress

PcGive can be used in two ways: for general-to-specific modelling, and for unordered searches.

In the general-to-specific approach:

  1. Begin with the dynamic system formulation.

  2. Check its data coherence and cointegration.

  3. Map the system to I(0) after cointegration analysis.

  4. Transform to a set of variables with low intercorrelations, but interpretable parameters.

  5. Check the validity of the system by thorough testing.

  6. Move to the dynamic model formulation.

  7. Delete unwanted regressors to obtain a parsimonious model.

  8. Check the validity of the model by thorough testing, particularly parsimonious encompassing.

Nothing commends unordered searches:

  1. No control is offered over the significance level of testing.

  2. A `later' reject outcome invalidates all earlier ones.

  3. Until a model adequately characterizes the data, standard tests are invalid.

  4. If the system displays symptoms of mis-specification, there is little point in imposing further restrictions on it.

PcGive does not enforce a general-to-simple modelling strategy, but it will automatically monitor the progress of the sequential reduction from the general to the specific, and will provide the associated likelihood-ratio tests.

More precisely, the program will record a sequence of systems, and for the most recent system the sequence of models (which could be empty). The program gives a list of the selected systems and models, reporting the estimation method, sample size (T), number of coefficients (k), and the log-likelihood (Kc − T/2 log|Ω̂|). Three information criteria are also reported: the Schwarz criterion, the Hannan--Quinn criterion and the Akaike criterion, see §15.4.6.
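As an illustration, under one common convention the three criteria are simple functions of log|Ω̂|, T and k; the exact definitions used by the program are those of §15.4.6, and the input value below is hypothetical:

```python
import math

def info_criteria(logdet_omega, T, k):
    """One common convention for system information criteria (may differ
    from the program's exact definitions): each penalizes log|Omega_hat|
    by an increasing function of the number of coefficients k."""
    aic = logdet_omega + 2.0 * k / T
    hq  = logdet_omega + 2.0 * k * math.log(math.log(T)) / T
    sc  = logdet_omega + k * math.log(T) / T
    return aic, hq, sc

# hypothetical log|Omega_hat| = -2.5, T = 100, k = 8:
aic, hq, sc = info_criteria(-2.5, T=100, k=8)
```

For T above about 16 the penalties are ordered, so SC penalizes extra coefficients most heavily and AIC least.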

Following this, PcGive will report the F-tests (based on Rao's F-approximation) indicating the progress in system modelling, as well as likelihood-ratio tests (χ2) of the progress in modelling that system (tests of over-identifying restrictions).

Chapter 16 Cointegrated VAR

Following on from §15.9.1, it is possible to estimate a cointegrated VAR which has a reduced rank long-run matrix P0=αβ', or, possibly, additional restrictions on α or β.

Following estimation of a cointegrated VAR, most evaluation facilities of the unrestricted system are available, but with the πi in (eq:15.1) replaced with the π̂i from the restricted VAR. Note that this is different from version 9 of PcFiml, where evaluation was still based on the unrestricted system.

All β̂ and α̂ below relate to the restricted estimates, according to the selected rank p, and any additional restrictions imposed in the cointegrated VAR estimation (note that it is possible to impose no restrictions at all).

16.0.1 Cointegration restrictions

Within a cointegration VAR analysis, restrictions on α and β can be imposed:

  1. general (non-linear) restrictions on α and β';

  2. restricted α: αr=Aθ;

  3. restricted β: βr=Hφ;

  4. known β: βr=[H:φ];

  5. (1) and (2) jointly;

  6. (1) and (3) jointly.

PcGive requires you to choose the rank p. For (2)--(6), the restrictions involving α or β are expressed through the A and/or H matrix. The general restrictions of (1), (5) and (6) are expressed directly in terms of the elements of α and β′.

16.1 Cointegrated VAR output

Output of the cointegration estimation is:

  1. β̂ ;

  2. Standard errors of β̂, but only if the restricted β̂ is identified;

  3. α̂ ;

  4. Standard errors of α̂;

  5. Long-run matrix P̂0 = α̂β̂′, rank p;

  6. Standard errors of P̂0;

  7. Reduced form beta

    Partition β̂′ as:

    β̂′ = ( β̂11′  β̂12′ )
         ( β̂21′  β̂22′ )

    where β̂11′ is the top left (p×p) block of β̂′; then, when β̂11′ is non-singular, the reduced form matrix is:

    −(β̂11′)⁻¹ β̂12′.

  8. Moving-average impact matrix.
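A minimal sketch of the reduced-form β computation in item 7, using a hypothetical standardized β̂′ with p = 2 and one further variable in the cointegrating space:

```python
import numpy as np

def reduced_form_beta(beta_t, p):
    """Partition beta' (p x (n+qr)) into its leading p x p block b11 and
    the remainder b12, and return -(b11)^{-1} b12.
    Requires the leading block b11 to be non-singular."""
    b11, b12 = beta_t[:, :p], beta_t[:, p:]
    return -np.linalg.solve(b11, b12)

# hypothetical standardized beta' (diagonal of the leading block is 1):
rf = reduced_form_beta(np.array([[1.0, 0.5, -2.0],
                                 [0.0, 1.0,  1.0]]), p=2)
```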

This is followed by:

  1. the log-likelihood (eq:15.13), -T/2 log | Ω̂ | ;

  2. T, the number of observations used in the estimation, and the number of parameters in all equations;

  3. rank of long-run matrix p;
  4. number of long-run restrictions in excess of the reduced rank restriction;
  5. a message whether β is identified or not;
  6. a χ2 test for over-identifying restrictions if any have been imposed.

16.2 Graphic analysis

Some graphs additional to those listed in §15.5 are available for a cointegrated VAR.

For generality, assume that the variables zr were restricted to lie in the cointegrating space. Let α0, β0' denote the original standardized loadings and eigenvectors; αr, βr' are obtained after imposing further restrictions on the cointegrating space. In the unrestricted graphs, the analysis proceeds as if no rank has been chosen yet, corresponding to n eigenvectors. The restricted analysis requires selection of p, the rank of the cointegrating space, thus resulting in fewer graphs.

Write (y;z) for (y′;z′), and let (yt;zr) denote the original levels of the endogenous variables and the variables restricted to lie in the cointegrating space; r1t = (y̌t−1; žr) are the residuals from regressing (yt−1;zr) on the short-run dynamics ({Δyt−i}) and unrestricted variables (zu). For all graphs there are two variants:

  1. Use (Y:Z)

    This uses (yt;zr).

  2. Use (Y_1:Z) with lagged DY and U removed

    This uses r1t.

The available graphs are:

  1. Cointegration relations

    β̂0′(yt;zr), or β̂0′r1t. Write the standardized ith eigenvector as (β1 ... βn βn+1 ... βn+qr)′, standardized so that βi = 1. The ith cointegration relation graph is: ∑j βj yjt + ∑k βk zkt, and using concentrated components: ∑j βj y̌j,t−1 + ∑k βk žkt.

  2. Actual and fitted

    The graphs of the cointegrating relations are split into two components: the actuals yt and the fitted values yt − β̂0′(yt;zr). All lines are graphed in deviation from mean. Alternatively: the y̌t−1 and the fitted values y̌t−1 − β̂0′r1t, in deviation from mean. Considering the ith graph of actual and fitted, using the above notation for the standardized ith eigenvector: yit and yit − ∑j βj yjt − ∑k βk zkt = −∑j≠i βj yjt − ∑k βk zkt, whereas using concentrated components: y̌i,t−1 and −∑j≠i βj y̌j,t−1 − ∑k βk žkt.

  3. Components of relation

    Graphs all the components of β̂0′(yt;zr) or β̂0′r1t, in deviations from their means. For the ith graph: yit, βj yjt (j≠i), βk zkt, all in deviation from their means. Using concentrated components: y̌i,t−1, βj y̌j,t−1 (j≠i), βk žkt, also in deviation from means.

16.3 Recursive graphics

Recursive graphics is available when the cointegrated VAR is estimated recursively. Unrestricted variables and short-run dynamics can be fixed at their full-sample coefficients, or partialled out at each sample size.

The types of graphs are:

  1. Eigenvalues λ̂ it. These are only available if no additional restrictions have been imposed.

  2. Log-likelihood/T, see (eq:15.22).

    Only available if additional restrictions were used (this option can be selected without specifying any code, that is, without imposing any restrictions).

  3. Test for restrictions

    The χ2 test for the restrictions; its critical value is also shown. Only available if long-run restrictions were imposed. The p-value can be set.

  4. Beta coefficients

    Recursive βs. Only available if β is identified.

Chapter 17 Simultaneous Equations Model

Once a statistical system has been adequately modelled and its congruency satisfactorily evaluated, an economically meaningful structural interpretation can be sought. The relevant class of model has the form:

Byt+Cwt=ut, ut~INn[ 0,Σ] , t=1,...,T.

The diagonal of B is normalized at unity. More concisely:

AX′ = U′,

with A=(B:C) and X=(Y:W). PcGive accepts only linear, within-equation restrictions on the elements of A for the initial specification of the identified model, but allows for further non-linear restrictions on the parameters (possibly across equations). The order condition for identification is enforced, and the rank condition is required to be satisfied for arbitrary (random) non-zero values of the parameters.

A subset of the equations can be identities, but otherwise Σ is assumed to be positive definite and unrestricted. When identities are present, the model to be estimated is written as:

( A1 ) X′ = ( U′ )
( A2 )      ( 0  )

where A1X′=U′ is the subset of n1 stochastic equations and A2X′=0 is the subset of n2 identities, with n1+n2=n. PcGive requires specification of the variables involved in the identities, but will derive the coefficients A2.

17.1 Model estimation

Let φ denote the vector of unrestricted elements of vec(A1′): φ = (vec A1′)u. Then l(φ) is maximized as an unrestricted function of the elements of φ. On convergence, we have the maximum likelihood estimator (MLE) of φ:

φ̂ = argmax φ∈Φ l(φ)

and so have the MLE of A1; as all other elements of A2 are known, we have the MLE of A. If convergence does not occur, reset the parameter values, and use a looser (larger) convergence criterion to obtain output.

The estimated variance of ut is:

Σ̃ = (T−c)⁻¹ Û′Û,  Û′ = Â1X′,

which is (n1×n1). The degrees-of-freedom correction c equals the average number of parameters per equation (rounded towards 0); this would be k for the system.

From Â, we can derive the MLE of the restricted reduced form:

Π̂ = −B̂⁻¹Ĉ

and hence the estimated variances of the elements of φ̂:

Ṽ[φ̂] = {(Σ̃⁻¹ ⊗ QW′WQ′)u}⁻¹

where, before inversion, we choose the rows and columns of the right-hand side corresponding to unrestricted elements of A1 only, and Q′=(Π′:I).

The covariance matrix of the restricted reduced form residuals is obtained by writing:

B⁻¹ = ( B11  B12 )
      ( B21  B22 )

where B11 is (n1×n1). Then:

Ω = ( B11 ) Σ ( B11′ : B21′ ),
    ( B21 )

which is estimated by the block:

Ω̃11 = B̂11 Σ̃ B̂11′

corresponding to the stochastic equations. The estimated variance matrix of the restricted reduced form coefficients is:

Ṽ[vec Π̂′] = J Ṽ[φ̂] J′, where J = −(B⁻¹ ⊗ (Π′:I))u.
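The mapping from a structural form to its restricted reduced form can be sketched numerically; the B, C and Σ below are hypothetical values for a two-equation model without identities, not estimates:

```python
import numpy as np

# Hypothetical structural form B y_t + C w_t = u_t, diag(B) normalized at 1:
B = np.array([[ 1.0,  0.5],
              [ 0.2,  1.0]])
C = np.array([[-0.3,  0.0],
              [ 0.0, -0.8]])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])

Pi = -np.linalg.solve(B, C)       # restricted reduced form Pi = -B^{-1} C
Binv = np.linalg.inv(B)
Omega = Binv @ Sigma @ Binv.T     # reduced-form residual covariance
```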

17.2 Model output

Model estimation follows the successful estimation of the unrestricted reduced form. The sample period and number of forecasts carry over from the system.

The following information is needed to estimate a model:

  1. The model formulation.

  2. The method of estimation:
     Full information maximum likelihood (FIML);

     Three stage least squares (3SLS);

     Two stage least squares (2SLS);

     Single equation OLS (1SLS);

     Constrained FIML (CFIML)

    is available when selecting constrained simultaneous equations model.

  3. The number of observations to be used to initialize the recursive estimation (FIML and CFIML).

  4. For CFIML: the code specifying the parameter constraints.

All model estimation methods in PcGive are derived from the estimator-generating equation (EGE). We require the reduced form to be a congruent data model, for which the structural specification is a more parsimonious representation.

The model output coincides to a large extent with the system output. In the following we only note some differences:

  1. Identities

    Gives the coefficients of the n2 identity equations, together with the R2 of each equation, which should be 1 (values ≥ 0.99 are accepted).

  2. Structural coefficients and standard errors, φ̂ and (Ṽ[φ̂]ii)½, given for all n1 equations.

  3. t-value and t-probability

    The t-probabilities are based on a Student t-distribution with T-c degrees of freedom. The correction c is defined below equation (eq:17.5).

  4. Equation standard error (σ)

    The square root of the structural residual variance for each equation:

    (Σ̃ ii)½ for i=1,...,n1.

  5. Likelihood

    The log-likelihood value is (including the constant Kc):

    l̂ = Kc − T/2 log|Σ̃| + T log||B̂|| = −Tn/2 (1 + log 2π) − T/2 log|Ω̂11|.

    Reported are l̂ , -T/2 log |Ω̂ 11|, |Ω̂ 11| and the sample size T.

  6. LR test of over-identifying restrictions

    This tests whether the model is a valid reduction of the system.

  7. *Reduced form estimates, consisting of:
    1. Reduced form coefficients;

    2. Reduced form coefficient standard errors;

    3. Reduced form equation standard errors.

  8. *Heteroscedastic-consistent standard errors

    HCSE for short; computed for FIML only, but not for unrestricted variables. These provide consistent estimates of the regression coefficients' standard errors even if the residuals are heteroscedastic in an unknown way. Large differences between the HCSE and SE are indicative of the presence of heteroscedasticity, in which case the HCSE provides the more useful measure of the standard errors (see White, 1980). They are computed as Q⁻¹IQ⁻¹, with Q = Ṽ[φ̂]⁻¹ and I = ∑t=1,…,T qtqt′, the outer product of the gradients.

17.3 Graphic analysis

Graphic analysis focuses on graphical inspection of individual restricted reduced form equations. Let yt, ŷt denote respectively the actual (that is, observed) values and the fitted values of the selected equation, with RRF residuals v̂t = yt-ŷt, t=1,...,T. If H observations are used for forecasting, then ŷT+1,...,ŷT+H are the 1-step forecasts.

Except for substituting the (restricted) reduced form residuals, graphic analysis follows the unrestricted system, see §15.5.

17.4 Recursive graphics

When recursive FIML or CFIML is selected, the φ and Σ matrices are estimated at each t  ( k≤M≤t≤T) where M is user selected. For each t, the RRF can be derived from this.

The recursive graphics options follow §15.6, with the addition of the tests for over-identifying restrictions.

Let l̂ t be the log-likelihood of the URF, and l̂ 0,t the log-likelihood of the RRF. The tests for over-identifying restrictions, 2( l̂ t-l̂ 0,t) , can be graphed with a line graphing the critical value from the χ2(s) distribution (s is the number of restrictions) at a chosen significance level.

17.5 Dynamic analysis, forecasting and simulation

These proceed as for the system, but based on the restricted reduced form. Graphs are available for identity equations.

Impulse response analysis maps the dynamics of the endogenous variables through the restricted reduced form. The initial values i1,j for the jth set of graphs can be chosen as follows:

  1. unity

    i1,j = −B⁻¹ej,

    where ej is the jth unit vector.

  2. standard error

    i1,j = −B⁻¹ej σ̃jj½,

    where σ̃jj is the jth diagonal element of Σ̃.

  3. orthogonalized

    i1,j = −B⁻¹pj,

    where pj is the jth column of the Choleski decomposition of Σ̃, and it is padded with zeros for identity equations. As in the system, the outcome depends on the ordering of the variables.

  4. custom

    i1,j = −B⁻¹vj,

    where vj is specified by the user.
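A sketch of the non-custom choices for a hypothetical two-equation model without identities, following the sign convention used above (B and Σ̃ are illustrative values, not estimates):

```python
import numpy as np

# Hypothetical B and Sigma for a 2-equation model, diag(B) normalized at 1:
B = np.array([[1.0, 0.5],
              [0.2, 1.0]])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])

Binv = np.linalg.inv(B)
P = np.linalg.cholesky(Sigma)    # lower triangular, Sigma = P P'

j = 0                            # impulse in the first equation
e_j = np.eye(2)[:, j]
i_unity = -Binv @ e_j                          # unity
i_se    = -Binv @ e_j * np.sqrt(Sigma[j, j])   # one standard error
i_orth  = -Binv @ P[:, j]                      # orthogonalized (order-dependent)
```

Reordering the equations changes the Choleski factor P, and hence the orthogonalized impulses, which is the order-dependence noted in item 3.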

17.6 Model testing

The vector error autocorrelation test partials lagged structural residuals out from the original regressors, and re-estimates the model. All other tests take the residuals from the RRF, and operate as for the system.

Note, however, that application of single-equation autocorrelation and heteroscedasticity tests in a model will lead to all reduced-form variables being used in the auxiliary regression. If the model is an invalid reduction of the system, this may cause the tests to be significant. Equally, valid reduction combined with small amounts of system residual autocorrelation could induce significant single-equation model autocorrelation. The usual difficulty of interpreting significant test outcomes is prominent here.

A similar feature operates for the vector heteroscedasticity tests, where all reduced-form variables (but not those classified as unrestricted) are used in the auxiliary regression.


Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, (2nd ed.). New York: John Wiley & Sons.

Banerjee, A., J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993). Co-integration, Error Correction and the Econometric Analysis of Non-Stationary Data. Oxford: Oxford University Press.

Banerjee, A. and D. F. Hendry (Eds.) (1992). Testing Integration and Cointegration. Oxford Bulletin of Economics and Statistics: 54.

Bårdsen, G. (1989). The estimation of long run coefficients from error correction models. Oxford Bulletin of Economics and Statistics 50.

Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman (1974). Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3, 653--665.

Boswijk, H. P. (1992). Cointegration, Identification and Exogeneity, Volume 37 of Tinbergen Institute Research Series. Amsterdam: Thesis Publishers.

Boswijk, H. P. (1995). Identifiability of cointegrated systems. Discussion paper ti 7-95-078, Tinbergen Institute, University of Amsterdam.

Boswijk, H. P. and J. A. Doornik (2004). Identifying, estimating and testing restricted cointegrated systems: An overview. Statistica Neerlandica 58, 440--465.

Bowman, K. O. and L. R. Shenton (1975). Omnibus test contours for departures from normality based on √b1 and b2. Biometrika 62, 243--250.

Box, G. E. P. and D. A. Pierce (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association 65, 1509--1526.

Britton, E., P. Fisher, and J. Whitley (1998). Inflation Report projections: Understanding the fan chart. Bank of England Quarterly Bulletin 38, 30--37.

Brown, R. L., J. Durbin, and J. M. Evans (1975). Techniques for testing the constancy of regression relationships over time (with discussion). Journal of the Royal Statistical Society B 37, 149--192.

Calzolari, G. (1987). Forecast variance in dynamic simulation of simultaneous equations models. Econometrica 55, 1473--1476.

Campbell, J. Y. and P. Perron (1991). Pitfalls and opportunities: What macroeconomists should know about unit roots. In O. J. Blanchard and S. Fischer (Eds.), NBER Macroeconomics annual 1991. Cambridge, MA: MIT press.

Chong, Y. Y. and D. F. Hendry (1986). Econometric evaluation of linear macro-economic models. Review of Economic Studies 53, 671--690. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Chow, G. C. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica 28, 591--605.

Clements, M. P. and D. F. Hendry (1994). Towards a theory of economic forecasting. In Hargreaves (1994), pp. 9--52.

Clements, M. P. and D. F. Hendry (1998a). Forecasting Economic Time Series. Cambridge: Cambridge University Press.

Clements, M. P. and D. F. Hendry (1998b). Forecasting Economic Time Series: The Marshall Lectures on Economic Forecasting. Cambridge: Cambridge University Press.

Clements, M. P. and D. F. Hendry (1999). Forecasting Non-stationary Economic Time Series. Cambridge, Mass.: MIT Press.

Coyle, D. (2001). Making sense of published economic forecasts. In D. F. Hendry and N. R. Ericsson (Eds.), Understanding Economic Forecasts, pp. 54--67. Cambridge, Mass.: MIT Press.

Cramer, J. S. (1986). Econometric Applications of Maximum Likelihood Methods. Cambridge: Cambridge University Press.

Cran, G. W., K. J. Martin, and G. E. Thomas (1977). A remark on algorithms. AS 63: The incomplete beta integral. AS 64: Inverse of the incomplete beta function ratio. Applied Statistics 26, 111--112.

D'Agostino, R. B. (1970). Transformation to normality of the null distribution of g1. Biometrika 57, 679--681.

Davidson, J. E. H., D. F. Hendry, F. Srba, and J. S. Yeo (1978). Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom. Economic Journal 88, 661--692. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Davidson, R. and J. G. MacKinnon (1993). Estimation and Inference in Econometrics. New York: Oxford University Press.

Dhrymes, P. J. (1984). Mathematics for Econometrics, (2nd ed.). New York: Springer-Verlag.

Doornik, J. A. (1995a). Econometric Computing. Oxford: University of Oxford. Ph.D Thesis.

Doornik, J. A. (1995b). Testing general restrictions on the cointegrating space. www.doornik.com, Nuffield College.

Doornik, J. A. (1996). Testing vector autocorrelation and heteroscedasticity in dynamic models. www.doornik.com, Nuffield College.

Doornik, J. A. (1998). Approximations to the asymptotic distribution of cointegration tests. Journal of Economic Surveys 12, 573--593. Reprinted in M. McAleer and L. Oxley (1999). Practical Issues in Cointegration Analysis. Oxford: Blackwell Publishers.

Doornik, J. A. (2013). Object-Oriented Matrix Programming using Ox (7th ed.). London: Timberlake Consultants Press.

Doornik, J. A. and H. Hansen (1994). A practical test for univariate and multivariate normality. Discussion paper, Nuffield College.

Doornik, J. A. and D. F. Hendry (1992). PCGIVE 7: An Interactive Econometric Modelling System. Oxford: Institute of Economics and Statistics, University of Oxford.

Doornik, J. A. and D. F. Hendry (1994). PcGive 8: An Interactive Econometric Modelling System. London: International Thomson Publishing, and Belmont, CA: Duxbury Press.

Doornik, J. A. and D. F. Hendry (2013). OxMetrics: An Interface to Empirical Modelling (7th ed.). London: Timberlake Consultants Press.

Doornik, J. A., D. F. Hendry, and B. Nielsen (1998). Inference in cointegrated models: UK M1 revisited. Journal of Economic Surveys 12, 533--572. Reprinted in M. McAleer and L. Oxley (1999). Practical Issues in Cointegration Analysis. Oxford: Blackwell Publishers.

Doornik, J. A. and R. J. O'Brien (2002). Numerically stable cointegration analysis. Computational Statistics & Data Analysis 41, 185--193.

Durbin, J. (1988). Maximum likelihood estimation of the parameters of a system of simultaneous regression equations. Econometric Theory 4, 159--170. Paper presented to the Copenhagen Meeting of the Econometric Society, 1963.

Engle, R. F. (1982). Autoregressive conditional heteroscedasticity, with estimates of the variance of United Kingdom inflation. Econometrica 50, 987--1007.

Engle, R. F. and C. W. J. Granger (1987). Cointegration and error correction: Representation, estimation and testing. Econometrica 55, 251--276.

Engle, R. F. and D. F. Hendry (1993). Testing super exogeneity and invariance in regression models. Journal of Econometrics 56, 119--139. Reprinted in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994.

Engle, R. F., D. F. Hendry, and J.-F. Richard (1983). Exogeneity. Econometrica 51, 277--304. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Engle, R. F., D. F. Hendry, and D. Trumble (1985). Small sample properties of ARCH estimators and tests. Canadian Journal of Economics 18, 66--93.

Ericsson, N. R., D. F. Hendry, and G. E. Mizon (1996). Econometric issues in economic policy analysis. Mimeo, Nuffield College, University of Oxford.

Ericsson, N. R., D. F. Hendry, and H.-A. Tran (1994). Cointegration, seasonality, encompassing and the demand for money in the United Kingdom. In C. Hargreaves (Ed.), Non-stationary Time-series Analysis and Cointegration, pp. 179--224. Oxford: Oxford University Press.

Ericsson, N. R. (1992). Cointegration, exogeneity and policy analysis. Journal of Policy Modeling 14. Special Issue.

Favero, C. and D. F. Hendry (1992). Testing the Lucas critique: A review. Econometric Reviews 11, 265--306.

Fletcher, R. (1987). Practical Methods of Optimization, (2nd ed.). New York: John Wiley & Sons.

Gill, P. E., W. Murray, and M. H. Wright (1981). Practical Optimization. New York: Academic Press.

Godfrey, L. G. (1988). Misspecification Tests in Econometrics. Cambridge: Cambridge University Press.

Goldfeld, S. M. and R. E. Quandt (1972). Non-linear Methods in Econometrics. Amsterdam: North-Holland.

Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424--438.

Haavelmo, T. (1943). The statistical implications of a system of simultaneous equations. Econometrica 11, 1--12.

Haavelmo, T. (1944). The probability approach in econometrics. Econometrica 12, 1--118. Supplement.

Hansen, H. and S. Johansen (1992). Recursive estimation in cointegrated VAR-models. Discussion paper, Institute of Mathematical Statistics, University of Copenhagen.

Hargreaves, C. (Ed.) (1994). Non-stationary Time-series Analysis and Cointegration. Oxford: Oxford University Press.

Harvey, A. C. (1990). The Econometric Analysis of Time Series, (2nd ed.). Hemel Hempstead: Philip Allan.

Hendry, D. F. (1971). Maximum likelihood estimation of systems of simultaneous regression equations with errors generated by a vector autoregressive process. International Economic Review 12, 257--272. Correction in 15, p.260.

Hendry, D. F. (1976). The structure of simultaneous equations estimators. Journal of Econometrics 4, 51--88. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.

Hendry, D. F. (1979). Predictive failure and econometric modelling in macro-economics: The transactions demand for money. In P. Ormerod (Ed.), Economic Modelling, pp. 217--242. London: Heinemann. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. (1986). Using PC-GIVE in econometrics teaching. Oxford Bulletin of Economics and Statistics 48, 87--98.

Hendry, D. F. (1987). Econometric methodology: A personal perspective. In T. F. Bewley (Ed.), Advances in Econometrics, pp. 29--48. Cambridge: Cambridge University Press. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. (1988). The encompassing implications of feedback versus feedforward mechanisms in econometrics. Oxford Economic Papers 40, 132--149. Reprinted in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. (1993). Econometrics: Alchemy or Science? Oxford: Blackwell Publishers.

Hendry, D. F. (1995). Dynamic Econometrics. Oxford: Oxford University Press.

Hendry, D. F. (2001). Modelling UK inflation, 1875--1991. Journal of Applied Econometrics 16, 255--275.

Hendry, D. F. and J. A. Doornik (1994). Modelling linear dynamic econometric systems. Scottish Journal of Political Economy 41, 1--33.

Hendry, D. F. and J. A. Doornik (2013). Empirical Econometric Modelling using PcGive: Volume I (7th ed.). London: Timberlake Consultants Press.

Hendry, D. F. and K. Juselius (2001). Explaining cointegration analysis: Part II. Energy Journal 22, 75--120.

Hendry, D. F. and H.-M. Krolzig (2003). New developments in automatic general-to-specific modelling. In B. P. Stigum (Ed.), Econometrics and the Philosophy of Economics, pp. 379--419. Princeton: Princeton University Press.

Hendry, D. F. and G. E. Mizon (1993). Evaluating dynamic econometric models by encompassing the VAR. In P. C. B. Phillips (Ed.), Models, Methods and Applications of Econometrics, pp. 272--300. Oxford: Basil Blackwell. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. and M. S. Morgan (1995). The Foundations of Econometric Analysis. Cambridge: Cambridge University Press.

Hendry, D. F. and A. J. Neale (1991). A Monte Carlo study of the effects of structural breaks on tests for unit roots. In P. Hackl and A. H. Westlund (Eds.), Economic Structural Change, Analysis and Forecasting, pp. 95--119. Berlin: Springer-Verlag.

Hendry, D. F., A. J. Neale, and F. Srba (1988). Econometric analysis of small linear systems using Pc-Fiml. Journal of Econometrics 38, 203--226.

Hendry, D. F., A. R. Pagan, and J. D. Sargan (1984). Dynamic specification. In Z. Griliches and M. D. Intriligator (Eds.), Handbook of Econometrics, Volume 2, pp. 1023--1100. Amsterdam: North-Holland. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. and J.-F. Richard (1982). On the formulation of empirical models in dynamic econometrics. Journal of Econometrics 20, 3--33. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press and in Hendry D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. and J.-F. Richard (1983). The econometric analysis of economic time series (with discussion). International Statistical Review 51, 111--163. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.

Hendry, D. F. and J.-F. Richard (1989). Recent developments in the theory of encompassing. In B. Cornet and H. Tulkens (Eds.), Contributions to Operations Research and Economics. The XXth Anniversary of CORE, pp. 393--440. Cambridge, MA: MIT Press. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.

Hendry, D. F. and F. Srba (1980). AUTOREG: A computer program library for dynamic econometric models with autoregressive errors. Journal of Econometrics 12, 85--102. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.

Hosking, J. R. M. (1980). The multivariate portmanteau statistic. Journal of the American Statistical Association 75, 602--608.

Hunter, J. (1992). Cointegrating exogeneity. Economics Letters 34, 33--35.

Johansen, S. (1988). Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12, 231--254. Reprinted in R.F. Engle and C.W.J. Granger (eds), Long-Run Economic Relationships, Oxford: Oxford University Press, 1991, 131--52.

Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551--1580.

Johansen, S. (1992a). Cointegration in partial systems and the efficiency of single-equation analysis. Journal of Econometrics 52, 389--402.

Johansen, S. (1992b). Testing weak exogeneity and the order of cointegration in UK money demand. Journal of Policy Modeling 14, 313--334.

Johansen, S. (1994). The role of the constant and linear terms in cointegration analysis of nonstationary variables. Econometric Reviews 13, 205--229.

Johansen, S. (1995a). Identifying restrictions of linear equations with applications to simultaneous equations and cointegration. Journal of Econometrics 69, 111--132.

Johansen, S. (1995b). Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.

Johansen, S. (1995c). A statistical analysis of cointegration for I(2) variables. Econometric Theory 11, 25--59.

Johansen, S. and K. Juselius (1990). Maximum likelihood estimation and inference on cointegration -- With application to the demand for money. Oxford Bulletin of Economics and Statistics 52, 169--210.

Johansen, S. and K. Juselius (1992). Testing structural hypotheses in a multivariate cointegration analysis of the PPP and the UIP for UK. Journal of Econometrics 53, 211--244.

Johansen, S. and K. Juselius (1994). Identification of the long-run and the short-run structure. An application to the ISLM model. Journal of Econometrics 63, 7--36.

Judge, G. G., W. E. Griffiths, R. C. Hill, H. Lütkepohl, and T.-C. Lee (1985). The Theory and Practice of Econometrics, (2nd ed.). New York: John Wiley.

Kelejian, H. H. (1982). An extension of a standard test for heteroskedasticity to a systems framework. Journal of Econometrics 20, 325--333.

Kiefer, N. M. (1989). The ET interview: Arthur S. Goldberger. Econometric Theory 5, 133--160.

Kiviet, J. F. (1986). On the rigor of some mis-specification tests for modelling dynamic relationships. Review of Economic Studies 53, 241--261.

Kiviet, J. F. and G. D. A. Phillips (1992). Exact similar tests for unit roots and cointegration. Oxford Bulletin of Economics and Statistics 54, 349--367.

Koopmans, T. C. (Ed.) (1950). Statistical Inference in Dynamic Economic Models. Number 10 in Cowles Commission Monograph. New York: John Wiley & Sons.

Ljung, G. M. and G. E. P. Box (1978). On a measure of lack of fit in time series models. Biometrika 65, 297--303.

Longley, J. W. (1967). An appraisal of least squares programs for the electronic computer from the point of view of the user. Journal of the American Statistical Association 62, 819--841.

Lütkepohl, H. (1991). Introduction to Multiple Time Series Analysis. New York: Springer-Verlag.

Magnus, J. R. and H. Neudecker (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley & Sons.

Majumder, K. L. and G. P. Bhattacharjee (1973a). Algorithm AS 63. The incomplete beta integral. Applied Statistics 22, 409--411.

Majumder, K. L. and G. P. Bhattacharjee (1973b). Algorithm AS 64. Inverse of the incomplete beta function ratio. Applied Statistics 22, 411--414.

Makridakis, S., S. C. Wheelwright, and R. J. Hyndman (1998). Forecasting: Methods and Applications (3rd ed.). New York: John Wiley and Sons.

Mizon, G. E. (1977). Model selection procedures. In M. J. Artis and A. R. Nobay (Eds.), Studies in Modern Economic Analysis, pp. 97--120. Oxford: Basil Blackwell.

Mizon, G. E. and J.-F. Richard (1986). The encompassing principle and its application to non-nested hypothesis tests. Econometrica 54, 657--678.

Molinas, C. (1986). A note on spurious regressions with integrated moving average errors. Oxford Bulletin of Economics and Statistics 48, 279--282.

Mosconi, R. and C. Giannini (1992). Non-causality in cointegrated systems: Representation, estimation and testing. Oxford Bulletin of Economics and Statistics 54, 399--417.

Ooms, M. (1994). Empirical Vector Autoregressive Modeling. Berlin: Springer-Verlag.

Osterwald-Lenum, M. (1992). A note with quantiles of the asymptotic distribution of the ML cointegration rank test statistics. Oxford Bulletin of Economics and Statistics 54, 461--472.

Pagan, A. R. (1987). Three econometric methodologies: A critical appraisal. Journal of Economic Surveys 1, 3--24. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press.

Pagan, A. R. (1989). On the role of simulation in the statistical evaluation of econometric models. Journal of Econometrics 40, 125--139.

Paruolo, P. (1996). On the determination of integration indices in I(2) systems. Journal of Econometrics 72, 313--356.

Pesaran, M. H., R. P. Smith, and J. S. Yeo (1985). Testing for structural stability and predictive failure: A review. Manchester School 53, 280--295.

Phillips, P. C. B. (1986). Understanding spurious regressions in econometrics. Journal of Econometrics 33, 311--340.

Phillips, P. C. B. (1991). Optimal inference in cointegrated systems. Econometrica 59, 283--306.

Pike, M. C. and I. D. Hill (1966). Logarithm of the gamma function. Communications of the ACM 9, 684.

Quandt, R. E. (1983). Computational methods and problems. In Z. Griliches and M. D. Intriligator (Eds.), Handbook of Econometrics, Volume 1, Chapter 12. Amsterdam: North-Holland.

Rahbek, A., H. C. Kongsted, and C. Jørgensen (1999). Trend-stationarity in the I(2) cointegration model. Journal of Econometrics 90, 265--289.

Rao, C. R. (1952). Advanced Statistical Methods in Biometric Research. New York: John Wiley.

Rao, C. R. (1973). Linear Statistical Inference and its Applications, (2nd ed.). New York: John Wiley & Sons.

Richard, J.-F. (1984). Classical and Bayesian inference in incomplete simultaneous equation models. In D. F. Hendry and K. F. Wallis (Eds.), Econometrics and Quantitative Economics. Oxford: Basil Blackwell.

Salkever, D. S. (1976). The use of dummy variables to compute predictions, prediction errors and confidence intervals. Journal of Econometrics 4, 393--397.

Schmidt, P. (1974). The asymptotic distribution of forecasts in the dynamic simulation of an econometric model. Econometrica 42, 303--309.

Shea, B. L. (1988). Algorithm AS 239: Chi-squared and incomplete gamma integral. Applied Statistics 37, 466--473.

Shenton, L. R. and K. O. Bowman (1977). A bivariate model for the distribution of √b1 and b2. Journal of the American Statistical Association 72, 206--211.

Sims, C. A. (1980). Macroeconomics and reality. Econometrica 48, 1--48. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press.

Spanos, A. (1986). Statistical Foundations of Econometric Modelling. Cambridge: Cambridge University Press.

Spanos, A. (1989). On re-reading Haavelmo: A retrospective view of econometric modeling. Econometric Theory 5, 405--429.

Thisted, R. A. (1988). Elements of Statistical Computing. Numerical Computation. New York: Chapman and Hall.

Toda, H. Y. and P. C. B. Phillips (1993). Vector autoregressions and causality. Econometrica 61, 1367--1393.

White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817--838.

Wooldridge, J. M. (1999). Asymptotic properties of some specification tests in linear models with integrated processes. In R. F. Engle and H. White (Eds.), Cointegration, Causality and Forecasting, pp. 366--384. Oxford: Oxford University Press.