These reference chapters have been taken from Volume II, and use the same chapter and section numbering as the printed version.
This part explains the statistics computed and reported by PcGive for dynamic systems (this chapter), cointegration tests (§15.9.1), cointegrated VAR analysis (Chapter 16), and model analysis (Chapter 17).
A brief summary of the underlying mathematics is given in this chapter. The order is similar to that in the computer program. We first briefly describe system formulation in §15.2 to establish notation, then system estimation in §15.3, followed by estimation output in §15.4 and graphic evaluation in §15.5, and dynamic analysis and I(1) and I(2) cointegration tests in §15.9. Section 15.10 considers testing, at the single-equation level as well as at the system level. Sections 15.9.1–16.0.1 discuss estimating the cointegrating space, related graphs, and tests of restrictions on the space. Finally, §15.11 considers the progress made during system and model development.
In PcGive, a linear system, often called the unrestricted reduced form (URF), takes the form:
y_{t} = ∑_{i=1}^{m} π_{i} y_{t-i} + ∑_{j=0}^{r} π_{m+1+j} z_{t-j} + v_{t},  v_{t} ~ IN_{n}[0, Ω],  t = 1, ..., T,   (eq:15.1)
where y_{t}, z_{t} are respectively n×1 and q×1 vectors of observations at time t on the endogenous and non-modelled variables. The {π_{i}} are unrestricted, except perhaps for columns of zeros, which would exclude certain y_{t-i} or z_{t-j} from the system. Hence each equation in the system has the same variables on the right-hand side. The orders m and r of the lag polynomial matrices for y and z should be specified so as to ensure that {v_{t}} is an innovation process against the available information when the {π_{i}} matrices are constant over t. Given a data set x_{t}, y_{t} is defined as the vector of endogenous variables, and (z_{t}, ..., z_{t-r}) must be set as non-modelled (so they need to be at least weakly exogenous for the {π_{i}}). A system in PcGive is formulated by:
A vector autoregression (VAR) arises when there are no z variables in the statistical system (eq:15.1) (q=0, but there could be a constant, seasonals or trend) and all y have the same lag length (no columns of π are zero).
Integrated systems can be transformed to equilibrium correction form, where all endogenous variables and their lags are transformed to differences, apart from the first lag:
Δy_{t} = P_{0} y_{t-1} + ∑_{i=1}^{m-1} Ψ_{i} Δy_{t-i} + ∑_{j=0}^{r} π_{m+1+j} z_{t-j} + v_{t}.   (eq:15.2)
Returning to the notation of (eq:15.1), a more compact way of writing the system is:
y_{t} = Π w_{t} + v_{t},   (eq:15.3)
where w contains z, lags of z, and lags of y: w_{t}' = (y_{t-1}', ..., y_{t-m}', z_{t}', ..., z_{t-r}'). This can be further condensed by writing Y' = (y_{1} y_{2} ... y_{T}), and W', V' correspondingly:
Y = WΠ' + V,   (eq:15.4)
in which Y' is (n×T), W' is (k×T) and Π is (n×k).
Since the {π_{i}} are unrestricted (except perhaps for excluding elements from w_{t}) the system (eq:15.1) can be estimated by multivariate least squares, either directly (OLS) or recursively (often denoted RLS). These estimators are straightforward multivariate extensions of the single equation methods. Analogously, estimation of (eq:15.1) requires v_{t}~ID_{n}(0,Ω), where Ω is constant over time. However, Ω may be singular owing to identities linking elements of x_{t}, and these are handled by estimating only the subset of equations corresponding to stochastic endogenous variables. If v_{t}~IN_{n}[0,Ω], OLS coincides with MLE; for notation, we note that the estimated coefficients are:
Π̂' = (W'W)^{-1} W'Y,  with residuals V̂ = Y - WΠ̂',
and estimated covariance matrix:
Ω̂ = V̂'V̂/(T-k),  with V̂[vec Π̂'] = Ω̂ ⊗ (W'W)^{-1}.
In the likelihood-based statistics, we shall scale by T:
Ω̃ = V̂'V̂/T.
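As a concrete numerical sketch of these formulae (this is not PcGive code; the data, dimensions and names are invented for illustration), the multivariate OLS estimator and both covariance scalings can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 200, 2, 3                      # sample size, equations, regressors

W = rng.normal(size=(T, k))              # stacked w_t' (T x k)
Pi = np.array([[0.5, 0.1, 0.0],
               [0.2, -0.3, 0.4]])        # true n x k coefficient matrix
V = 0.1 * rng.normal(size=(T, n))
Y = W @ Pi.T + V                         # Y = W Pi' + V

# multivariate OLS: Pi-hat' = (W'W)^{-1} W'Y
Pi_hat = np.linalg.solve(W.T @ W, W.T @ Y).T
resid = Y - W @ Pi_hat.T                 # V-hat

Omega_hat = resid.T @ resid / (T - k)    # degrees-of-freedom corrected
Omega_tilde = resid.T @ resid / T        # scaled by T (likelihood-based)
```

Note that the same (W'W)^{-1} applies to every equation, so the system estimator is just equation-by-equation OLS stacked side by side.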
A listing of the system output now follows. Items marked with a * are only printed on request, either automatically through settings in the Options dialog, or by using Further Output.
The coefficients Π̂, and their standard errors (V̂[vec Π̂']_{ii})^{1/2}. Any variables marked as unrestricted appear here too.
These statistics are conventionally calculated to determine whether individual coefficients are significantly different from zero:
t-value = π̂_{ij} / SE(π̂_{ij}),
where the null hypothesis H_{0} is π_{ij}=0. The null hypothesis is rejected if the probability of getting a value at least as large is less than 5% (or any other chosen significance level). This probability is given as:
p = Pr(|τ| ≥ |t-value|),
in which τ has a Student t-distribution with T-k degrees of freedom.
When H_{0} is true (and the model is otherwise correctly specified), a Student t-distribution is used since the sample size is often small, and we only have an estimate of the parameter's standard error; however, as the sample size increases, τ tends to a standard normal distribution under H_{0}. Large values of t reject H_{0}; but, in many situations, H_{0} may be of little interest to test. Also, selecting variables in a model according to their t-values implies that the usual (Neyman–Pearson) justification for testing is not valid (see Judge, Griffiths, Hill, Lütkepohl, and Lee, 1985, for example).
The square root of the residual variance for each equation:
σ̂_{i} = (Ω̂_{ii})^{1/2} = (RSS_{i}/(T-k))^{1/2}.
The RSS_{i} is (T-k)Ω̂_{ii}, that is, the i^{th} diagonal element of V̂'V̂.
The log-likelihood value is (including the constant K_{c}):

l = K_{c} - (T/2) log|Ω̃|,  where K_{c} = -(Tn/2)(1 + log 2π).   (eq:15.13)
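A quick numerical check of this definition, with an arbitrary illustrative Ω̃ (the values are invented):

```python
import numpy as np

T, n = 200, 2
Omega_tilde = np.array([[1.0, 0.3],
                        [0.3, 2.0]])     # illustrative V'V/T

Kc = -T * n / 2 * (1 + np.log(2 * np.pi))            # the constant K_c
loglik = Kc - T / 2 * np.log(np.linalg.det(Omega_tilde))
```

Only |Ω̃| enters beyond the constant, which is why likelihood comparisons between systems with the same variables reduce to comparisons of the determinants of the residual covariance matrices.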
Then l constitutes the highest attainable likelihood value in the class (eq:15.4) (unless either the set of variables or the lag structure is altered), and hence is the statistical baseline against which simplifications can be tested. In textbook econometrics, (eq:15.4) is called the unrestricted reduced form (URF) and is usually derived from a structural representation. Here, the process is reversed: the statistical system (eq:15.4) is first specified and tested for being a congruent representation; only then is a structural (parsimonious) interpretation sought. If, for example, (eq:15.4) is not congruent, then (eq:15.13) is not a valid baseline, and subsequent tests will not have appropriate distributions. In particular, any just-identified structural representation has the same likelihood value as (eq:15.4), and hence will be invalid if (eq:15.4) is invalid: the `validity' of imposing further restrictions via a model is then hardly of interest.
Define Y̌ as Y after removing the effects of the unrestricted variables, and let:
Ω̂_{0} = T^{-1} Y̌'Y̌.
PcGive reports:
Various measures of the goodness of fit of a system can be calculated. The two reported by PcGive are:
Reports R_{r}^{2} = 1 - |Ω̂|/|Ω̂_{0}|, which is an R^{2} based on the likelihood-ratio principle. For a single-equation system this statistic is identical to the conventional R^{2}.
Reports R_{m}^{2} = 1 - (1/n) tr(Ω̂ Ω̂_{0}^{-1}), which derives from the Lagrange-multiplier principle.
Note that these are relative to the unrestricted variables. Both measures coincide with the traditional R^{2} in a single equation, provided that the constant is the only unrestricted variable.
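Both measures are easy to compute from the two covariance matrices; the 2×2 matrices below are hypothetical illustrative values:

```python
import numpy as np

n = 2
Omega = np.array([[0.5, 0.1],
                  [0.1, 0.8]])           # residual covariance of the system
Omega0 = np.array([[2.0, 0.4],
                   [0.4, 1.5]])          # covariance with only unrestricted terms

R2_r = 1 - np.linalg.det(Omega) / np.linalg.det(Omega0)
R2_m = 1 - np.trace(Omega @ np.linalg.inv(Omega0)) / n
```

The determinant-based R_{r}^{2} reacts to the joint fit of all equations, while the trace-based R_{m}^{2} averages equation-by-equation improvements.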
Significance at 5% is marked with a *, at 1% with **. Reported are:
This uses Rao's F-approximation to test the significance of R_{r}^{2}, which amounts to testing the null hypothesis that all coefficients are zero, except those on the unrestricted variables. In a single-equation system, with only the constant unrestricted, this is identical to the reported F-statistic.
F-tests are shown for the significance of each column of Π̂, together with their probability values (inside square brackets) under the null hypothesis that the corresponding column of coefficients is zero. So these test whether the variable at hand is significant in the system. The statistics are F(n, T-k-n+1).
Further F-tests of general-to-specific system modelling are available through the progress report: see §15.11.
A typical element of this matrix is:
Ω̂_{ij} / (Ω̂_{ii} Ω̂_{jj})^{1/2}.
The diagonal reports the standard deviations of the URF residuals
Prints the correlation between y_{it} and ŷ_{it} for each equation i=1,...,n.
This is only reported when observations are withheld for static forecasting when the sample size is selected.
The 1-step forecast errors (from T+1 to T+H) are defined as:

e_{t} = y_{t} - Π̂ w_{t},  t = T+1, ..., T+H,   (eq:15.16)

with estimated variance:

V̂[e_{t}] = Ω̂ (1 + w_{t}'(W'W)^{-1} w_{t}).   (eq:15.17)
The forecast error variance matrix for a single step-ahead forecast is made up of a term for coefficient uncertainty and a term for innovation errors. Three types of parameter constancy tests are reported, in each case as a χ^{2}(nH) for n equations and H forecasts and an F(nH, T-k) statistic:
This is an index of numerical parameter constancy, ignoring both parameter uncertainty and intercorrelation between forecast errors at different time periods.
This test is similar to (a), but takes parameter uncertainty into account.
Here, V[E] is the full variance matrix of all forecast errors E, which takes both parameter uncertainty and intercorrelations between forecast errors into account.
The four statistics reported are the Schwarz criterion (SC), the HannanQuinn criterion (HQ), the Final Prediction Error (FPE) and the Akaike criterion (AIC). These can be defined as:
SC = log|Ω̃| + p log(T)/T,  HQ = log|Ω̃| + 2p log(log T)/T,  FPE = |Ω̃|((T+k)/(T-k))^{n},  AIC = log|Ω̃| + 2p/T,  with p = nk.   (eq:15.18)
Or, in terms of the log-likelihood:
SC = -2l/T + p log(T)/T,  HQ = -2l/T + 2p log(log T)/T,  AIC = -2l/T + 2p/T.   (eq:15.19)
When using Further Output, PcGive will first report (eq:15.18) followed by (eq:15.19). In the latter, the constant is included in the likelihood, resulting in different outcomes. In all other cases, PcGive will only report the values based on (eq:15.19). For a discussion of the use of these and related scalar measures to choose between alternative models in a class, see Judge, Griffiths, Hill, Lütkepohl, and Lee (1985) or Lütkepohl (1991).
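Assuming the standard determinant-based forms with p = nk total estimated coefficients (the exact penalty constants are an assumption here, not taken from PcGive output), the four criteria can be sketched as:

```python
import numpy as np

T, n, k = 100, 2, 4                      # sample size, equations, regressors/equation
Omega_tilde = np.array([[0.5, 0.1],
                        [0.1, 0.8]])     # illustrative residual covariance
p = n * k                                # total number of estimated coefficients

logdet = np.log(np.linalg.det(Omega_tilde))
SC = logdet + p * np.log(T) / T          # Schwarz criterion
HQ = logdet + 2 * p * np.log(np.log(T)) / T
AIC = logdet + 2 * p / T
FPE = ((T + k) / (T - k)) ** n * np.linalg.det(Omega_tilde)
```

Since log T > 2 log log T > 2 for moderate T, the penalties order as SC > HQ > AIC, so SC favours the most parsimonious specification.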
This reports the sample means and sample standard deviations of the selected variables, followed by the correlation matrix.
The k×k variancecovariance matrix of the estimated parameters. Along the diagonal, we have the variance of each estimated coefficient, and off the diagonal, the covariances.
Reports the individual forecasts with forecast error standard errors. If the actual values are available, the forecast error and tvalue are also printed.
Additional statistics are reported if more than two forecast errors are available:
These are the individual test statistics underlying ξ_{1} and ξ_{2} above, for i=1,...,H:
ξ_{1,i} = e_{T+i}' Ω̂^{-1} e_{T+i}  and  ξ_{2,i} = e_{T+i}' V̂[e_{T+i}]^{-1} e_{T+i},
this time distributed as χ^{2}(n). They can also be viewed graphically.
RMSE = [ (1/H) ∑_{t=1}^{H} (y_{t} - f_{t})^{2} ]^{1/2},  MAPE = (100/H) ∑_{t=1}^{H} |(y_{t} - f_{t})/y_{t}|,

where the forecast horizon is H, y_{t} the actual values, and f_{t} the forecasts.
RMSE and MAPE are measures of forecast accuracy, see, e.g. Makridakis, Wheelwright, and Hyndman (1998, Ch. 2). Note that the MAPE can be infinity if any y_{t}=0, and is different when the model is reformulated in differences. For more information see Clements and Hendry (1998a).
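A direct sketch of both accuracy measures on invented values:

```python
import numpy as np

y = np.array([10.0, 12.0, 9.0, 11.0])    # actual values
f = np.array([9.5, 12.5, 8.0, 11.5])     # forecasts
H = len(y)

rmse = np.sqrt(np.mean((y - f) ** 2))
mape = 100 / H * np.sum(np.abs((y - f) / y))
```

The division by y_{t} inside MAPE shows directly why it blows up near y_{t} = 0 and why it changes when the model is reformulated in differences.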
Graphic analysis focuses on graphical inspection of individual equations. Let y_{t}, ŷ_{t} denote respectively the actual (that is, observed) values and the fitted values of the selected equation, with residuals v̂_{t} = y_{t} - ŷ_{t}, t = 1, ..., T. If H observations are retained for forecasting, then ŷ_{T+1}, ..., ŷ_{T+H} are the 1-step forecasts.
Many different types of graph are available:
This is a graph showing the fitted ( ŷ_{t}) and actual values ( y_{t}) of the dependent variable over time, including the forecast period.
ŷ_{t} against y_{t}, including the forecast period.
( v̂_{t}/σ̃ ) , where σ̃ ^{2} is the estimated equation error variance, plotted over t=1,...,T+H.
The 1-step forecasts can be plotted in a graph over time: y_{t} and ŷ_{t}, t = T+1, ..., T+H, are shown with error bars of ±2SE(e_{t}), centred on ŷ_{t} (that is, an approximate 95% confidence interval for the 1-step forecast). Corresponding to (eq:15.16) the forecast errors are e_{t} = y_{t} - ŷ_{t}, and SE[e_{t}] is derived from (eq:15.17). The error bars can be replaced by bands, set in Options, and the number of pre-forecast observations can be selected.
Plots the histogram of the standardized residuals, the estimated density f̂_{v}(·) and a normal distribution with the same mean and variance (more details are in the OxMetrics book).
This plots the series {r_{j}}, where r_{j} is the correlation coefficient between v̂_{t} and v̂_{t-j}. The length of the correlogram is specified by the user, leading to a figure that shows (r_{1}, r_{2}, ..., r_{s}) plotted against (1, 2, ..., s), where for any j:
r_{j} = [∑_{t=j+1}^{T} (v̂_{t} - v̄)(v̂_{t-j} - v̄)] / [∑_{t=1}^{T} (v̂_{t} - v̄)^{2}],   (eq:15.21)
where v̄ is the sample mean of v̂_{t}.
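The autocorrelations r_{j} can be sketched in a few lines; the function below mirrors the definition of r_{j}, and the tiny series is invented purely for illustration:

```python
import numpy as np

def correlogram(v, s):
    """Residual autocorrelations r_1, ..., r_s about the sample mean."""
    d = v - v.mean()
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[j:] * d[:-j]) / denom for j in range(1, s + 1)])

v = np.array([1.0, 2.0, 3.0, 4.0])       # tiny illustrative residual series
r = correlogram(v, 2)
```

Note that the denominator always runs over the full sample, so |r_{j}| ≤ 1 and the coefficients shrink mechanically as j approaches T.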
This plots the partial autocorrelation function (see the OxMetrics book).
These are the Chow tests using V[e] of (eq:15.20), available from T+1 to T+H, together with a fixed 5% critical value from χ^{2}(n). These are not scaled by their critical values, unlike the graphs in recursive graphics.
( v̂_{t}) over t;
This plots the estimated spectral density (see the OxMetrics book) using v̂_{t} as the x_{t} variable.
Shows a QQ plot of the residuals.
The histogram of the scaled residuals and the non-parametrically estimated density f̂_{v}(·) are graphed using the settings described in the OxMetrics book.
Plots the distribution based on the nonparametrically estimated density.
Let v̂_{it}, v̂_{jt} denote the residuals of equation i and j. This graph shows the crossplot of v̂_{it} against v̂_{jt} for all marked equations (i≠j), over t=1,...,T+H.
The residuals can be saved to the database for further inspection.
When recursive OLS (RLS) is selected, the Π matrix is estimated at each t (1 ≤ M ≤ t ≤ T), where M is user-selected. Unlike previous versions, there is no requirement that k ≤ M. So OLS is used for observations 1, ..., M-1, and RLS for M, ..., T. The calculations proceed exactly as for the single-equation case, since the updating formulae are unaffected by Y being a matrix rather than a vector. Indeed, the relative cost over single-equation RLS falls; but the huge number of statistics (nk(T-M+1) coefficients alone) cannot be stored in PcGive. Consequently, the graphical output omits coefficients and their t-values. Otherwise the output is similar to that for single equations, but now available for each equation in the system. In addition, system graphs are available, either of the log-likelihood or of the system Chow tests. At each t, system estimates are available, for example coefficients Π_{t} and residuals v_{t} = y_{t} - Π_{t} w_{t}. Unrestricted variables have their coefficients fixed at the full-sample values. Define V_{t}' as (v_{1} v_{2} ... v_{t}) and let y_{t}, v_{t}, w_{t} denote the endogenous variable, residuals and regressors of equation i at time t.
The following graphs are available for the system (the information can be printed on request):
The residual sum of squares RSS_{t} for equation i is the i^{th} diagonal element of V̂_{t}'V̂_{t} for t=M,...,T.
The 1-step residuals v̂_{t} are shown bordered by 0 ± 2σ̃_{t} over M, ..., T. Points outside the 2-standard-error region are either outliers or are associated with coefficient changes.


By definition, l̂_{t} ≥ l̂_{t+1}. This follows from the fact that both can be derived from a system estimated up to t+1, where l̂_{t} obtains from the system with a dummy for the last observation, so that l̂_{t+1} is the restricted likelihood. On the other hand, l̃_{t} ≥ l̃_{t+1} need not hold, as that would still require the sample-size correction as employed in l̂_{t}. Note that the constant is excluded from the log-likelihood here.
1-step forecast tests are F(1, t-k-1) under the null of constant parameters, for t = M, ..., T. A typical statistic is calculated as:
(RSS_{t} - RSS_{t-1}) / [RSS_{t-1}/(t-k-1)].   (eq:15.23)
Normality of y_{t} is needed for this statistic to be distributed as an F.
Break-point F-tests are F(T-t+1, t-k-1) for t = M, ..., T. These are, therefore, sequences of Chow tests, and are called N↓ because the number of forecasts goes from T-M+1 down to 1. When the forecast period exceeds the estimation period, this test is not necessarily optimal relative to the covariance test based on fitting the model separately to the split samples. A typical statistic is calculated as:
[(RSS_{T} - RSS_{t-1})/(T-t+1)] / [RSS_{t-1}/(t-k-1)].   (eq:15.24)
This test is closely related to the CUSUMSQ statistic in Brown, Durbin, and Evans (1975).
Forecast F-tests are F(t-M+1, M-k-1) for t = M, ..., T, and are called N↑ as the forecast horizon increases from M to t. This tests the model over 1 to M-1 against an alternative which allows any form of change over M to T. Thus, unless M > k, blank graphs will result. A typical statistic is calculated as:
[(RSS_{t} - RSS_{M-1})/(t-M+1)] / [RSS_{M-1}/(M-k-1)].   (eq:15.25)
This uses Rao's Fapproximation, with the R^{2} computed as:
R^{2} = 1 - |V̂_{t-1}'V̂_{t-1}| / |V̂_{t}'V̂_{t}|.
This uses Rao's Fapproximation, with the R^{2} computed as:
R^{2} = 1 - |V̂_{t-1}'V̂_{t-1}| / |V̂_{T}'V̂_{T}|.
This uses Rao's Fapproximation, with the R^{2} computed as:
R^{2} = 1 - |V̂_{M-1}'V̂_{M-1}| / |V̂_{t}'V̂_{t}|.
The statistics in (4) and (5) are variants of Chow (1960) tests: they are scaled by one-off critical values from the F-distribution at any selected probability level as an adjustment for changing degrees of freedom, so that the significance values become a straight line at unity. Selecting a probability of 0 or 1 results in unscaled statistics. Note that the first and last values of (eq:15.23) respectively equal the first value of (eq:15.25) and the last value of (eq:15.24); the same relation holds for the system tests. When the system tests of (5) are computed for a single-equation system, they are identical to the tests computed under (4).
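As an illustration of how one member of these sequences is built from recursive residual sums of squares (a sketch assuming the standard RSS form of the break-point Chow statistic; the function and numbers are invented, not PcGive internals):

```python
def chow_breakpoint(rss_full, rss_sub, T, t, k):
    # F(T-t+1, t-k-1): constancy over t..T, comparing the RSS over the
    # full sample with the RSS from estimation up to t-1 (illustrative)
    num = (rss_full - rss_sub) / (T - t + 1)
    den = rss_sub / (t - k - 1)
    return num / den

F_break = chow_breakpoint(rss_full=12.0, rss_sub=8.0, T=100, t=81, k=4)
```

Evaluating this at every t = M, ..., T produces the N↓ sequence described above; the 1-step and N↑ sequences differ only in which two RSS values are compared.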
Dynamic (or multi-period, or ex ante) system forecasts can be graphed. Commencing from period T as initial conditions:
ŷ_{T+h} = ∑_{i=1}^{m} π̂_{i} ŷ_{T+h-i} + ∑_{j=0}^{r} π̂_{m+1+j} z_{T+h-j},  h = 1, ..., H,  with ŷ_{T+h-i} = y_{T+h-i} for h ≤ i.   (eq:15.29)
Such forecasts require data on (z_{T+1}, ..., z_{T+H}) for H periods ahead (but the future values of the ys are not needed), and to be meaningful also require that z_{t} is strongly exogenous and that Π remains constant. Dynamic forecasts can be viewed with or without `error bars' (or bands) based on the equation error-variance matrix only, where the variance estimates are given by the (n×n) top-left block of:
∑_{i=0}^{h-1} D̂^{i} Ω̃_{a} D̂^{i}',  where Ω̃_{a} is the (nm×nm) matrix with Ω̃ in the top-left block and zeros elsewhere.
Optionally, parameter uncertainty can be taken into account when computing the forecast error variances (but not for hstep forecasts): this is allowed only when there are no unrestricted variables. Using the companion matrix, which is (nm×nm):
D̂ = ( π̂_{1} ... π̂_{m-1} π̂_{m} ; I_{n(m-1)} 0 ),

with the blocks π̂_{1}, ..., π̂_{m} in the first n rows, and I_{n(m-1)} followed by a zero block below.
Thus, uncertainty owing to the parameters being estimated is presently ignored (compare this to the parameter constancy tests based on the 1step forecasts). If q>0, the nonmodelled variables could be perturbed as in `scenario studies' on nonlinear models, but here with a view to assessing the robustness or fragility of ex ante forecasts to possible changes in the conditioning variables. If such perturbations alter the characteristics of the {z_{t}} process, super exogeneity is required.
It is also possible to graph hstep forecasts, where h≤H. This uses (eq:15.29), but with:
ŷ_{t-i} = y_{t-i}  whenever  t-i ≤ max(T, t-h).
To be consistent with the definition of h-step forecasts, what is graphed is the sequence of 1, ..., h-1 step forecasts for T+1, ..., T+h-1, followed by h-step forecasts from T+h, ..., T+H. In other words: up to T+h these are dynamic forecasts, and from then on h-step forecasts up to T+H. Thus, unless there are available data not used in estimation, the h-step forecasts are just dynamic forecasts up to the value of h: such data can be reserved by using a shorter estimation sample, or by setting a non-zero number of forecasts. After h forecasts, the forecast error variance remains constant at ∑_{i=0}^{h-1} D̂^{i} Ω̃_{a} D̂^{i}' (with Ω̃_{a} denoting Ω̃ padded with zeros to nm×nm). For example, 1-step forecasts use ŷ_{t-i} = y_{t-i} for t-i ≤ max(T, t-1), and hence never use forecasted values (in this case max(T, t-1) = t-1, as t ≥ T+1). The 1-step forecast error variance used here is Ω̃, which differs from (eq:15.17) in that it ignores the parameter uncertainty. Selecting h = H yields the dynamic forecasts.
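A minimal sketch of the forecast recursion and the companion-matrix variance accumulation for a closed system (coefficients, initial state and dimensions are invented; this is not PcGive's internal code):

```python
import numpy as np

n, m = 2, 2
pi1 = np.array([[0.5, 0.1],
                [0.0, 0.4]])
pi2 = np.array([[0.2, 0.0],
                [0.1, 0.1]])
Omega = 0.1 * np.eye(n)                  # illustrative error covariance

# companion matrix D (nm x nm)
D = np.zeros((n * m, n * m))
D[:n, :n] = pi1
D[:n, n:] = pi2
D[n:, :n] = np.eye(n * (m - 1))

# dynamic forecasts from the stacked state (y_T', y_{T-1}')'
state = np.array([1.0, -0.5, 0.8, 0.2])
H = 4
forecasts = []
for _ in range(H):
    state = D @ state
    forecasts.append(state[:n].copy())

# h-step error variance: top-left n x n block of sum_i D^i Omega_a D^i'
Omega_a = np.zeros((n * m, n * m))
Omega_a[:n, :n] = Omega                  # Omega padded with zeros
Vh = np.zeros((n * m, n * m))
Di = np.eye(n * m)
for _ in range(H):
    Vh += Di @ Omega_a @ Di.T
    Di = D @ Di
V_topleft = Vh[:n, :n]
```

Iterating the companion matrix both generates the forecasts and accumulates the error variance, which grows with the horizon and converges when the eigenvalues of D lie inside the unit circle.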
To summarize, the following graphs are available:
Graphs can be without standard errors; or standard errors can be plotted as the forecast ±2SE, either errorvariance based or parametervariance based (i.e., full estimation uncertainty, if no unrestricted variables).
Graphs can be without standard errors; or errorvariance based standard errors can be plotted as the forecast ±2SE.
The forecasts and standard errors can be printed. Although dynamic forecasting is not available for a system with identities, it can be obtained by mapping the system to a model of itself and specifying the identity equations.
The remaining forecast options, as discussed in Volume I, extend in a straightforward manner to multivariate models.
The system can be dynamically simulated from any starting point within sample, ending at the last observation used in the estimation. Computation is as in (eq:15.29), with T+1 replaced by the chosen withinsample starting point. Given the critiques of dynamic simulation as a method of evaluating econometric systems in Hendry and Richard (1982) and Chong and Hendry (1986), this option is included to allow users to see how misleading simulation tracks can be as a guide to selecting systems. Also see Pagan (1989). Like dynamic forecasting, dynamic simulation is not available for a system with identities, but can be obtained by mapping the system to a model of itself and specifying the identity equations.
Let y_{t}, ŷ_{t}, ŝ_{t} denote respectively the actual (that is, observed) values, the fitted values (from the estimation) and the simulated values of the selected equation, t=M,...,M+H. H is the number of simulated values, starting from M, 1≤M<T.
Four different types of graph are available:
This is a graph showing the simulated ( ŝ_{t}) and actual values ( y_{t}) of the dependent variable over time.
Crossplot of ŷ_{t} against ŝ_{t}.
ŷ_{t} and ŝ_{t} against time.
Graphs (y_{t} - ŝ_{t}) over time.
Impulse response analysis disregards the nonmodelled variables and sets the history to zero, apart from the initial values i_{1}:
î_{t} = ∑_{i=1}^{m} π̂_{i} î_{t-i},  t = 2, ..., H,
where î_{1} are the initial values, and î_{t}=0 for t≤0. This generates n^{2} graphs, where the j^{th} set of n graphs gives the response of the n endogenous variables to the j^{th} initial values. These initial values i_{1,j} for the j^{th} set of graphs can be chosen as follows:
i_{1,j}=e_{j}: 1 for the j^{th} variable, 0 otherwise.
i_{1,j}=σ̃ _{j}: the j^{th} residual standard error for the j^{th} variable, 0 otherwise.
Take the Choleski decomposition of Ω̃ , Ω̃ =PP', so that P=( p_{1}...p_{n}) has zeros above the diagonal. The orthogonalized initial values are i_{1,j}=p_{j}. Thus, the outcome depends on the ordering of the variables.
Graphing is optionally of the accumulated response: ∑_{t=1}^{h}î_{t}, h=1,...,H.
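The impulse recursion with Choleski-orthogonalized initial values can be sketched as follows (a first-order system with invented coefficients; not PcGive code):

```python
import numpy as np

n = 2
pi1 = np.array([[0.6, 0.1],
                [0.0, 0.5]])             # illustrative VAR(1) coefficients
Omega = np.array([[1.0, 0.4],
                  [0.4, 2.0]])

P = np.linalg.cholesky(Omega)            # P with zeros above the diagonal
H = 10
resp = np.zeros((n, H, n))               # resp[:, h, j]: response at horizon h
for j in range(n):                       # ... to orthogonalized shock p_j
    i_t = P[:, j].copy()
    for h in range(H):
        resp[:, h, j] = i_t
        i_t = pi1 @ i_t
```

Reordering the variables changes P, and hence the orthogonalized responses, which is the dependence on the variable ordering noted above.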
After estimation, a dynamic analysis of the unrestricted reduced form (system) can be performed. Consider the system (eq:15.2), but replace the π_{m+j+1} by Γ_{j}:
y_{t} = ∑_{i=1}^{m} π_{i} y_{t-i} + ∑_{j=0}^{r} Γ_{j} z_{t-j} + v_{t},   (eq:15.34)
with y_{t} (n×1) and z_{t} (q×1). Use the lag operator L, defined as Ly_{t} = y_{t-1}, to write this as:
y_{t} = π(L) y_{t} + Γ(L) z_{t} + v_{t},  with π(L) = ∑_{i=1}^{m} π_{i} L^{i} and Γ(L) = ∑_{j=0}^{r} Γ_{j} L^{j}.   (eq:15.35)
So π(1) = π_{1} + ... + π_{m}, with m the longest lag on the endogenous variable(s); and Γ(1) = Γ_{0} + ... + Γ_{r}, with r the longest lag on the non-modelled variable(s). P̂_{0} = π̂(1) - I_{n} can be inverted only if it is of rank p = n, in which case, for q > 0, y and z are fully cointegrated. If p < n, only a subset of the ys and zs are cointegrated. If P̂_{0} can be inverted, we can write the estimated static long-run solution as:
ŷ = -P̂_{0}^{-1} Γ̂(1) z.   (eq:15.36)
If q=0, the system is closed (that is, a VAR), and (eq:15.36) is not defined. However, P_{0} can still be calculated, and then p characterizes the number of cointegrating vectors linking the ys. We use ^{+} to denote that the outcome is reported only if there are nonmodelled variables.
If there are no identities PcGive computes:
Dynamic analysis is not available for a system with identities, but is available again after estimating the simultaneous-equations model.
When the system is closed in the endogenous variables, express P_{0} in (eq:15.2) as αβ', where α and β are ( n×p) matrices of rank p. Although v_{t}~IN_{n}[0,Ω], and so is stationary, the n variables in y_{t} need not all be stationary. The rank p of P_{0} determines how many linear combinations of y_{t} are stationary. If p=n, all variables in y_{t} are stationary, whereas p=0 implies that Δy_{t} is stationary. For 0<p<n, there are p cointegrated (stationary) linear combinations of y_{t}. The rank of P_{0} is estimated using the maximum likelihood method proposed by Johansen (1988), summarized here.
First, partial out from Δy_{t} and y_{t-1} in (eq:15.2) the effects of the lagged differences (Δy_{t-1}, ..., Δy_{t-m+1}) and any variables classified as unrestricted (usually the Constant or Trend, but any other variable is allowed, as discussed below). This yields the residuals R_{0t} and R_{1t} respectively. Next compute the second moments of all these residuals, denoted S_{00}, S_{01} and S_{11}, where:
S_{ij} = T^{-1} ∑_{t=1}^{T} R_{it} R_{jt}',  i, j = 0, 1.
Now solve |λS_{11} - S_{10}S_{00}^{-1}S_{01}| = 0 for the eigenvalues 1 > λ̂_{1} > ... > λ̂_{p} > ... > λ̂_{n} > 0 and the corresponding eigenvectors:
β̂ = (v̂_{1}, ..., v̂_{p}),  where V̂ = (v̂_{1}, ..., v̂_{n}) is normalized such that V̂'S_{11}V̂ = I_{n}.
Then, tests of the hypothesis of p cointegrating vectors can be based on the trace statistic:
η_{p} = -T ∑_{i=p+1}^{n} log(1 - λ̂_{i}),  p = 0, 1, ..., n-1.
The cointegrating combinations β'y_{t-1} are the I(0) linear combinations of the I(1) variables which can be used as equilibrium correction mechanisms (ECMs).
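A self-contained sketch of the eigenvalue problem and trace statistics on simulated data (it uses one lag only and a demeaning step in place of the general partialling-out, and omits the small-sample and p-value refinements PcGive applies):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 200, 2

# simulate a bivariate I(1) system with one cointegrating vector (1, -1)'
y = np.zeros((T + 1, n))
for t in range(1, T + 1):
    ecm = y[t - 1, 0] - y[t - 1, 1]      # stationary combination
    y[t] = y[t - 1] + np.array([-0.3 * ecm, 0.3 * ecm]) \
        + 0.5 * rng.normal(size=n)

dy = np.diff(y, axis=0)                  # Delta y_t
ylag = y[:-1]                            # y_{t-1}

# partial out the unrestricted constant (demean), giving R0 and R1
R0 = dy - dy.mean(axis=0)
R1 = ylag - ylag.mean(axis=0)

S00 = R0.T @ R0 / T
S01 = R0.T @ R1 / T
S11 = R1.T @ R1 / T

# eigenvalues solving |lambda S11 - S10 S00^{-1} S01| = 0
A = np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01))
lam = np.sort(np.linalg.eigvals(A).real)[::-1]

# trace statistics eta_p = -T sum_{i=p+1}^{n} log(1 - lambda_i)
eta = np.array([-T * np.log(1 - lam[p:]).sum() for p in range(n)])
```

With one cointegrating relation, the first eigenvalue is well away from zero, so eta[0] (testing rank 0) is large relative to eta[1] (testing rank ≤ 1).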
Any nonendogenous variables z_{t} can enter in two ways:
If lagged values z_{t}, ..., z_{t-m} enter, reparametrize as Δz_{t}^{u}, ..., Δz_{t-m+1}^{u}, z_{t-1}^{r}.
Output of the I(1) cointegration analysis is:


The test statistics η_{p} for H(rank ≤ p) are listed with p-values based on Doornik (1998); * and ** mark significance at the 95% and 99% levels. Testing commences at H(rank = 0), and stops at the first insignificant statistic.
The asymptotic pvalues are available for the following cases:
Hypothesis  Constant  Trend 
H_{ql}(p)  unrestricted  unrestricted 
H_{l}(p)  unrestricted  restricted 
H_{lc}(p)  unrestricted  none 
H_{c}(p)  restricted  none 
H_{z}(p)  none  none 
In the I(2) analysis, which requires at least two lags of the endogenous variables, there is potentially an additional reduced-rank restriction on the long-run matrix Γ of the model in first differences (equilibrium-correction form):
α_{⊥}'Γβ_{⊥} = ξη',
where ξ and η are (n-p)×s matrices. In the analysis we have the parameter counts:

The test statistics are

For example, when n=4, PcGive will print a table consisting of eigenvalues λ_{p+1}, λ_{p+1,s+1}, test statistics Q_{p} and S_{p,s}, and p-values for p = 0, ..., 3 and n-p-s = 1, ..., 4, as follows:

Hypothesis testing normally proceeds down the columns, from top left to bottom right, stopping at the first insignificant test statistic.
Many test statistics in PcGive have either a χ^{2} distribution or an F distribution. F-tests are usually reported as:
F(num,denom) = Value [Probability] /*/**
for example:
F(1, 155) = 5.0088 [0.0266] *
where the test statistic has an F-distribution with one degree of freedom in the numerator and 155 in the denominator. The observed value is 5.0088, and the probability of getting a value of 5.0088 or larger under this distribution is 0.0266. This is less than 5% but more than 1%, hence the single star. Significant outcomes at a 1% level are shown by two stars.
χ^{2} tests are also reported with probabilities, as for example:
Normality Chi^{2}(2)= 2.1867 [0.3351]
The 5% χ^{2} critical value with two degrees of freedom is 5.99, so here normality is not rejected (alternatively, Prob(χ^{2} ≥ 2.1867) = 0.3351, which is more than 5%).
The probability values for the F-test are calculated using an algorithm based on Majunder and Bhattacharjee (1973a) and Cran, Martin, and Thomas (1977). Those for the χ^{2} are based on Shea (1988). The significance points of the F-distribution derive from Majunder and Bhattacharjee (1973b).
Some tests take the form of a likelihood-ratio (LR) test. If l is the unrestricted, and l_{0} the restricted, log-likelihood, then under the null hypothesis that the restrictions are valid, 2(l - l_{0}) has a χ^{2}(s) distribution, with s the number of restrictions imposed (so model l_{0} is nested in l).
Many diagnostic tests are done through an auxiliary regression. In the case of single-equation tests, they take the form of TR^{2} for the auxiliary regression, so that they are asymptotically distributed as χ^{2}(s) under their nulls, and hence have the usual additive property for independent χ^{2}s. In addition, following Harvey (1990) and Kiviet (1986), F-approximations of the form:
F = [R^{2}/s] / [(1 - R^{2})/(T-k-s)],  approximately distributed as F(s, T-k-s),
are calculated because they may be better behaved in small samples.
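For instance (values invented), and assuming the standard F-form F = (R²/s)/((1-R²)/(T-k-s)), the χ² and F versions of such a test relate as follows:

```python
# chi-square (TR^2) form of an LM diagnostic and its F-approximation
T, k, s = 100, 5, 4
R2 = 0.08                                # R^2 of the auxiliary regression

lm_chi2 = T * R2                         # asymptotically chi^2(s)
f_stat = (R2 / s) / ((1 - R2) / (T - k - s))
```

Both use the same auxiliary-regression R², but the F form adjusts for the degrees of freedom consumed by the original regressors, which matters most in small samples.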
Whenever the vector tests are implemented through an auxiliary multivariate regression, PcGive uses vector analogues of the χ^{2} and F statistics. The first is an LM test in the auxiliary system, defined as TnR_{m}^{2}, the second uses the F approximation based on R_{r}^{2}. The vector tests reduce to the singleequation tests in a oneequation system. All tests are summarized below.
Diagnostic testing in PcGive is performed at two levels: individual equations and the system as a whole. Individual equation diagnostics take the residuals from the system, and treat them as from a single equation, ignoring that they form part of a system. Usually this means that they are only valid if the remaining equations are problemfree.
This is a degrees-of-freedom corrected version of the Box and Pierce (1970) statistic. It is only a valid test in a single equation with strongly exogenous variables. If s is the chosen lag length and m the lag length of the dependent variable, values ≥ 2(s-m) could indicate residual autocorrelation. Conversely, small values of this statistic should be treated with caution, as residual autocorrelations are biased towards zero when lagged dependent variables are included in econometric equations. An appropriate test for residual autocorrelation is provided by the LM test for autocorrelated residuals. The autocorrelation coefficients r_{j}, see (eq:15.21), are also reported.
This test is performed through the auxiliary regression of the residuals on the original variables and lagged residuals (missing lagged residuals at the start of the sample are replaced by zero, so no observations are lost). Unrestricted variables are included in the auxiliary regression. The null hypothesis is no autocorrelation, which would be rejected if the test statistic is too high. This LM test is valid for systems with lagged dependent variables and diagonal residual autocorrelation, whereas neither the Durbin–Watson statistic nor the residual autocorrelations provide a valid test in that case. The χ^{2} and F-statistic are shown, as are the error autocorrelation coefficients, which are the coefficients of the lagged residuals in the auxiliary regression.
This is the ARCH test (AutoRegressive Conditional Heteroscedasticity: see Engle, 1982), which in the present form tests the joint significance of lagged squared residuals in the regression of squared residuals on a constant and lagged squared residuals. The χ^{2} and F-statistic are shown, in addition to the ARCH coefficients, which are the coefficients of the lagged squared residuals in the auxiliary regression.
This is the test proposed by Doornik and Hansen (1994), and amounts to testing whether the skewness and kurtosis of the residuals correspond to those of a normal distribution. Before reporting the actual test, PcGive reports the following statistics of the residuals: mean (0 for the residuals), standard deviation, skewness (0 in a normal distribution), excess kurtosis (0 in a normal distribution), minimum and maximum.
This test is based on White (1980), and involves an auxiliary regression of the squared residuals on the original regressors and all their squares. The null is unconditional homoscedasticity, and the alternative is that the variance of the error process depends on the regressors and their squares. The output comprises TR^{2}, the F-test equivalent, and the coefficients of the auxiliary regression plus their individual t-statistics, to help highlight problem variables. Unrestricted variables are excluded from the auxiliary regression, but a constant is always included. Variables that are redundant when squared, or collinear, are automatically removed.
The present incarnation of PcGive has various formal system misspecification tests for withinsample congruency.
This is the multivariate equivalent of the singleequation portmanteau statistic (again using a smallsample correction), and only a valid asymptotic test in a VAR.
Lagged residuals (with missing observations for lagged residuals set to zero) are partialled out from the original regressors, and the whole system is reestimated, providing a Lagrangemultiplier test based on comparing the likelihoods for both systems.
This is the multivariate equivalent of the aforementioned single equation normality test. It checks whether the residuals at hand are normally distributed as:
v_{t} ~ IN_{n}[0, Ω],
by checking their skewness and kurtosis. A χ^{2}(2n) test for the null hypothesis of normality is reported, in addition to the transformed skewness and kurtosis of the rotated components.
This test amounts to a multivariate regression of all error variances and covariances on the original regressors and their squares. The test is χ^{2}(sn(n+1)/2), where s is the number of nonredundant added regressors (collinear regressors are automatically removed). The null hypothesis is no heteroscedasticity, which would be rejected if the test statistic is too high. Note that regressors that were classified as unrestricted are excluded.
This test is similar to the heteroscedasticity test, but now cross-products of the regressors are added as well. Again, the null hypothesis is no heteroscedasticity (the name functional form was used in version 8 of PcGive).
Writing θ̂ = vec Π̂', with corresponding variance-covariance matrix V[θ̂], we can test for (non-)linear restrictions of the form:

f(θ) = 0.

The null hypothesis H_{0}: f(θ)=0 will be tested against H_{1}: f(θ)≠0 through a Wald test:

w = f(θ̂)' ( Ĵ V[θ̂] Ĵ' )^{−1} f(θ̂),

where J is the Jacobian matrix of the transformation: J = ∂f(θ)/∂θ'. PcGive computes Ĵ by numerical differentiation. The statistic w has a χ^{2}(s) distribution, where s is the number of restrictions (that is, equations in f(·)). The null hypothesis is rejected if we observe a significant test statistic.
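A Wald test with a numerically differentiated Jacobian can be sketched as follows. The estimates, their variance, and the restriction function f here are all hypothetical stand-ins; only the mechanics (two-sided differencing and the quadratic form) mirror the description above:

```python
import numpy as np

# Illustrative estimates; theta_hat and V would come from the estimated system.
theta_hat = np.array([0.5, 1.2, -0.3])
V = np.diag([0.01, 0.04, 0.02])          # assumed V[theta_hat]

def f(theta):
    # Hypothetical restrictions: theta0 * theta1 = 0.6 and theta2 = -0.3
    return np.array([theta[0] * theta[1] - 0.6, theta[2] + 0.3])

def num_jacobian(f, theta, h=1e-6):
    # Two-sided numerical differentiation of J = df(theta)/dtheta'
    s = len(f(theta))
    J = np.zeros((s, len(theta)))
    for i in range(len(theta)):
        e = np.zeros(len(theta)); e[i] = h
        J[:, i] = (f(theta + e) - f(theta - e)) / (2 * h)
    return J

J = num_jacobian(f, theta_hat)
r = f(theta_hat)
w = r @ np.linalg.inv(J @ V @ J.T) @ r   # Wald statistic, chi^2(s) under H0
s = len(r)                               # number of restrictions
```

Here the restrictions hold exactly at theta_hat, so w is zero; in practice w is compared with the χ^{2}(s) critical value.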
Output consists of:
PcGive can be used in two ways: for general-to-specific modelling, and for unordered searches.
In the general-to-specific approach:
Nothing commends unordered searches:
PcGive does not enforce a general-to-simple modelling strategy, but it will automatically monitor the progress of the sequential reduction from the general to the specific, and will provide the associated likelihood-ratio tests.
More precisely, the program will record a sequence of systems, and for the most recent system the sequence of models (which could be empty). The program gives a list of the selected systems and models, reporting the estimation method, sample size (T), number of coefficients (k), and the log-likelihood (K_{c} − (T/2) log |Ω̂|). Three information criteria are also reported: the Schwarz criterion, the Hannan–Quinn criterion and the Akaike criterion, see §15.4.6.
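The three information criteria can be illustrated with the common log-likelihood-based definitions; the per-observation scaling used here is an assumed convention, and the exact form PcGive reports is given in §15.4.6:

```python
import numpy as np

# Illustrative values: log-likelihood, number of coefficients, sample size.
loglik, k, T = -250.0, 12, 100

# Common per-observation conventions (assumed; see the manual's section 15.4.6):
AIC = (-2 * loglik + 2 * k) / T
HQ  = (-2 * loglik + 2 * k * np.log(np.log(T))) / T
SC  = (-2 * loglik + k * np.log(T)) / T
```

For T ≥ 16 the penalties order as 2 < 2 log log T < log T, so AIC penalizes extra coefficients least and SC most.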
Following this, PcGive will report the F-tests (based on Rao's F-approximation) indicating the progress in system modelling, as well as likelihood-ratio tests (χ^{2}) of the progress in modelling that system (tests of overidentifying restrictions).
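A likelihood-ratio test of one reduction step can be sketched as follows (the log-determinants are illustrative numbers, not output from an actual estimation; the constant K_c cancels in the difference):

```python
T, n = 100, 2
# Log-determinants of the residual covariance before and after a reduction
# (illustrative; in PcGive these come from the two estimated systems).
logdet_general, logdet_restricted = -2.30, -2.25

loglik_general = -T / 2 * logdet_general       # omitting the constant K_c
loglik_restricted = -T / 2 * logdet_restricted
s = 3                                           # number of restrictions imposed
LR = 2 * (loglik_general - loglik_restricted)   # chi^2(s) if the reduction is valid
```

A small LR relative to the χ^{2}(s) critical value supports the reduction.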
Following on from §15.9.1, it is possible to estimate a cointegrated VAR which has a reduced-rank long-run matrix P_{0}=αβ', or, possibly, additional restrictions on α or β.
Following estimation of a cointegrated VAR, most evaluation facilities of the unrestricted system are available, but with the π_{i} in (eq:15.1) replaced with the π̂_{i} from the restricted VAR. Note that this is different from version 9 of PcFiml, where evaluation was still based on the unrestricted system.
All β̂ and α̂ below relate to the restricted estimates, according to the selected rank p, and any additional restrictions imposed in the cointegrated VAR estimation (note that it is possible to impose no restrictions at all).
Within a cointegrated VAR analysis, restrictions on α and β can be imposed:
PcGive requires you to choose the rank p. For (1)–(5), the restrictions are expressed through the A and/or H matrix. The general restrictions of (6) are expressed directly in terms of the elements of α and β'.
Output of the cointegration estimation is:
Partition β̂' as:

β̂' = ( β̂_{11}' : β̂_{12}' ),

where β̂_{11}' is the top left (p×p) block of β̂'; then, when β̂_{11}' is nonsingular, the reduced form matrix is:

−(β̂_{11}')^{−1} β̂_{12}'.
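The computation of the reduced form matrix from the partitioned β̂' can be sketched as follows (β̂' here is an illustrative matrix with rank p = 2 and four variables):

```python
import numpy as np

# Illustrative beta' (p x 4), ordered so the top-left p x p block is beta11'.
beta_t = np.array([[1.0, 0.2, -0.5,  0.1],
                   [0.0, 1.0,  0.3, -0.7]])
p = 2
b11_t = beta_t[:, :p]                     # top-left (p x p) block
b12_t = beta_t[:, p:]                     # remaining columns

# Reduced form matrix: -(beta11')^{-1} beta12'
longrun_rf = -np.linalg.solve(b11_t, b12_t)
```

Using `solve` rather than an explicit inverse is the numerically preferable way to form (β̂_{11}')^{−1}β̂_{12}'.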
This is followed by:
Some graphs additional to those listed in §15.5 are available for a cointegrated VAR.
For generality, assume that the variables z_{r} were restricted to lie in the cointegrating space. Let α_{0}, β_{0}' denote the original standardized loadings and eigenvectors; α_{r}, β_{r}' are obtained after imposing further restrictions on the cointegrating space. In the unrestricted graphs, the analysis proceeds as if no rank has been chosen yet, corresponding to n eigenvectors. The restricted analysis requires selection of p, the rank of the cointegrating space, thus resulting in fewer graphs.
Write (y;z) for (y':z')', and let (y_{t};z_{r}) denote the original levels of the endogenous variables and the variables restricted to lie in the cointegrating space; r_{1t}=(y̌_{t−1};ž_{r}) are the residuals from regressing (y_{t−1};z_{r}) on the short-run dynamics ({Δy_{t−i}}) and the unrestricted variables (z_{u}). For all graphs there are two variants:
This uses (y_{t};z_{r}).
This uses r_{1t}.
The available graphs are:
β̂_{0}'(y_{t};z_{r}), or β̂_{0}'r_{1t}. Write the standardized i^{th} eigenvector as (β_{1} … β_{n} β_{n+1} … β_{n+q_{r}})', standardized so that β_{i}=1. The i^{th} cointegration relation graph is: ∑_{j}β_{j}y_{jt}+∑_{k}β_{k}z_{kt}, and using concentrated components: ∑_{j}β_{j}y̌_{j,t−1}+∑_{k}β_{k}ž_{kt}.
The graphs of the cointegrating relations are split into two components: the actuals y_{t} and the fitted values y_{t}−β̂_{0}'(y_{t};z_{r}). All lines are graphed in deviation from their mean. Alternatively: the y̌_{t−1} and the fitted values y̌_{t−1}−β̂_{0}'r_{1t}, in deviation from their mean. Considering the i^{th} graph of actual and fitted, using the above notation for the standardized i^{th} eigenvector: y_{it} and y_{it}−∑_{j}β_{j}y_{jt}−∑_{k}β_{k}z_{kt} = −∑_{j≠i}β_{j}y_{jt}−∑_{k}β_{k}z_{kt}, whereas using concentrated components: y̌_{i,t−1} and −∑_{j≠i}β_{j}y̌_{j,t−1}−∑_{k}β_{k}ž_{kt}.
Graphs all the components of β̂_{0}'(y_{t};z_{r}) or β̂_{0}'r_{1t}, in deviations from their means. For the i^{th} graph: y_{it}, β_{j}y_{jt} (j≠i), β_{k}z_{kt}, all in deviation from their means. Using concentrated components: y̌_{i,t−1}, β_{j}y̌_{j,t−1} (j≠i), β_{k}ž_{kt}, also in deviation from means.
Recursive graphics are available when the cointegrated VAR is estimated recursively. Unrestricted variables and short-run dynamics can be fixed at their full-sample coefficients, or partialled out at each sample size.
The types of graphs are:
Only available if estimation with additional restrictions was used (this can be selected without specifying any code, that is, without imposing any restrictions).
The χ^{2} test for the restrictions; its critical value is also shown. Only available if long-run restrictions were imposed. The p-value can be set.
Recursive βs. Only available if β is identified.
Once a statistical system has been adequately modelled and its congruency satisfactorily evaluated, an economically meaningful structural interpretation can be sought. The relevant class of model has the form:

B y_{t} + C w_{t} = u_{t},   u_{t} ∼ IN_{n}(0, Σ),

The diagonal of B is normalized at unity. More concisely:

A X' = U',

with A=(B:C) and X=(Y:W). PcGive accepts only linear, within-equation restrictions on the elements of A for the initial specification of the identified model, but allows for further nonlinear restrictions on the parameters (possibly across equations). The order condition for identification is enforced, and the rank condition is required to be satisfied for arbitrary (random) nonzero values of the parameters.
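Checking a rank condition "at arbitrary (random) nonzero values" amounts to a generic-rank computation: evaluate the relevant matrix at several random parameter draws and take the maximal numerical rank. The matrix PcGive actually builds from the restrictions is not reproduced here; `build_matrix` below is a hypothetical stand-in used only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)

def generic_rank(build_matrix, n_params, n_draws=5):
    # Evaluate the matrix at several random nonzero parameter values and take
    # the maximal numerical rank: a generic-rank check of the kind used to
    # verify a rank condition at random parameter values.
    return max(np.linalg.matrix_rank(build_matrix(rng.standard_normal(n_params)))
               for _ in range(n_draws))

# Hypothetical stand-in for the matrix whose rank is checked.
def build_matrix(theta):
    a, b, c, d = theta
    return np.array([[a, b], [c, d], [a + c, b + d]])  # rank <= 2 by construction

r = generic_rank(build_matrix, n_params=4)
```

With continuous random draws, rank deficiencies that hold only on a measure-zero parameter set are avoided almost surely, which is why a random-value check detects generic identification.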
A subset of the equations can be identities, but otherwise Σ is assumed to be positive definite and unrestricted. When identities are present, the model to be estimated is written as:

( A_{1} ; A_{2} ) X' = ( U' ; 0 ),

where A_{1}X'=U' is the subset of n_{1} stochastic equations and A_{2}X'=0 is the subset of n_{2} identities with n_{1}+n_{2}=n. PcGive requires specification of the variables involved in the identities, but will derive the coefficients A_{2}.
Let φ denote the vector of unrestricted elements of vec(A_{1}'): φ=A_{1}^{vu}. Then −l(φ) is to be minimized as an unrestricted function of the elements of φ. On convergence, we have the maximum likelihood estimator (MLE) of φ:

φ̂ = argmin_{φ} −l(φ),

and so have the MLE of A_{1}; as all other elements of A_{2} are known, we have the MLE of A. If convergence does not occur, reset the parameter values, and use a looser (larger) convergence criterion to obtain output.
The estimated variance of u_{t} is:

Σ̃ = (T−c)^{−1} ∑_{t=1}^{T} û_{t}û_{t}',

which is (n_{1}×n_{1}). There is a degrees-of-freedom correction c, which equals the average number of parameters per equation (rounded towards 0); this would be k for the system.
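The degrees-of-freedom correction c can be illustrated directly (the parameter counts are hypothetical):

```python
# c: the average number of estimated parameters per (stochastic) equation,
# rounded towards zero; hypothetical model with three equations.
params_per_equation = [4, 3, 5]
c = int(sum(params_per_equation) / len(params_per_equation))  # int() truncates towards 0
T = 100
dof = T - c   # degrees of freedom used, e.g., for T - c in variance estimates
```

In a system, every equation has the same k regressors, so the average is simply k.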
From A, we can derive the MLE of the restricted reduced form:

Π̂ = −B̂^{−1}Ĉ,

and hence the estimated variances of the elements of φ̂ :

V[φ̂] = { [ Σ̃^{−1} ⊗ Q (W'W) Q' ]^{vu} }^{−1},

where, before inversion, we choose the rows and columns of the right-hand side corresponding to the unrestricted elements of A_{1} only, and Q'=(Π':I).
The covariance matrix of the restricted reduced form residuals is obtained by writing:

B^{−1} = ( B^{11} B^{12} ; B^{21} B^{22} ),

where B^{11} is ( n_{1}×n_{1}) . Then:

Ω̂ = B̂^{−1} ( Σ̃ 0 ; 0 0 ) (B̂^{−1})',

Ω̂_{11} = B̂^{11} Σ̃ (B̂^{11})',

corresponding to the stochastic equations. The estimated variance matrix of the restricted reduced form coefficients is:

V[vec Π̂'] = Ĵ V[φ̂] Ĵ',   Ĵ = ∂vec Π'/∂φ'.

Model estimation follows the successful estimation of the unrestricted reduced form. The sample period and number of forecasts carry over from the system.
The following information is needed to estimate a model:
is available when selecting constrained simultaneous equations model.
All model estimation methods in PcGive are derived from the estimator-generating equation (EGE). We require the reduced form to be a congruent data model, for which the structural specification is a more parsimonious representation.
The model output coincides to a large extent with the system output. In the following we only note some differences:
Gives the coefficients of the n_{2} identity equations, together with the R^{2} of each equation, which should be 1 (values ≥ 0.99 are accepted).
The t-probabilities are based on a Student t-distribution with T−c degrees of freedom. The correction c is defined below equation (eq:17.5).
The square root of the structural residual variance for each equation:

σ̃_{ii}^{1/2},  i = 1, …, n_{1},

The loglikelihood value is (including the constant K_{c}):

l̂ = K_{c} − (T/2) log |Ω̂_{11}|.

Reported are l̂, −(T/2) log|Ω̂_{11}|, |Ω̂_{11}| and the sample size T.
This tests whether the model is a valid reduction of the system.
HCSE for short; computed for FIML only, but not for unrestricted variables. These provide consistent estimates of the regression coefficients' standard errors even if the residuals are heteroscedastic in an unknown way. Large differences between the HCSE and SE are indicative of the presence of heteroscedasticity, in which case the HCSE provides the more useful measure of the standard errors (see White, 1980). They are computed as: Q^{−1}IQ^{−1}, Q = (V[φ̂])^{−1}, I = ∑_{t=1}^{T} q_{t}q_{t}', the outer product of the gradients.
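The sandwich computation can be sketched as follows; the per-observation gradients and the information-matrix stand-in are simulated placeholders, not output from an actual FIML fit:

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 200, 3
q = rng.standard_normal((T, k))   # per-observation gradients q_t (illustrative)
Q = np.eye(k) * 50.0              # stand-in for the information matrix Q

I = q.T @ q                                       # outer product of the gradients
V_hc = np.linalg.inv(Q) @ I @ np.linalg.inv(Q)    # sandwich Q^{-1} I Q^{-1}
hcse = np.sqrt(np.diag(V_hc))                     # heteroscedasticity-consistent SEs
```

Comparing `hcse` with the conventional standard errors from Q^{−1} is the diagnostic described above.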
Graphic analysis focuses on graphical inspection of individual restricted reduced form equations. Let y_{t}, ŷ_{t} denote respectively the actual (that is, observed) values and the fitted values of the selected equation, with RRF residuals v̂_{t} = y_{t} − ŷ_{t}, t=1,…,T. If H observations are used for forecasting, then ŷ_{T+1},…,ŷ_{T+H} are the 1-step forecasts.
Except for substituting the (restricted) reduced form residuals, graphic analysis follows the unrestricted system, see §16.2.
When recursive FIML or CFIML is selected, the φ and Σ matrices are estimated at each t (k ≤ M ≤ t ≤ T), where M is user-selected. For each t, the RRF can be derived from this.
The recursive graphics options follow §15.6, with the addition of the tests for overidentifying restrictions.
Let l̂_{t} be the log-likelihood of the URF, and l̂_{0,t} the log-likelihood of the RRF. The tests for overidentifying restrictions, 2(l̂_{t} − l̂_{0,t}), can be graphed with a line showing the critical value from the χ^{2}(s) distribution (s is the number of restrictions) at a chosen significance level.
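The graphed sequence can be sketched as follows; the recursive log-likelihoods are illustrative numbers, and 5.99 is the χ^{2}(2) 5% critical value used as the horizontal line:

```python
import numpy as np

# Illustrative recursive log-likelihood sequences for the URF and the RRF.
l_urf = np.array([-120.0, -118.5, -117.2, -116.0])
l_rrf = np.array([-121.0, -119.9, -118.0, -117.3])

lr_t = 2 * (l_urf - l_rrf)   # overidentification test at each sample size t
s = 2                         # number of overidentifying restrictions (assumed)
crit_5pct = 5.99              # chi^2(2) 5% critical value, drawn as a line
reject = lr_t > crit_5pct     # points plotting above the line
```

Since the RRF is a restriction of the URF, l̂_{t} ≥ l̂_{0,t} at each t, so the sequence is non-negative.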
These proceed as for the system, but based on the restricted reduced form. Graphs are available for identity equations.
Impulse response analysis maps the dynamics of the endogenous variables through the restricted reduced form. The initial values i_{1,j} for the j^{th} set of graphs can be chosen as follows:
i_{1,j} = B^{−1}e_{j}.
i_{1,j} = B^{−1}e_{j}σ̃_{jj},
where σ̃_{jj} is the j^{th} diagonal element of Σ̃.
i_{1,j} = B^{−1} ( p_{j} ; 0 ),
where p_{j} is the j^{th} column of the Choleski decomposition of Σ̃, and it is padded with zeros for the identity equations. As in the system, the outcome depends on the ordering of the variables.
i_{1,j} = B^{−1}v_{j},
where v_{j} is specified by the user.
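The three automatic choices of initial values can be sketched together (B and Σ̃ are illustrative matrices for a two-equation model with no identities):

```python
import numpy as np

# Illustrative structural B and error covariance (no identities).
B = np.array([[1.0, -0.4],
              [0.2,  1.0]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
Binv = np.linalg.inv(B)
n = B.shape[0]

unit = [Binv @ np.eye(n)[:, j] for j in range(n)]                  # B^{-1} e_j
scaled = [Binv @ np.eye(n)[:, j] * Sigma[j, j] for j in range(n)]  # scaled variant
P = np.linalg.cholesky(Sigma)                                       # Choleski factor
orthogonal = [Binv @ P[:, j] for j in range(n)]                     # B^{-1} p_j
```

With identities present, p_{j} would be padded with zeros before premultiplying by B^{−1}; reordering the variables changes P and hence the orthogonalized impulses.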
The vector error-autocorrelation test partials lagged structural residuals out from the original regressors, and re-estimates the model. All other tests take the residuals from the RRF, and operate as for the system.
Note, however, that application of single-equation autocorrelation and heteroscedasticity tests in a model will lead to all reduced-form variables being used in the auxiliary regression. If the model is an invalid reduction of the system, this may cause the tests to be significant. Equally, a valid reduction combined with small amounts of system residual autocorrelation could induce significant single-equation model autocorrelation. The usual difficulty of interpreting significant test outcomes is prominent here.
A similar feature operates for the vector heteroscedasticity tests, where all reduced-form variables (but not those classified as unrestricted) are used in the auxiliary regression.
Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis (2nd ed.). New York: John Wiley & Sons.
Banerjee, A., J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993). Cointegration, Error Correction and the Econometric Analysis of Non-Stationary Data. Oxford: Oxford University Press.
Banerjee, A. and D. F. Hendry (Eds.) (1992). Testing Integration and Cointegration. Oxford Bulletin of Economics and Statistics: 54.
Bårdsen, G. (1989). The estimation of long-run coefficients from error correction models. Oxford Bulletin of Economics and Statistics 50.
Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman (1974). Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3, 653–665.
Boswijk, H. P. (1992). Cointegration, Identification and Exogeneity, Volume 37 of Tinbergen Institute Research Series. Amsterdam: Thesis Publishers.
Boswijk, H. P. (1995). Identifiability of cointegrated systems. Discussion paper ti 795078, Tinbergen Institute, University of Amsterdam.
Boswijk, H. P. and J. A. Doornik (2004). Identifying, estimating and testing restricted cointegrated systems: An overview. Statistica Neerlandica 58, 440–465.
Bowman, K. O. and L. R. Shenton (1975). Omnibus test contours for departures from normality based on √b_{1} and b_{2}. Biometrika 62, 243–250.
Box, G. E. P. and D. A. Pierce (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association 65, 1509–1526.
Britton, E., P. Fisher, and J. Whitley (1998). Inflation Report projections: Understanding the fan chart. Bank of England Quarterly Bulletin 38, 30–37.
Brown, R. L., J. Durbin, and J. M. Evans (1975). Techniques for testing the constancy of regression relationships over time (with discussion). Journal of the Royal Statistical Society B 37, 149–192.
Calzolari, G. (1987). Forecast variance in dynamic simulation of simultaneous equations models. Econometrica 55, 1473–1476.
Campbell, J. Y. and P. Perron (1991). Pitfalls and opportunities: What macroeconomists should know about unit roots. In O. J. Blanchard and S. Fischer (Eds.), NBER Macroeconomics Annual 1991. Cambridge, MA: MIT Press.
Chong, Y. Y. and D. F. Hendry (1986). Econometric evaluation of linear macroeconomic models. Review of Economic Studies 53, 671–690. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Chow, G. C. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica 28, 591–605.
Clements, M. P. and D. F. Hendry (1994). Towards a theory of economic forecasting. In Hargreaves (1994), pp. 9–52.
Clements, M. P. and D. F. Hendry (1998a). Forecasting Economic Time Series. Cambridge: Cambridge University Press.
Clements, M. P. and D. F. Hendry (1998b). Forecasting Economic Time Series: The Marshall Lectures on Economic Forecasting. Cambridge: Cambridge University Press.
Clements, M. P. and D. F. Hendry (1999). Forecasting Nonstationary Economic Time Series. Cambridge, Mass.: MIT Press.
Coyle, D. (2001). Making sense of published economic forecasts. In D. F. Hendry and N. R. Ericsson (Eds.), Understanding Economic Forecasts, pp. 54–67. Cambridge, Mass.: MIT Press.
Cramer, J. S. (1986). Econometric Applications of Maximum Likelihood Methods. Cambridge: Cambridge University Press.
Cran, G. W., K. J. Martin, and G. E. Thomas (1977). A remark on algorithms. AS 63: The incomplete beta integral. AS 64: Inverse of the incomplete beta function ratio. Applied Statistics 26, 111–112.
D'Agostino, R. B. (1970). Transformation to normality of the null distribution of g_{1}. Biometrika 57, 679–681.
Davidson, J. E. H., D. F. Hendry, F. Srba, and J. S. Yeo (1978). Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom. Economic Journal 88, 661–692. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Davidson, R. and J. G. MacKinnon (1993). Estimation and Inference in Econometrics. New York: Oxford University Press.
Dhrymes, P. J. (1984). Mathematics for Econometrics, (2nd ed.). New York: SpringerVerlag.
Doornik, J. A. (1995a). Econometric Computing. Oxford: University of Oxford. Ph.D Thesis.
Doornik, J. A. (1995b). Testing general restrictions on the cointegrating space. www.doornik.com, Nuffield College.
Doornik, J. A. (1996). Testing vector autocorrelation and heteroscedasticity in dynamic models. www.doornik.com, Nuffield College.
Doornik, J. A. (1998). Approximations to the asymptotic distribution of cointegration tests. Journal of Economic Surveys 12, 573–593. Reprinted in M. McAleer and L. Oxley (1999). Practical Issues in Cointegration Analysis. Oxford: Blackwell Publishers.
Doornik, J. A. (2013). Object-Oriented Matrix Programming using Ox (7th ed.). London: Timberlake Consultants Press.
Doornik, J. A. and H. Hansen (1994). A practical test for univariate and multivariate normality. Discussion paper, Nuffield College.
Doornik, J. A. and D. F. Hendry (1992). PCGIVE 7: An Interactive Econometric Modelling System. Oxford: Institute of Economics and Statistics, University of Oxford.
Doornik, J. A. and D. F. Hendry (1994). PcGive 8: An Interactive Econometric Modelling System. London: International Thomson Publishing, and Belmont, CA: Duxbury Press.
Doornik, J. A. and D. F. Hendry (2013). OxMetrics: An Interface to Empirical Modelling (7th ed.). London: Timberlake Consultants Press.
Doornik, J. A., D. F. Hendry, and B. Nielsen (1998). Inference in cointegrated models: UK M1 revisited. Journal of Economic Surveys 12, 533–572. Reprinted in M. McAleer and L. Oxley (1999). Practical Issues in Cointegration Analysis. Oxford: Blackwell Publishers.
Doornik, J. A. and R. J. O'Brien (2002). Numerically stable cointegration analysis. Computational Statistics & Data Analysis 41, 185–193.
Durbin, J. (1988). Maximum likelihood estimation of the parameters of a system of simultaneous regression equations. Econometric Theory 4, 159–170. Paper presented to the Copenhagen Meeting of the Econometric Society, 1963.
Engle, R. F. (1982). Autoregressive conditional heteroscedasticity, with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1007.
Engle, R. F. and C. W. J. Granger (1987). Cointegration and error correction: Representation, estimation and testing. Econometrica 55, 251–276.
Engle, R. F. and D. F. Hendry (1993). Testing super exogeneity and invariance in regression models. Journal of Econometrics 56, 119–139. Reprinted in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994.
Engle, R. F., D. F. Hendry, and J.-F. Richard (1983). Exogeneity. Econometrica 51, 277–304. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Engle, R. F., D. F. Hendry, and D. Trumbull (1985). Small sample properties of ARCH estimators and tests. Canadian Journal of Economics 43, 66–93.
Ericsson, N. R., D. F. Hendry, and G. E. Mizon (1996). Econometric issues in economic policy analysis. Mimeo, Nuffield College, University of Oxford.
Ericsson, N. R., D. F. Hendry, and H.-A. Tran (1994). Cointegration, seasonality, encompassing and the demand for money in the United Kingdom. In Hargreaves (1994), pp. 179–224.
Ericsson, N. R. (1992). Cointegration, exogeneity and policy analysis. Journal of Policy Modeling 14. Special Issue.
Favero, C. and D. F. Hendry (1992). Testing the Lucas critique: A review. Econometric Reviews 11, 265–306.
Fletcher, R. (1987). Practical Methods of Optimization, (2nd ed.). New York: John Wiley & Sons.
Gill, P. E., W. Murray, and M. H. Wright (1981). Practical Optimization. New York: Academic Press.
Godfrey, L. G. (1988). Misspecification Tests in Econometrics. Cambridge: Cambridge University Press.
Goldfeld, S. M. and R. E. Quandt (1972). Nonlinear Methods in Econometrics. Amsterdam: North-Holland.
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424–438.
Haavelmo, T. (1943). The statistical implications of a system of simultaneous equations. Econometrica 11, 1–12.
Haavelmo, T. (1944). The probability approach in econometrics. Econometrica 12, 1–118. Supplement.
Hansen, H. and S. Johansen (1992). Recursive estimation in cointegrated VARmodels. Discussion paper, Institute of Mathematical Statistics, University of Copenhagen.
Hargreaves, C. (Ed.) (1994). Nonstationary Time-series Analysis and Cointegration. Oxford: Oxford University Press.
Harvey, A. C. (1990). The Econometric Analysis of Time Series, (2nd ed.). Hemel Hempstead: Philip Allan.
Hendry, D. F. (1971). Maximum likelihood estimation of systems of simultaneous regression equations with errors generated by a vector autoregressive process. International Economic Review 12, 257–272. Correction in 15, p. 260.
Hendry, D. F. (1976). The structure of simultaneous equations estimators. Journal of Econometrics 4, 51–88. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.
Hendry, D. F. (1979). Predictive failure and econometric modelling in macroeconomics: The transactions demand for money. In P. Ormerod (Ed.), Economic Modelling, pp. 217–242. London: Heinemann. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. (1986). Using PC-GIVE in econometrics teaching. Oxford Bulletin of Economics and Statistics 48, 87–98.
Hendry, D. F. (1987). Econometric methodology: A personal perspective. In T. F. Bewley (Ed.), Advances in Econometrics, pp. 29–48. Cambridge: Cambridge University Press. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. (1988). The encompassing implications of feedback versus feedforward mechanisms in econometrics. Oxford Economic Papers 40, 132–149. Reprinted in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. (1993). Econometrics: Alchemy or Science? Oxford: Blackwell Publishers.
Hendry, D. F. (1995). Dynamic Econometrics. Oxford: Oxford University Press.
Hendry, D. F. (2001). Modelling UK inflation, 1875–1991. Journal of Applied Econometrics 16, 255–275.
Hendry, D. F. and J. A. Doornik (1994). Modelling linear dynamic econometric systems. Scottish Journal of Political Economy 41, 1–33.
Hendry, D. F. and J. A. Doornik (2013). Empirical Econometric Modelling using PcGive: Volume I (7th ed.). London: Timberlake Consultants Press.
Hendry, D. F. and K. Juselius (2001). Explaining cointegration analysis: Part II. Energy Journal 22, 75–120.
Hendry, D. F. and H.-M. Krolzig (2003). New developments in automatic general-to-specific modelling. In B. P. Stigum (Ed.), Econometrics and the Philosophy of Economics, pp. 379–419. Princeton: Princeton University Press.
Hendry, D. F. and G. E. Mizon (1993). Evaluating dynamic econometric models by encompassing the VAR. In P. C. B. Phillips (Ed.), Models, Methods and Applications of Econometrics, pp. 272–300. Oxford: Basil Blackwell. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. and M. S. Morgan (1995). The Foundations of Econometric Analysis. Cambridge: Cambridge University Press.
Hendry, D. F. and A. J. Neale (1991). A Monte Carlo study of the effects of structural breaks on tests for unit roots. In P. Hackl and A. H. Westlund (Eds.), Economic Structural Change, Analysis and Forecasting, pp. 95–119. Berlin: Springer-Verlag.
Hendry, D. F., A. J. Neale, and F. Srba (1988). Econometric analysis of small linear systems using PcFiml. Journal of Econometrics 38, 203–226.
Hendry, D. F., A. R. Pagan, and J. D. Sargan (1984). Dynamic specification. In Z. Griliches and M. D. Intriligator (Eds.), Handbook of Econometrics, Volume 2, pp. 1023–1100. Amsterdam: North-Holland. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. and J.-F. Richard (1982). On the formulation of empirical models in dynamic econometrics. Journal of Econometrics 20, 3–33. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press; in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. and J.-F. Richard (1983). The econometric analysis of economic time series (with discussion). International Statistical Review 51, 111–163. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.
Hendry, D. F. and J.-F. Richard (1989). Recent developments in the theory of encompassing. In B. Cornet and H. Tulkens (Eds.), Contributions to Operations Research and Economics. The XXth Anniversary of CORE, pp. 393–440. Cambridge, MA: MIT Press. Reprinted in Campos, J., Ericsson, N.R. and Hendry, D.F. (eds.), General to Specific Modelling. Edward Elgar, 2005.
Hendry, D. F. and F. Srba (1980). AUTOREG: A computer program library for dynamic econometric models with autoregressive errors. Journal of Econometrics 12, 85–102. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000.
Hosking, J. R. M. (1980). The multivariate portmanteau statistic. Journal of the American Statistical Association 75, 602–608.
Hunter, J. (1992). Cointegrating exogeneity. Economics Letters 34, 33–35.
Johansen, S. (1988). Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12, 231–254. Reprinted in R. F. Engle and C. W. J. Granger (eds), Long-Run Economic Relationships, Oxford: Oxford University Press, 1991, 131–152.
Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551–1580.
Johansen, S. (1992a). Cointegration in partial systems and the efficiency of single-equation analysis. Journal of Econometrics 52, 389–402.
Johansen, S. (1992b). Testing weak exogeneity and the order of cointegration in UK money demand. Journal of Policy Modeling 14, 313–334.
Johansen, S. (1994). The role of the constant and linear terms in cointegration analysis of nonstationary variables. Econometric Reviews 13, 205–229.
Johansen, S. (1995a). Identifying restrictions of linear equations with applications to simultaneous equations and cointegration. Journal of Econometrics 69, 111–132.
Johansen, S. (1995b). Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Johansen, S. (1995c). A statistical analysis of cointegration for I(2) variables. Econometric Theory 11, 25–59.
Johansen, S. and K. Juselius (1990). Maximum likelihood estimation and inference on cointegration — With application to the demand for money. Oxford Bulletin of Economics and Statistics 52, 169–210.
Johansen, S. and K. Juselius (1992). Testing structural hypotheses in a multivariate cointegration analysis of the PPP and the UIP for UK. Journal of Econometrics 53, 211–244.
Johansen, S. and K. Juselius (1994). Identification of the long-run and the short-run structure. An application to the IS-LM model. Journal of Econometrics 63, 7–36.
Judge, G. G., W. E. Griffiths, R. C. Hill, H. Lütkepohl, and T.-C. Lee (1985). The Theory and Practice of Econometrics (2nd ed.). New York: John Wiley.
Kelejian, H. H. (1982). An extension of a standard test for heteroskedasticity to a systems framework. Journal of Econometrics 20, 325–333.
Kiefer, N. M. (1989). The ET interview: Arthur S. Goldberger. Econometric Theory 5, 133–160.
Kiviet, J. F. (1986). On the rigor of some misspecification tests for modelling dynamic relationships. Review of Economic Studies 53, 241–261.
Kiviet, J. F. and G. D. A. Phillips (1992). Exact similar tests for unit roots and cointegration. Oxford Bulletin of Economics and Statistics 54, 349–367.
Koopmans, T. C. (Ed.) (1950). Statistical Inference in Dynamic Economic Models. Number 10 in Cowles Commission Monograph. New York: John Wiley & Sons.
Ljung, G. M. and G. E. P. Box (1978). On a measure of lack of fit in time series models. Biometrika 65, 297–303.
Longley, G. M. (1967). An appraisal of least-squares for the electronic computer from the point of view of the user. Journal of the American Statistical Association 62, 819–841.
Lütkepohl, H. (1991). Introduction to Multiple Time Series Analysis. New York: Springer-Verlag.
Magnus, J. R. and H. Neudecker (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley & Sons.
Majunder, K. L. and G. P. Bhattacharjee (1973a). Algorithm AS 63. The incomplete beta integral. Applied Statistics 22, 409–411.
Majunder, K. L. and G. P. Bhattacharjee (1973b). Algorithm AS 64. Inverse of the incomplete beta function ratio. Applied Statistics 22, 411–414.
Makridakis, S., S. C. Wheelwright, and R. C. Hyndman (1998). Forecasting: Methods and Applications (3rd ed.). New York: John Wiley and Sons.
Mizon, G. E. (1977). Model selection procedures. In M. J. Artis and A. R. Nobay (Eds.), Studies in Modern Economic Analysis, pp. 97–120. Oxford: Basil Blackwell.
Mizon, G. E. and J.-F. Richard (1986). The encompassing principle and its application to non-nested hypothesis tests. Econometrica 54, 657–678.
Molinas, C. (1986). A note on spurious regressions with integrated moving average errors. Oxford Bulletin of Economics and Statistics 48, 279–282.
Mosconi, R. and C. Giannini (1992). Non-causality in cointegrated systems: Representation, estimation and testing. Oxford Bulletin of Economics and Statistics 54, 399–417.
Ooms, M. (1994). Empirical Vector Autoregressive Modeling. Berlin: Springer-Verlag.
Osterwald-Lenum, M. (1992). A note with quantiles of the asymptotic distribution of the ML cointegration rank test statistics. Oxford Bulletin of Economics and Statistics 54, 461–472.
Pagan, A. R. (1987). Three econometric methodologies: A critical appraisal. Journal of Economic Surveys 1, 3–24. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press.
Pagan, A. R. (1989). On the role of simulation in the statistical evaluation of econometric models. Journal of Econometrics 40, 125139.
Paruolo, P. (1996). On the determination of integration indices in I(2) systems. Journal of Econometrics 72, 313356.
Pesaran, M. H., R. P. Smith, and J. S. Yeo (1985). Testing for structural stability and predictive failure: A review. Manchester School 53, 280–295.
Phillips, P. C. B. (1986). Understanding spurious regressions in econometrics. Journal of Econometrics 33, 311–340.
Phillips, P. C. B. (1991). Optimal inference in cointegrated systems. Econometrica 59, 283–306.
Pike, M. C. and I. D. Hill (1966). Logarithm of the gamma function. Communications of the ACM 9, 684.
Quandt, R. E. (1983). Computational methods and problems. In Z. Griliches and M. D. Intriligator (Eds.), Handbook of Econometrics, Volume 1, Chapter 12. Amsterdam: North-Holland.
Rahbek, A., H. C. Kongsted, and C. Jørgensen (1999). Trend-stationarity in the I(2) cointegration model. Journal of Econometrics 90, 265–289.
Rao, C. R. (1952). Advanced Statistical Methods in Biometric Research. New York: John Wiley.
Rao, C. R. (1973). Linear Statistical Inference and its Applications (2nd ed.). New York: John Wiley & Sons.
Richard, J.-F. (1984). Classical and Bayesian inference in incomplete simultaneous equation models. In D. F. Hendry and K. F. Wallis (Eds.), Econometrics and Quantitative Economics. Oxford: Basil Blackwell.
Salkever, D. S. (1976). The use of dummy variables to compute predictions, prediction errors and confidence intervals. Journal of Econometrics 4, 393–397.
Schmidt, P. (1974). The asymptotic distribution of forecasts in the dynamic simulation of an econometric model. Econometrica 42, 303–309.
Shea, B. L. (1988). Algorithm AS 239: Chi-squared and incomplete gamma integral. Applied Statistics 37, 466–473.
Shenton, L. R. and K. O. Bowman (1977). A bivariate model for the distribution of √b_{1} and b_{2}. Journal of the American Statistical Association 72, 206–211.
Sims, C. A. (1980). Macroeconomics and reality. Econometrica 48, 1–48. Reprinted in Granger, C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press.
Spanos, A. (1986). Statistical Foundations of Econometric Modelling. Cambridge: Cambridge University Press.
Spanos, A. (1989). On re-reading Haavelmo: A retrospective view of econometric modeling. Econometric Theory 5, 405–429.
Thisted, R. A. (1988). Elements of Statistical Computing: Numerical Computation. New York: Chapman and Hall.
Toda, H. Y. and P. C. B. Phillips (1993). Vector autoregressions and causality. Econometrica 61, 1367–1393.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817–838.
Wooldridge, J. M. (1999). Asymptotic properties of some specification tests in linear models with integrated processes. In R. F. Engle and H. White (Eds.), Cointegration, Causality and Forecasting, pp. 366–384. Oxford: Oxford University Press.