Empirical evidence in much of economics focuses on quantifying the parameters of a preconceived theory. That works well when the theory is complete, correct, and immutable, but not when it is incomplete, incorrect, or changing, especially if unanticipated breaks perturb the assumed relationships. In practice, the available data must be used to discover what matters empirically: which variables are relevant; their dynamic reactions; the functional forms of connections; detecting multiple breaks and evolving distributions; tackling simultaneity and exogeneity; and modelling expectations. As economic data series are highly inter-correlated, all those influences must be tackled jointly. A key feature of the approach we have developed is that theory-relevant variables can be retained without selection while selecting over other candidate variables, some of which may even be endogenous. Under the null hypothesis that the candidate variables are irrelevant, orthogonalizing them with respect to the theory-relevant variables leaves the estimator distributions of the theory parameters unaffected by selection—even when selecting from more candidate variables than observations. Under the alternative, that some of the additional candidate variables are relevant, selection delivers an improved outcome whenever the initial general model nests the generating process. Our modelling approach, supported by stringent model evaluation techniques, tackles all the complications of ‘real world’ economies, providing a viable framework for empirical modelling that avoids many of the difficulties faced by ‘conventional’ approaches.
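The orthogonalization idea above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which is embodied in software such as Autometrics); it is a toy simulation, with illustrative variable names, showing that once candidate regressors are orthogonalized with respect to the retained theory variables, including them in the regression leaves the estimated theory parameters identical to those from the theory model alone, so selecting over them cannot distort those estimates under the null that they are irrelevant.

```python
# Toy sketch (assumed setup, not the authors' code): theory variable x,
# correlated but irrelevant candidate variables z, outcome y.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 1))                  # theory-relevant regressor
z = 0.5 * x + rng.normal(size=(n, 5))        # candidates, correlated with x
y = 2.0 * x[:, 0] + rng.normal(size=n)       # true model: only x matters


def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]


# Orthogonalize the candidates with respect to the theory variables
# (residuals from regressing each z-column on a constant and x).
X_theory = np.column_stack([np.ones(n), x])
u = z - X_theory @ ols(X_theory, z)

# Theory parameters estimated jointly with the orthogonalized candidates...
beta_joint = ols(np.column_stack([X_theory, u]), y)
# ...coincide with the estimates from the pure theory model:
beta_theory = ols(X_theory, y)
unaffected = np.allclose(beta_joint[:2], beta_theory)
print(unaffected)
```

Because the columns of `u` are exactly orthogonal to `X_theory`, the Frisch–Waugh–Lovell logic guarantees the coefficients on the theory variables are unchanged, which is why selection over the orthogonalized candidates leaves their estimator distributions intact under the null.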
People: David Hendry, Bent Nielsen, Jennifer Castle, Jurgen Doornik, Vanessa Berenguer-Rico and James Duffy