Abstract:

We review key stages in the development of general-to-specific (Gets) modelling. Selecting a simplified model from a more general specification was initially done manually, then via computer programs, and now through automated machine learning that can discover a viable empirical model. Throughout, Gets applications faced many criticisms, especially accusations of 'data mining' (a term no longer pejorative), with other criticisms based on misunderstandings of the methodology, all now rebutted. A prior theoretical formulation can be retained unaltered while searching over more candidate variables than the available sample size in non-stationary data to select congruent, encompassing relations with invariant parameters on valid conditioning variables.

Financial support from Nuffield College is gratefully acknowledged as are valuable comments on the paper by Anindya Banerjee, Jennie Castle, Jurgen Doornik, Neil Ericsson and Andrew Martinez. Many important contributions to research on Gets were made by (inter alia) Julia Campos, Jennifer Castle, Jurgen Doornik, Rob Engle, Neil Ericsson, Kevin Hoover, Søren Johansen, Katarina Juselius, Hans-Martin Krolzig, Grayham Mizon, Bent Nielsen, Peter Phillips, Felix Pretis, Jean François Richard and Aris Spanos. All calculations and graphs use PcGive and OxMetrics.

Citation:

Hendry, D.F. (2024). 'A Brief History of General-to-specific Modelling', Oxford Bulletin of Economics and Statistics, 86(1), ISSN 0305-9049, https://doi.org/10.1111/obes.12578.