‘Fat big data’ characterise data sets that contain many more variables than observations. We discuss the use of both principal components analysis and equilibrium-correction models to identify cointegrating relations that handle stochastic trends in non-stationary fat data. However, most time series are wide-sense non-stationary—induced by the joint occurrence of stochastic trends and distributional shifts—so we also handle the latter by saturation estimation. Seeking substantive relationships among vast numbers of potentially spurious connections cannot be achieved by merely choosing the best-fitting equation, nor by trying hundreds of empirical fits and selecting a preferred one, perhaps contradicted by others that go unreported. Conversely, fat big data are useful if they help ensure that the data generation process is nested in the postulated model, and if they increase the power of specification and mis-specification tests without raising the chances of adventitious significance. We model the monthly UK unemployment rate, using both macroeconomic and Google Trends data, searching across 3000 explanatory variables, yet identify a parsimonious, statistically valid, and theoretically interpretable specification.
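The saturation estimation mentioned in the abstract can be illustrated with a minimal sketch of impulse-indicator saturation (IIS) in its simplest split-half form: add an impulse dummy for every observation, in blocks so the model remains estimable, retain the dummies that are individually significant, then re-estimate over the union of retained dummies. This is only an idealised illustration, not the authors' actual selection algorithm (which uses Autometrics); the function names `impulse_saturation` and `_ols_tstats` are hypothetical.

```python
import numpy as np

def _ols_tstats(y, Z):
    """OLS coefficient estimates and their t-statistics."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    dof = len(y) - Z.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.pinv(Z.T @ Z)))
    return beta, beta / se

def impulse_saturation(y, X, crit=2.5):
    """Split-half impulse-indicator saturation (simplified sketch).

    Saturates each half of the sample with one impulse dummy per
    observation, keeps dummies with |t| > crit, then re-estimates
    over the union of retained dummies. Returns detected indices.
    """
    n, k = len(y), X.shape[1]
    candidates = []
    for half in (range(0, n // 2), range(n // 2, n)):
        D = np.zeros((n, len(half)))
        for j, t in enumerate(half):
            D[t, j] = 1.0
        _, tstats = _ols_tstats(y, np.column_stack([X, D]))
        candidates += [t for j, t in enumerate(half)
                       if abs(tstats[k + j]) > crit]
    if not candidates:
        return []
    D = np.zeros((n, len(candidates)))
    for j, t in enumerate(candidates):
        D[t, j] = 1.0
    _, tstats = _ols_tstats(y, np.column_stack([X, D]))
    return [t for j, t in enumerate(candidates)
            if abs(tstats[k + j]) > crit]

# Illustrative data: a constant-mean series with one large shift at t=30.
rng = np.random.default_rng(0)
n = 100
y = 1.0 + rng.normal(size=n)
y[30] += 15.0               # a one-off distributional shift
X = np.ones((n, 1))         # intercept only
detected = impulse_saturation(y, X)
print(detected)             # the shift at index 30 should be retained
```

Because indicators are entered in blocks, each regression stays below full saturation, yet every observation is tested; under the null of no shifts, the expected retention rate is controlled by the chosen critical value.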


Castle, J.L., Doornik, J.A., & Hendry, D.F. (2020). 'Modelling Non-stationary Big Data'. International Journal of Forecasting.