Comparisons between alternative scenarios are used in many disciplines, from macroeconomics through epidemiology to climate science, to help with planning future responses. Differences between scenario paths are often interpreted as signifying likely differences between outcomes that would materialise in reality. However, even when using correctly specified statistical models of the in-sample data generation process, additional conditions are needed to sustain inferences about differences between scenario paths. We consider two questions in scenario analyses: First, does testing the difference between scenarios yield additional insight beyond simple tests conducted on the model estimated in-sample? Second, when does the estimated scenario difference yield unbiased estimates of the true difference in outcomes? Answering the first question, we show that the calculation of uncertainties around scenario differences raises difficult issues, since the underlying in-sample distributions are identical for both ‘potential’ outcomes when the reported paths are deterministic functions. Under these circumstances, a scenario comparison adds little beyond testing for the significance of the perturbed variable in the estimated model. Resolving the second question, when models include multiple covariates, inferences about scenario differences depend on the relationships between the conditioning variables, especially their invariance to the interventions being implemented. Tests for invariance based on the automatic detection of structural breaks can help identify the in-sample invariance of models to evaluate likely constancy in projected scenarios. Applications of scenario analyses to impacts on the UK’s wage share from unemployment and agricultural growth from climate change illustrate the concepts.
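To make the first point concrete, here is a minimal simulation sketch (entirely hypothetical data, a single-regressor OLS model without intercept, not the authors' own setup). When the two scenario paths for the perturbed variable are deterministic, the projected scenario difference is just the estimated coefficient times the path difference, so its only source of uncertainty is the coefficient estimate: the t-ratio for the scenario difference at every horizon coincides with the in-sample t-ratio on the perturbed variable itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-sample data: y depends on one policy variable x.
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)

# OLS slope estimate and its standard error (no intercept).
b_hat = (x @ y) / (x @ x)
resid = y - b_hat * x
s2 = (resid @ resid) / (n - 1)
se_b = np.sqrt(s2 / (x @ x))

# Two deterministic scenario paths over a 10-period horizon.
h = 10
x_base = np.zeros(h)      # baseline: no intervention
x_alt = np.full(h, 1.0)   # alternative: x raised by one unit

# Projected scenario difference and its standard error. Both inherit
# uncertainty only through b_hat, so the t-ratio of the difference at
# each horizon equals the t-ratio of the coefficient itself.
diff = b_hat * (x_alt - x_base)
se_diff = se_b * np.abs(x_alt - x_base)
t_diff = diff / se_diff
t_b = b_hat / se_b

print(np.allclose(t_diff, t_b))  # prints True
```

In this stylized case the scenario comparison adds nothing beyond the significance test on the perturbed variable, which is the paper's first point; with multiple covariates the comparison additionally hinges on the invariance of the conditioning relationships, which is the second.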
Hendry, D.F. & Pretis, F. (2022) "Analysing differences between scenarios", International Journal of Forecasting, Open Access.