The predictive accuracy or observational tracking of models may vary over time, and classifying this time-varying accuracy can identify distinct periods of model failure (e.g. the "warming hiatus" in climate models). Here I propose a formal method to classify time-varying forecast accuracy based on step-indicator saturation, by determining the magnitude and timing of breaks in the intercept of dynamic and static models of prediction errors (or of the prediction-error loss differential between competing models). More generally, I derive the approximate variance of the coefficient path, that is, of the regimes of the intercept in indicator-saturation models, which allows hypothesis tests on the time-varying mean in models selected using indicators. Using the approximate normal distribution of the error terms, one can compute the probability that a true underlying break falls within any specified interval around a detected break. This makes it possible to provide approximate confidence measures on the timing of detected breaks, and thus to conduct hypothesis tests on break dates, a feature invaluable when attributing detected shifts (or forecast failures) to known events, from policy interventions in economics to shocks in climate change.
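To convey the core idea, the following is a much-simplified sketch of step-indicator detection of a mean shift in prediction errors. The function name, the one-at-a-time scan, and the simulated error series are all illustrative assumptions: the full SIS algorithm adds blocks of indicators jointly and selects them general-to-specific, which this sketch does not do.

```python
import numpy as np

def sis_break_scan(e, crit=2.576):
    """Illustrative (not full SIS) scan for a mean shift in errors e.

    For each candidate break date j, regress e on an intercept and the
    step indicator S_t(j) = 1{t >= j}; retain dates whose step
    coefficient has |t| > crit (2.576 is the two-sided 1% normal
    critical value). Returns (retained dates, date with largest |t|).
    """
    e = np.asarray(e, dtype=float)
    n = len(e)
    retained, best, best_t = [], None, 0.0
    for j in range(2, n - 1):  # leave observations to estimate both means
        s = (np.arange(n) >= j).astype(float)
        X = np.column_stack([np.ones(n), s])
        beta, *_ = np.linalg.lstsq(X, e, rcond=None)
        resid = e - X @ beta
        sigma2 = resid @ resid / (n - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t = beta[1] / np.sqrt(cov[1, 1])
        if abs(t) > crit:
            retained.append(j)
        if abs(t) > abs(best_t):
            best, best_t = j, t
    return retained, best

# Simulated prediction errors with a mean shift of +2 at t = 50:
rng = np.random.default_rng(0)
e = np.concatenate([np.zeros(50), np.full(50, 2.0)])
e = e + 0.3 * rng.standard_normal(100)
retained, best = sis_break_scan(e)
```

In a forecast-failure application, `e` would be the sequence of prediction errors (or the loss differential between two competing forecasts), and the timing of the retained step indicators would mark the candidate regime changes.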