Agent-based models typically generate estimates of multiple features, and model selection and calibration involve comparing these features to what we observe in data.
It is often difficult to know a priori the trade-off involved in improving the fit to one feature at the expense of others.
Here I show that simple statistical tools can inform this trade-off, and how doing so improves our ability both to parametrise our models and to discriminate between alternative hypotheses.
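To make the idea concrete, here is a minimal sketch of one such simple statistical tool: standardising each feature's squared error by the across-run variability of the simulated output before summing, so that a gain on a noisy, weakly constrained feature cannot swamp a loss on a well-constrained one. The function name, the diagonal variance weighting, and the toy numbers are illustrative assumptions, not the specific method used here.

```python
import numpy as np

def combined_discrepancy(simulated, observed):
    """Weighted sum of squared feature errors.

    simulated: (n_runs, n_features) feature values from repeated
               stochastic runs of the model at one parameter set.
    observed:  (n_features,) the same features measured from data.

    Each feature's squared error is standardised by the across-run
    variance of the simulated output, so features the model pins down
    tightly weigh more than features it barely constrains.
    """
    mean = simulated.mean(axis=0)
    var = simulated.var(axis=0, ddof=1)
    return np.sum((mean - observed) ** 2 / var)

# Illustrative use: rank candidate parameter values by the combined score.
rng = np.random.default_rng(0)
observed = np.array([0.4, 12.0, 3.1])          # toy "data" features
candidates = {theta: theta + rng.normal(observed, 1.0, size=(50, 3))
              for theta in (0.0, 0.5, 1.0)}    # stand-ins for model runs
best = min(candidates, key=lambda t: combined_discrepancy(candidates[t], observed))
print(best)  # parameter value with the lowest combined discrepancy
```

This particular weighting amounts to a diagonal Mahalanobis distance over the feature summaries; richer choices (for example, the full simulated covariance) follow the same pattern of letting the model's own variability set the exchange rate between features.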