Description

Robert L. Axtell

Chair (on leave), Department of Computational Social Science, Krasnow Institute for Advanced Study, George Mason University

ABSTRACT:

As a methodology, agent-based modeling and simulation is a youthful member of the social scientist's quiver of tools and techniques. The first systematic use of agents by social scientists began appearing some 20 years ago, largely driven by new ideas concerning 'complex systems' that were blossoming at the Santa Fe Institute. By every metric the field has exploded since then, with the number of publications, conferences, citations, and research expenditures all increasing exponentially. This rapid adoption has been driven by several factors, including the growth of computing power, the increasing ease of software development as specialized frameworks appear, the progressive availability of micro-data on individual people suitable for calibrating agent models, and advances in closely related fields, e.g., network science. However, as with any young methodology, there is today tremendous variance in the design, implementation, exercise, and ultimate use of agent computing across the social sciences, both positively (descriptively) and normatively (for policy purposes). In this talk I will describe some past successes of the approach, highlight important work that is underway, and flag apparent bottlenecks to the continuing expansion of agent approaches within the academic and policy communities. I will advance three arguments: (1) Unlike our colleagues in the natural sciences, social scientists have no way to leverage all the capabilities of modern computing architectures numerically with equation-based models, since we typically do not work with spatially-extended systems; the only way to fill up terabytes of memory is therefore with large numbers of cognitively-rich agents. (2) The new availability of 'administratively comprehensive' data (all the data in relevant domains) implies a radically modified role for statistics and econometrics, since the traditional focus on sampling issues and moment methods is irrelevant when all distributions are known empirically. (3) The social sciences are inherently multi-level: behavior at the individual level accretes to produce aggregate phenomena that then feed back to the agent level. Extant mathematical techniques for working across levels in the physical sciences and engineering (e.g., statistical mechanics, model aggregation theory) do not work well for social systems characterized by heterogeneous 'components', leaving computational approaches as the only way forward if we are really to understand how two people in a garage can grow into a $500 billion company and why 'novel' mortgages can create $20 trillion of financial losses.
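To make the third argument concrete, the sketch below (purely illustrative, not drawn from the talk) shows a minimal Granovetter-style threshold model in Python: heterogeneous agents each hold a private adoption threshold, individual choices aggregate into an adoption rate, and that aggregate feeds back to drive further individual choices. All names and parameter values here are hypothetical.

# Illustrative sketch only: a minimal threshold model of binary choice in which
# individual decisions aggregate into an adoption rate that feeds back on agents.
import random


def run_threshold_model(n_agents=1000, seed_fraction=0.1, max_steps=100, rng_seed=0):
    """Iterate micro decisions and macro feedback until a fixed point is reached."""
    rng = random.Random(rng_seed)

    # Micro level: each agent holds a private adoption threshold. Heterogeneity
    # is essential; identical thresholds would reduce to a representative agent.
    thresholds = [rng.gauss(0.25, 0.10) for _ in range(n_agents)]

    # A few unconditional early adopters seed the process.
    adopted = [rng.random() < seed_fraction for _ in range(n_agents)]

    for step in range(max_steps):
        # Macro level: the aggregate adoption rate emerges from individual choices...
        adoption_rate = sum(adopted) / n_agents

        # ...and feeds back to the micro level: an agent adopts once the
        # aggregate exceeds its own threshold.
        new_adopted = [a or adoption_rate >= t for a, t in zip(adopted, thresholds)]

        if new_adopted == adopted:  # fixed point: no agent changes its decision
            break
        adopted = new_adopted

    return sum(adopted) / n_agents, step


if __name__ == "__main__":
    final_rate, steps = run_threshold_model()
    print(f"final adoption rate: {final_rate:.2f} (reached in {steps} steps)")

With homogeneous thresholds these dynamics collapse to a one-dimensional map that can be analyzed without simulation; it is the heterogeneity of the 'components' that makes the computational, agent-by-agent approach necessary, which is the point of argument (3).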
