This chapter discusses the regulation of artificial intelligence (AI) from the vantage point of political economy. By “political economy” I mean a perspective emphasizing that different people and actors in society have divergent interests and unequal access to resources and power. By “artificial intelligence” I mean the construction of autonomous systems that maximize some notion of reward. The construction of such systems typically draws on the tools of machine learning and optimization.
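One common way to formalize this definition (an illustrative choice of notation, not the only possible one) is to describe such a system as choosing a policy $\pi$ that maps observed data $x$ to actions $a$, so as to maximize expected reward:

```latex
\pi^{*} \in \arg\max_{\pi} \; \mathbb{E}\big[\, r(a, x) \;\big|\; a = \pi(x) \,\big],
```

where $r(a, x)$ is a single, measurable reward function. The premises below turn on the fact that $r$ is singular, while the individuals affected by the actions $a$ evaluate them by their own, divergent objectives.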
AI and machine learning are used in an ever wider array of socially consequential settings. These include labor markets, education, criminal justice, health, banking, housing, as well as the curation of information by search engines, social networks, and recommender systems. There is a need for public debates about desirable directions of technical innovation, the use of these technologies, and the constraints to be imposed on them. In this chapter, I review some frameworks to help structure such debates.
The discussion in this chapter is opinionated and based on the following premises:
- AI concerns the construction of systems which maximize a measurable objective (reward). Such systems take data as an input, and produce chosen actions as an output.
- Maximization of a single objective by autonomous systems takes place in a social world where different individuals have divergent objectives, and these objectives might stand in conflict. Evaluated in terms of these divergent objectives, the actions and policies chosen by AI systems (almost) always generate winners and losers.
- Going from individual-level assessments of gains and losses to society-level assessments requires aggregation, which trades off gains and losses across individuals. To normatively evaluate AI, as well as proposed regulations, we need to assess the resulting individual gains and losses explicitly, and to aggregate them explicitly across individuals.
- The social issues raised by AI, including questions of fairness, privacy, value alignment, accountability, and automation, can only be resolved through democratic control of algorithm objectives, and of the means to obtain them: data and computational infrastructure. Democratic control requires public debate and binding collective decision-making, at many different levels of society.
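The premises above can be illustrated with a minimal sketch. All names, utilities, and reward values below are hypothetical, chosen only to show the mechanics: a system maximizes a single reward, individuals evaluate the chosen action by their own divergent utilities, and a social planner aggregates the resulting gains and losses.

```python
# Illustrative sketch (all names and numbers hypothetical): an AI system
# maximizes a single reward over candidate actions, while individuals
# evaluate those actions by divergent utilities.

# Candidate actions, and the single reward the system maximizes.
actions = ["A", "B"]
reward = {"A": 1.0, "B": 0.8}

# Each individual's utility from each action (divergent objectives).
utility = {
    "worker":   {"A": -0.5, "B": 0.4},
    "consumer": {"A": 0.9,  "B": 0.1},
}

# The system picks the action that maximizes its own reward.
chosen = max(actions, key=lambda a: reward[a])

# Individual gains and losses relative to the alternative action:
# winners and losers emerge from a single-objective choice.
alternative = "B" if chosen == "A" else "A"
gains = {i: u[chosen] - u[alternative] for i, u in utility.items()}

def social_welfare(action, weights=None):
    """Aggregate individual utilities with (possibly unequal) weights;
    the unweighted sum is the utilitarian benchmark."""
    weights = weights or {i: 1.0 for i in utility}
    return sum(weights[i] * utility[i][action] for i in utility)

print(chosen)                   # the reward-maximizing action
print(gains)                    # the worker loses, the consumer wins
print(social_welfare(chosen))   # aggregate welfare of the chosen action
```

In this toy example the reward-maximizing action differs from the welfare-maximizing one, which is precisely why the choice of objective, and of the aggregation weights, is a political question rather than a purely technical one.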
My discussion draws on concepts and references from machine learning, economics, and social choice theory. I touch on several debates regarding the ethics and social impact of artificial intelligence, without any pretense of doing justice to the vast and growing literature on these topics; instead, my goal is to give an internally coherent and principled account.