
Abstract

We propose the use of Agent-Based Models (ABMs) inside a reinforcement learning framework to better understand the relationship between automated decision-making tools, fairness-inspired statistical constraints, and the social phenomena giving rise to discrimination against sensitive groups. There have been many instances of discrimination arising from the application of algorithmic tools by public and private institutions, and until recently these practices have mostly gone unchecked. Given the large-scale transformation these new technologies elicit, a joint effort of social science and machine learning researchers is necessary. Much of the existing research focuses on the statistical properties of such algorithms and of the data they are trained on. We aim to complement that approach by studying the social dynamics in which these algorithms are deployed. We show how bias can accumulate and be reinforced through automated decision making, and we demonstrate the possibility of finding a fairness-inducing policy. We focus on the case of recidivism risk assessment by considering simplified models of arrest. We find that if we limit our attention to what is observed and manipulated by these algorithmic tools, we may deem some blatantly unfair practices fair, which illustrates the advantage of analyzing this otherwise elusive property with a system-wide model. We expect that introducing agent-based simulation techniques will strengthen collaboration with social scientists, yield a better understanding of the social systems affected by technology, and lead to concrete policy proposals that can be presented to policymakers for true systemic transformation.
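
As a rough intuition for the kind of feedback loop studied here, consider a minimal sketch of a simplified arrest model (an illustrative toy under assumed dynamics, not the model used in the paper): two equal-sized groups offend at the same true rate, policing effort is allocated in proportion to believed group risk, and beliefs are updated naively from arrest counts. All names and parameters (SIZE, TRUE_RATE, BUDGET, belief, and so on) are hypothetical choices for illustration.

```python
import random

random.seed(0)

SIZE = {"A": 500, "B": 500}    # two equal-sized groups
TRUE_RATE = 0.10               # identical true offense rate in both groups
BUDGET = 0.5                   # average fraction of each group policed

# Believed per-group risk, seeded with a small historical bias against B.
belief = {"A": 0.10, "B": 0.12}
arrests = {"A": 0, "B": 0}

for t in range(1, 201):
    total = belief["A"] + belief["B"]
    for g in SIZE:
        # Policing effort is allocated in proportion to believed risk.
        p_policed = min(1.0, 2 * BUDGET * belief[g] / total)
        for _ in range(SIZE[g]):
            # An arrest is recorded only when an agent both offends
            # and happens to be policed.
            if random.random() < TRUE_RATE and random.random() < p_policed:
                arrests[g] += 1
    for g in SIZE:
        # Naive update: arrests per capita are read as the true risk,
        # ignoring that heavier policing mechanically yields more arrests.
        belief[g] = arrests[g] / (SIZE[g] * t)

print({g: round(belief[g], 4) for g in SIZE})
print(arrests)  # B typically accumulates more arrests despite identical behavior
```

Because the naive update reads arrests per capita as true risk, the extra arrests produced by heavier policing of group B are taken as evidence of higher risk, so the initial disparity is never corrected and arrest counts accumulate disproportionately even though behavior is identical. Richer versions of this loop, with heterogeneous agents and learned policies, are the kind of dynamics the agent-based framework is meant to surface.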
