Emergent Mind

Fair Decisions Despite Imperfect Predictions

(1902.02979)
Published Feb 8, 2019 in cs.LG, cs.CY, and stat.ML

Abstract

Consequential decisions are increasingly informed by sophisticated data-driven predictive models. However, to consistently learn accurate predictive models, one needs access to ground truth labels. Unfortunately, in practice, labels may only exist conditional on certain decisions: if a loan is denied, there is not even an option for the individual to pay back the loan. Hence, the observed data distribution depends on how decisions are being made. In this paper, we show that in this selective labels setting, learning a predictor directly from the available labeled data alone is suboptimal in terms of both fairness and utility. To avoid this undesirable behavior, we propose to directly learn decision policies that maximize utility under fairness constraints and thereby take into account how decisions affect which data is observed in the future. Our results suggest the need for a paradigm shift in the context of fair machine learning from the currently prevalent idea of simply building predictive models from a single static dataset via risk minimization, to a more interactive notion of "learning to decide". In particular, such policies should not entirely neglect part of the input space, drawing connections to explore/exploit tradeoffs in reinforcement learning, data missingness, and potential outcomes in causal inference. Experiments on synthetic and real-world data illustrate the favorable properties of learning to decide in terms of utility and fairness.
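The selective labels setting described above can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it is a minimal toy example (with made-up scores and acceptance rules) showing why a deterministic threshold policy never observes outcomes below its threshold, while a stochastic policy that keeps exploring does:

```python
# Toy illustration of selective labels: an outcome (e.g. loan repayment) is
# only observed for individuals who receive a positive decision.
import random

random.seed(0)

def repays(score):
    # Hypothetical ground truth: repayment probability rises with the score.
    return random.random() < score

def observed_labels(policy, n=10000):
    # Only accepted applicants yield an observed outcome (selective labels).
    data = []
    for _ in range(n):
        score = random.random()
        if policy(score):
            data.append((score, repays(score)))
    return data

# A hard threshold never explores below 0.7; the stochastic policy accepts
# everyone with at least 10% probability, so it samples the whole score range.
deterministic = lambda s: s > 0.7
stochastic = lambda s: random.random() < max(s, 0.1)

det_data = observed_labels(deterministic)
expl_data = observed_labels(stochastic)

# The deterministic policy collects no labels below its threshold, so it can
# never learn whether applicants scoring in, say, (0.5, 0.7) would repay.
print(min(s for s, _ in det_data))   # always above 0.7
print(min(s for s, _ in expl_data))  # reaches well below 0.1
```

Any predictor retrained only on `det_data` inherits this blind spot, which is the failure mode the paper's "learning to decide" framing is meant to avoid.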
