Abstract

Colleges and universities are increasingly turning to algorithms that predict college-student success to inform decisions about admissions, budgeting, and student-success interventions. Because predictive algorithms rely on historical data, they capture societal injustices, including racism. A model that includes racial categories may predict less favorable outcomes for racially minoritized students. In this study, we explore bias in education data by modeling bachelor's degree attainment with several machine-learning approaches. We also evaluate the utility of leading bias-mitigating techniques in addressing this unfairness. Using nationally representative data from the Education Longitudinal Study of 2002, we demonstrate how models incorporating commonly used features to predict college-student success produce racially biased results.
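The kind of racial bias the abstract describes is typically audited with group-fairness metrics. As a hypothetical illustration (not the paper's actual method or data), the sketch below computes a simple equalized-odds-style disparity: the gap in true-positive rate (correctly predicted degree attainment) between two demographic groups. All names and the toy arrays are invented for this example.

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Absolute gap in true-positive rate between two groups (0 and 1).

    A large gap means the model misses successful outcomes for one
    group more often than the other, an equalized-odds-style disparity.
    """
    rates = []
    for g in (0, 1):
        # Members of group g who actually attained the outcome.
        mask = (group == g) & (y_true == 1)
        rates.append(y_pred[mask].mean())
    return abs(rates[0] - rates[1])

# Toy data: both groups have three true positives, but the model
# recovers all of group 0's and only one of group 1's.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(tpr_gap(y_true, y_pred, group))  # group 0 TPR = 1.0, group 1 TPR ≈ 0.33
```

Bias-mitigation techniques of the kind the study evaluates (e.g., reweighting or post-processing) aim to shrink gaps like this one without unduly sacrificing overall accuracy.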
