Abstract

Digital ads on social-media platforms play an important role in shaping access to economic opportunities. Our work proposes and implements a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities. Third-party auditing is important because it allows external parties to demonstrate the presence or absence of bias in social-media algorithms. Education is a domain with legal protections against discrimination and concerns about racial targeting, but bias induced by ad delivery algorithms has not been previously explored in this domain. Prior audits demonstrated discrimination in platforms' delivery of housing and employment ads to users. These audit findings supported legal action that prompted Meta to change its ad-delivery algorithms to reduce bias, but only in the domains of housing, employment, and credit. In this work, we propose a new methodology that allows us to measure racial discrimination in a platform's ad delivery algorithms for education ads. We apply our method to Meta using ads for real schools and observe the results of delivery. We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, raising legal and ethical concerns. Our results extend evidence of algorithmic discrimination to the education domain, showing that current bias mitigation mechanisms are narrow in scope and suggesting a broader role for third-party auditing of social media in areas where ensuring non-discrimination is important.

Figure: Partial screenshot of Meta's Ads Manager showing aggregate location data for ad recipients.

Overview

  • The paper introduces a novel methodology for auditing racial discrimination in the delivery of education ads by Meta’s algorithm, focusing on pairs of education institutions with distinct historical biases in student demographics.

  • The researchers' experiments revealed significant racial biases in ad delivery, notably that for-profit college ads were disproportionately shown to Black individuals, even when using neutral ad creatives.

  • The study underscores the need for digital platforms like Meta to enhance their bias mitigation practices, highlighting the broader implications of algorithmic bias in perpetuating social inequities.

Auditing for Racial Discrimination in the Delivery of Education Ads

The paper "Auditing for Racial Discrimination in the Delivery of Education Ads" by Basileal Imana, Aleksandra Korolova, and John Heidemann, explores the racial biases present in digital advertising systems, particularly focusing on education ads delivered via Meta’s algorithm. This paper not only introduces a novel methodology for auditing discrimination in the algorithm's delivery of education ads but also applies this methodology to uncover concrete evidence of racial biases that undermine equitable access to educational opportunities.

Methodology and Experimental Design

The researchers introduced a new third-party auditing method designed to evaluate racial bias specifically in the delivery of education ads. The novelty of their approach lies in the selection of pairs of education institutions with distinct historical biases in student demographics. For-profit colleges, which historically have a higher proportion of Black students, were paired with public colleges, typically having a higher proportion of White students. By using voter registration data from states such as North Carolina and Florida, the team constructed ad audiences that uniquely map user locations to race, facilitating precise checks for racial skew in ad delivery.
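
To make the audience construction concrete, here is a minimal Python sketch of the location-as-race-proxy idea the audit builds on. The voter-record format, ZIP codes, and function name are illustrative assumptions for exposition, not the authors' actual pipeline:

```python
# Hypothetical record format; real voter files from states such as
# North Carolina contain self-reported race alongside a street address.
voters = [
    {"id": 1, "zip": "27514", "race": "Black"},
    {"id": 2, "zip": "27514", "race": "White"},
    {"id": 3, "zip": "28801", "race": "White"},
]

# Assumed assignment of disjoint ZIP-code sets to each race.
zip_to_race = {"27514": "Black", "28801": "White"}

def build_proxy_audience(voters, zip_to_race):
    """Keep only voters whose race matches the race assigned to their
    ZIP code, so each location in the audience contains a single race.
    Meta's delivery reports break impressions down by location but not
    by race, so this construction lets location stand in for race."""
    return [v for v in voters if zip_to_race.get(v["zip"]) == v["race"]]

audience = build_proxy_audience(voters, zip_to_race)
print(audience)  # voters 1 and 3: one race per location
```

Uploading such a list as a custom audience means the platform's per-location delivery statistics directly reveal the racial breakdown of ad recipients, without the platform ever reporting race.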

Key Findings and Results

The experimental results are both robust and telling:

  • Neutral Ad Creatives: The study first employed neutral ad creatives to control for effects of creative choices. Six experiments with neutral creatives showed that for-profit school ads were delivered to a higher percentage of Black individuals than public school ads, with statistical significance in the majority of the experiments (see the test sketch after this list). These findings under neutral conditions point to biases embedded in the ad delivery algorithms themselves.
  • Realistic Ad Creatives: When realistic ad creatives from actual school advertisements were used, the racial skew in ad delivery was amplified. This outcome aligns with prior work suggesting that visuals, such as images of faces, can significantly influence ad delivery. All experimental pairs showed statistically significant bias, underscoring the role of both ad content and platform algorithms in disproportionately shaping exposure.
  • Predatory Practices: Expanding their scope, the researchers further tested ads from for-profit colleges previously fined for predatory practices. Again, they found that ads for these institutions were delivered disproportionately to Black individuals, raising significant ethical and legal concerns.
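
The significance check behind these findings can be sketched as a two-proportion z-test comparing the fraction of each paired ad's impressions that reached Black-proxy locations. This is a minimal sketch with assumed, illustrative counts; the paper's exact statistical procedure and data are not reproduced here:

```python
from math import sqrt

def delivery_skew_z(black_fp, total_fp, black_pub, total_pub):
    """Two-proportion z-test: does the for-profit ad reach Black-proxy
    locations at a significantly higher rate than its paired
    public-school ad? A large positive z indicates skew toward
    Black recipients."""
    p1 = black_fp / total_fp           # for-profit ad's Black fraction
    p2 = black_pub / total_pub         # public-school ad's Black fraction
    p = (black_fp + black_pub) / (total_fp + total_pub)  # pooled rate
    se = sqrt(p * (1 - p) * (1 / total_fp + 1 / total_pub))
    return (p1 - p2) / se

# Illustrative counts only, not figures from the paper:
z = delivery_skew_z(black_fp=620, total_fp=1000,
                    black_pub=480, total_pub=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

Because both ads in a pair run simultaneously against the same audience, a significant difference in delivered proportions can be attributed to the platform's delivery optimization rather than to the advertisers' targeting choices.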

Implications

Practical Implications: The findings suggest that Meta’s ad delivery algorithms can perpetuate and even amplify historical racial biases in educational advertising. Given the critical role of education in shaping long-term personal and professional trajectories, this skew in ad delivery can entrench broader social inequities. The evidence points to the need for platforms like Meta to expand their bias mitigation mechanisms beyond housing, employment, and credit ads to include education.

Theoretical Implications: This study broadens our understanding of algorithmic bias, highlighting the need for comprehensive frameworks that address bias across sectors. It supports the hypothesis that biases in training data can propagate through machine learning algorithms and lead to discriminatory outcomes.

Future Developments: These findings underscore an urgent call for platforms to integrate more transparent and equitable practices into their ad delivery systems. Future research may explore similar biases in other domains such as healthcare, insurance, and public accommodations. Platforms also need to grant independent researchers the access required to scrutinize their algorithms comprehensively.

Conclusion

This paper contributes significantly to the body of knowledge surrounding algorithmic fairness and discrimination, particularly in the context of education. It provides convincing empirical evidence that digital platforms' ad delivery systems can perpetuate racial biases, skewing access to life-shaping opportunities. These insights call for revisiting and restructuring existing auditing frameworks to foster fairer, more transparent algorithmic systems and to ensure non-discrimination across all critical domains.
