
Debiasing Neural Retrieval via In-batch Balancing Regularization (2205.09240v1)

Published 18 May 2022 in cs.IR, cs.AI, and cs.CY

Abstract: People frequently interact with information retrieval (IR) systems; however, IR models exhibit biases and discrimination towards various demographics. In-processing fair ranking methods provide a trade-off between accuracy and fairness by adding a fairness-related regularization term to the loss function. However, there has been no intuitive objective function that depends on click probability and user engagement to directly optimize towards this goal. In this work, we propose In-Batch Balancing Regularization (IBBR) to mitigate the ranking disparity among subgroups. In particular, we develop a differentiable \textit{normed Pairwise Ranking Fairness} (nPRF) and leverage the T-statistic of nPRF over subgroups as a regularization term to improve fairness. Empirical results with BERT-based neural rankers on the MS MARCO Passage Retrieval dataset, using the human-annotated non-gendered queries benchmark \citep{rekabsaz2020neural}, show that our IBBR method with nPRF achieves significantly less bias with minimal degradation in ranking performance compared with the baseline.
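The abstract describes adding a T-statistic over per-subgroup fairness scores as a regularization term to the ranking loss. The sketch below illustrates that general shape only; it is not the authors' implementation. The nPRF computation itself is omitted (the `scores` array stands in for per-document nPRF values), and the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic between two samples (unequal variances)."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def ibbr_loss(task_loss, scores, groups, lam=0.1):
    """Illustrative in-batch balancing regularization.

    task_loss : scalar ranking loss for the batch
    scores    : per-document fairness scores (stand-in for nPRF values)
    groups    : 0/1 subgroup labels for each document in the batch
    lam       : hypothetical regularization weight
    """
    a, b = scores[groups == 0], scores[groups == 1]
    t = welch_t(a, b)
    # Penalize the magnitude of the between-subgroup disparity
    return task_loss + lam * abs(t)

# Example: a batch where subgroup 0 is scored much higher than subgroup 1
scores = np.array([0.9, 0.8, 0.2, 0.1])
groups = np.array([0, 0, 1, 1])
print(ibbr_loss(1.0, scores, groups))  # larger than the bare task loss of 1.0
```

In a neural ranker the same quantity would be computed with differentiable tensor operations so the regularizer contributes gradients; this NumPy version only shows the arithmetic.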

Authors (7)
  1. Yuantong Li (11 papers)
  2. Xiaokai Wei (14 papers)
  3. Zijian Wang (99 papers)
  4. Shen Wang (111 papers)
  5. Parminder Bhatia (50 papers)
  6. Xiaofei Ma (31 papers)
  7. Andrew Arnold (14 papers)
Citations (4)
