FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation (2404.05678v4)

Published 8 Apr 2024 in stat.ML, cs.CY, and cs.LG

Abstract: $\textit{Equalized odds}$, an important notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm's prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a flexible and efficient scheme to promote equalized odds under fairness conditions described by complex and multi-dimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.
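
The abstract describes an in-processing approach that couples adversarial learning with a conditional permutation of the sensitive attributes. Below is a minimal, hypothetical PyTorch sketch of that general recipe, not the authors' FairICP implementation: the helper `shuffle_within_groups` simply permutes the (possibly multi-dimensional) sensitive attributes among samples sharing the same outcome, a crude stand-in for the paper's inverse conditional permutation, and all names (`train_step`, `lam`), network sizes, and loss weights are illustrative assumptions.

```python
# Hypothetical sketch of adversarial training toward equalized odds.
# NOT the authors' FairICP code: the within-outcome-group shuffle below is a
# simple stand-in for the paper's inverse conditional permutation scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F


def shuffle_within_groups(a, y):
    """Permute rows of the sensitive attributes `a` only among samples sharing the
    same discrete outcome `y`, giving an approximate draw from P(A | Y)."""
    a_perm = a.clone()
    for label in y.unique():
        idx = (y == label).nonzero(as_tuple=True)[0]
        a_perm[idx] = a[idx[torch.randperm(idx.numel())]]
    return a_perm


def train_step(predictor, discriminator, opt_p, opt_d, x, a, y, lam=1.0):
    """One alternating update: the discriminator tries to tell observed
    (score, A, Y) tuples from permuted ones; the predictor minimizes its task
    loss while trying to make the two indistinguishable."""
    y_col = y.unsqueeze(1).float()
    score = predictor(x)                      # predicted risk score (logits)
    a_perm = shuffle_within_groups(a, y)
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)

    # --- discriminator update (predictor held fixed via detach) ---
    real = torch.cat([score.detach(), a, y_col], dim=1)
    fake = torch.cat([score.detach(), a_perm, y_col], dim=1)
    d_loss = F.binary_cross_entropy_with_logits(discriminator(real), ones) + \
             F.binary_cross_entropy_with_logits(discriminator(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- predictor update: fit the outcome while *increasing* the adversary's loss ---
    real = torch.cat([score, a, y_col], dim=1)
    fake = torch.cat([score, a_perm, y_col], dim=1)
    task_loss = F.binary_cross_entropy_with_logits(score, y_col)
    adv_loss = F.binary_cross_entropy_with_logits(discriminator(real), ones) + \
               F.binary_cross_entropy_with_logits(discriminator(fake), zeros)
    p_loss = task_loss - lam * adv_loss
    opt_p.zero_grad()
    p_loss.backward()
    opt_p.step()
    return task_loss.item(), d_loss.item()


if __name__ == "__main__":
    # Toy usage on synthetic data: 2-dimensional sensitive attribute, binary outcome.
    torch.manual_seed(0)
    n, d_x, d_a = 256, 5, 2
    x, a = torch.randn(n, d_x), torch.randn(n, d_a)
    y = (x[:, 0] + 0.5 * a[:, 0] + 0.1 * torch.randn(n) > 0).long()

    predictor = nn.Sequential(nn.Linear(d_x, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1 + d_a + 1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for _ in range(50):
        train_step(predictor, discriminator, opt_p, opt_d, x, a, y, lam=1.0)
```

In this sketch the discriminator is trained to distinguish observed (score, A, Y) tuples from ones where A has been permuted within outcome groups; if it cannot, the prediction carries no information about A beyond what Y already explains, which is the equalized-odds condition the abstract targets. The paper's actual permutation scheme for complex, multi-dimensional sensitive attributes is more involved than this simple group shuffle.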

