Disentangled Representation with Causal Constraints for Counterfactual Fairness (2208.09147v2)

Published 19 Aug 2022 in cs.LG, cs.AI, and cs.CY

Abstract: Much research has been devoted to the problem of learning fair representations; however, existing methods do not explicitly model the relationships between latent representations. In many real-world applications, there may be causal relationships among latent representations. Furthermore, most fair representation learning methods focus on group-level fairness and are based on correlations, ignoring the causal relationships underlying the data. In this work, we theoretically demonstrate that using structured representations enables downstream predictive models to achieve counterfactual fairness, and we then propose the Counterfactual Fairness Variational AutoEncoder (CF-VAE) to obtain structured representations with respect to domain knowledge. The experimental results show that the proposed method achieves better fairness and accuracy performance than the benchmark fairness methods.
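The "structured representations" the abstract refers to can be illustrated with a common device from causal representation learning: imposing a linear structural causal model on the latent space, so that latents respect a known DAG over factors (as in CausalVAE-style models). The sketch below is a hypothetical simplification for illustration only, not the paper's actual CF-VAE architecture; the function name, the linear-SCM assumption, and the example adjacency matrix are all mine.

```python
import numpy as np

def structure_latents(eps, A):
    """Map exogenous noise eps to causally structured latents z.

    Assumes a linear SCM on the latents: z = A^T z + eps, where
    A[i, j] != 0 encodes a latent edge z_i -> z_j and A describes a
    DAG. Solving for z gives z = (I - A^T)^{-1} eps.
    """
    d = A.shape[0]
    return np.linalg.solve(np.eye(d) - A.T, eps)

# Example DAG with one edge z0 -> z1 (e.g., a sensitive-attribute
# factor influencing another latent factor with weight 0.8).
A = np.array([[0.0, 0.8],
              [0.0, 0.0]])
eps = np.array([1.0, 0.5])

z = structure_latents(eps, A)
# z[0] = eps[0] = 1.0; z[1] = 0.8 * z[0] + eps[1] = 1.3
```

In a VAE, `eps` would come from the reparameterized encoder output, and downstream predictors could then be restricted to the latents that are not causal descendants of the sensitive attribute, which is the intuition behind using such structure for counterfactual fairness.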
