Robust Counterfactual Explanations in Machine Learning: A Survey (2402.01928v1)

Published 2 Feb 2024 in cs.LG and cs.AI

Abstract: Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order. In this survey, we review works in the rapidly growing area of robust CEs and perform an in-depth analysis of the forms of robustness they consider. We also discuss existing solutions and their limitations, providing a solid foundation for future developments.
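To make the abstract's two ideas concrete, here is a minimal, hypothetical sketch (not a method from the survey): for a linear classifier, the lowest-cost counterfactual explanation is the input's projection onto the decision boundary, nudged just across it — and a tiny shift in the model can invalidate that counterfactual, which is exactly the robustness failure the survey examines. The function name, the toy weights, and the shift size are all illustrative assumptions.

```python
import numpy as np

def linear_counterfactual(w, b, x, eps=1e-3):
    """Closest point (in L2) to x on the other side of the hyperplane w.x + b = 0.

    Illustrative sketch: project x onto the decision boundary, then step a
    small distance eps across it so the predicted class flips.
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    score = w @ x + b
    delta = -(score / (w @ w)) * w                 # projection onto the boundary
    nudge = eps * (-np.sign(score)) * w / np.linalg.norm(w)  # cross to the other side
    return x + delta + nudge

# Toy classifier: predict 1 iff 2*x0 + x1 - 3 > 0
w, b = np.array([2.0, 1.0]), -3.0
x = np.array([0.5, 0.5])                           # score = -1.5, predicted 0
x_cf = linear_counterfactual(w, b, x)
assert (w @ x_cf + b) > 0                          # counterfactual flips the prediction

# Fragility: a small model shift (e.g. retraining moves the bias by 0.01)
# invalidates the minimal-cost counterfactual.
assert (w @ x_cf + (b - 0.01)) <= 0                # CE no longer valid after the shift
```

Because the minimal-cost counterfactual sits arbitrarily close to the decision boundary, any perturbation of the model (or of the input) larger than `eps` can flip it back — robust-CE methods trade extra cost for a safety margin from the boundary.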

