CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP (2405.15956v1)

Published 24 May 2024 in cs.AI, cs.LG, and cs.LO

Abstract: Machine learning models that automate decision-making are increasingly used in consequential areas such as loan approval, pretrial bail, and hiring. Unfortunately, most of these models are black boxes, i.e., they cannot reveal how they reach their prediction decisions. The need for transparency demands justification for such predictions. An affected individual may also desire an explanation to understand why a decision was made. Ethical and legal considerations further require informing the individual of the changes in input attribute(s) that could produce a desirable outcome. Our work focuses on the latter problem: generating counterfactual explanations while accounting for the causal dependencies between features. In this paper, we present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations from models produced by rule-based machine learning algorithms. We benchmark CFGs with the FOLD-SE model. Reaching the counterfactual state from the initial state is planned and achieved through a series of interventions. To validate our proposal, we show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered. More importantly, we show how CFGs navigates between these worlds, moving from the initial state, where we obtain an undesired outcome, to the imagined goal state, where we obtain the desired decision, while respecting the causal relationships among features.
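
Since the abstract describes generating counterfactuals from rule-based models via s(CASP), a minimal s(CASP)-style sketch may help illustrate the idea of imagining worlds where factual assumptions are altered. The rules, feature names, and the use of abducibles below are illustrative assumptions, not taken from the paper:

% Hypothetical FOLD-SE-style decision rule: a loan is approved when
% income is high and no default is on record.
approve_loan(X) :- high_income(X), not has_default(X).

% Hypothetical causal dependency between features: obtaining a degree
% leads to high income, so intervening on the degree also changes income.
high_income(X) :- has_degree(X).

% Treat the individual's feature facts as abducibles so that s(CASP)
% can imagine worlds where some or all factual assumptions are altered.
#abducible has_degree(john).
#abducible has_default(john).

% Goal-directed query: in which imagined worlds is john's loan approved?
?- approve_loan(john).

Running s(CASP) on such a program enumerates the consistent worlds in which the query succeeds, and the assumptions it abduces (e.g., has_degree(john) together with the absence of has_default(john)) play the role of the interventions needed to reach the desired outcome, with the justification tree serving as the explanation.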
