On Exploiting Hitting Sets for Model Reconciliation (2012.09274v3)

Published 16 Dec 2020 in cs.AI and cs.LO

Abstract: In human-aware planning, a planning agent may need to explain to a human user why its plan is optimal. A popular approach for doing this is called model reconciliation, where the agent tries to reconcile the differences between its model and the human's model so that the plan is also optimal in the human's model. In this paper, we present a logic-based framework for model reconciliation that extends beyond the realm of planning. More specifically, given a knowledge base $KB_1$ entailing a formula $\varphi$ and a second knowledge base $KB_2$ not entailing it, model reconciliation seeks an explanation, in the form of a cardinality-minimal subset of $KB_1$, whose integration into $KB_2$ makes the entailment possible. Our approach, based on ideas originating in the context of inconsistency analysis, exploits the existing hitting set duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes) to identify an appropriate explanation. However, unlike those works, which target inconsistent formulas and assume a single knowledge base, here MCSes and MUSes are computed over two distinct knowledge bases. We conclude the paper with an empirical evaluation of the newly introduced approach on planning instances, where it outperforms an existing state-of-the-art solver, and on generic non-planning instances from recent SAT competitions, for which no other solver exists.
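
The task the abstract defines can be made concrete with a small sketch. The following brute-force illustration (not the paper's hitting-set-based algorithm) searches subsets of $KB_1$ in order of increasing cardinality and returns the first one whose addition lets $KB_2$ entail $\varphi$. It assumes the knowledge bases are sets of CNF clauses, that $\varphi$ is a single literal (so $\neg\varphi$ is one unit clause), and uses the PySAT toolkit; the helper names `entails` and `reconcile` and the toy inputs are illustrative assumptions, not from the paper.

```python
from itertools import combinations
from pysat.solvers import Glucose3

def entails(kb, neg_phi):
    # KB |= phi  iff  KB together with the clauses of ~phi is unsatisfiable.
    with Glucose3(bootstrap_with=kb + neg_phi) as solver:
        return not solver.solve()

def reconcile(kb1, kb2, neg_phi):
    # Enumerate subsets E of KB1 by increasing size; the first E found with
    # KB2 u E |= phi is therefore cardinality-minimal.
    for size in range(len(kb1) + 1):
        for explanation in combinations(kb1, size):
            if entails(kb2 + list(explanation), neg_phi):
                return list(explanation)
    return None  # no subset of KB1 restores the entailment

# Toy instance: phi = x2, so ~phi is the unit clause [-2].
kb1 = [[1], [-1, 2]]   # KB1 = { x1, x1 -> x2 }: entails x2
kb2 = [[-1, 2]]        # KB2 = { x1 -> x2 }: does not entail x2
print(reconcile(kb1, kb2, neg_phi=[[-2]]))  # -> [[1]]
```

The paper's contribution is precisely to avoid this exponential subset enumeration by exploiting the MCS/MUS hitting set duality; the sketch only pins down the problem statement.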
