
AGM Belief Revision, Semantically (2112.13557v2)

Published 27 Dec 2021 in cs.AI and cs.LO

Abstract: We establish a generic, model-theoretic characterization of belief revision operators implementing the paradigm of minimal change according to the seminal work by Alchourrón, Gärdenfors, and Makinson (AGM). Our characterization applies to all Tarskian logics, that is, all logics with a classical model-theoretic semantics, and hence to a wide variety of formalisms used in knowledge representation and beyond, including many for which a model-theoretic characterization has hitherto been lacking. Our starting point is the approach by Katsuno and Mendelzon (K&M), who provided such a characterization for propositional logic over finite signatures. We generalize K&M's approach to the setting of AGM-style revision over bases in arbitrary Tarskian logics, where a base may refer to one of the various ways of representing an agent's beliefs (such as belief sets, arbitrary or finite sets of sentences, or single sentences). Our first core result is a representation theorem providing a two-way correspondence between AGM-style revision operators and specific assignments: functions associating with every base a "preference" relation over interpretations, which must be total but is - in contrast to prior approaches - not always transitive. As our second core contribution, we provide a characterization of all logics for which our result can be strengthened to assignments producing transitive preference relations (as in K&M's original work). Alongside these main contributions, we discuss diverse variants of our findings as well as ramifications for other areas of belief revision theory.
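To make the model-theoretic perspective concrete, the following is a minimal sketch of K&M-style revision for propositional logic over a finite signature, using one specific (hypothetical choice of) assignment: Dalal's Hamming-distance preference. The atom names, the predicate-based formula encoding, and all function names are illustrative assumptions, not constructs from the paper; the paper's actual results concern arbitrary Tarskian logics and assignments whose relations need only be total.

```python
from itertools import product

# Illustrative sketch (not the paper's construction): K&M-style revision
# over a two-atom propositional signature, with Dalal's Hamming-distance
# assignment as one example of a faithful, transitive preference relation.

ATOMS = ("p", "q")

def interpretations():
    # All truth assignments over ATOMS.
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))]

def models(formula):
    # Models of a formula, encoded here as a Python predicate.
    return [i for i in interpretations() if formula(i)]

def hamming(i, j):
    # Number of atoms on which two interpretations disagree.
    return sum(i[a] != j[a] for a in ATOMS)

def dalal_rank(base_models):
    # Assignment: rank each interpretation by its minimal Hamming
    # distance to a model of the base (total and transitive).
    return lambda i: min(hamming(i, m) for m in base_models)

def revise(base, alpha):
    # Revision result: the rank-minimal models of the new sentence alpha.
    rank = dalal_rank(models(base))
    alpha_models = models(alpha)
    best = min(rank(i) for i in alpha_models)
    return [i for i in alpha_models if rank(i) == best]

# Base believes p AND q; new information: NOT p.
result = revise(lambda i: i["p"] and i["q"], lambda i: not i["p"])
# Minimal change retains q: the selected model sets p=False, q=True.
```

The sketch exhibits the two-way correspondence at the heart of the representation theorem in one direction: a preference assignment induces a revision operator by selecting minimal models of the input sentence.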

Authors (3)
  1. Faiq Miftakhul Falakh (4 papers)
  2. Sebastian Rudolph (31 papers)
  3. Kai Sauerwald (14 papers)
