
Towards Unifying Logical Entailment and Statistical Estimation (2202.13406v1)

Published 27 Feb 2022 in cs.AI

Abstract: This paper gives a generative model of the interpretation of formal logic for data-driven logical reasoning. The key idea is to represent the interpretation as the likelihood of a formula being true given a model of formal logic. Using this likelihood, Bayes' theorem gives the posterior probability of the model being the case given the formula. The posterior represents an inverse interpretation of formal logic that seeks models making the formula true. Together, the likelihood and posterior yield a Bayesian learning process that gives the probability of the conclusion being true in the models where all the premises are true. This paper examines the statistical and logical properties of this Bayesian learning and shows that the generative model provides a unified theory of several different types of reasoning in logic and statistics.
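
The construction described in the abstract can be illustrated concretely for propositional logic. The sketch below is not taken from the paper: it assumes a noiseless indicator likelihood (a formula has likelihood 1 in models that satisfy it and 0 otherwise), a uniform prior over truth assignments, and hypothetical helper names (`models`, `likelihood`, `posterior`, `prob_conclusion`). Under these assumptions, the Bayesian probability of a conclusion given premises reduces to the fraction of premise-satisfying models that also satisfy the conclusion, so it equals 1 exactly when classical entailment holds.

```python
from itertools import product

# A "model" is a truth assignment to the propositional variables.
# Formulas are represented as functions from a model to True/False.

def models(variables):
    """Enumerate all truth assignments over the given variables."""
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def likelihood(formula, model):
    """P(formula is true | model): an indicator in this noiseless sketch."""
    return 1.0 if formula(model) else 0.0

def posterior(premises, variables):
    """P(model | premises) via Bayes' theorem, assuming a uniform prior."""
    prior = 1.0 / (2 ** len(variables))
    weighted = []
    for m in models(variables):
        w = prior
        for phi in premises:
            w *= likelihood(phi, m)  # zero out models falsifying a premise
        weighted.append((m, w))
    z = sum(w for _, w in weighted)
    if z == 0.0:
        return []  # premises are jointly unsatisfiable
    return [(m, w / z) for m, w in weighted]

def prob_conclusion(conclusion, premises, variables):
    """Probability that the conclusion is true in models where all premises hold."""
    return sum(w * likelihood(conclusion, m)
               for m, w in posterior(premises, variables))

# Usage: premises {p, p -> q}, conclusion q.
variables = ["p", "q"]
p = lambda m: m["p"]
p_implies_q = lambda m: (not m["p"]) or m["q"]
q = lambda m: m["q"]
print(prob_conclusion(q, [p, p_implies_q], variables))  # -> 1.0
```

With premises {p, p -> q} and conclusion q, the only model with nonzero posterior weight makes q true, so the computed probability is 1.0, matching classical modus ponens.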
