Abstract

Shafer's theory of belief and the Bayesian theory of probability are two alternative and mutually inconsistent approaches to modelling uncertainty in artificial intelligence. To help reduce the conflict between these two approaches, this paper reexamines expected utility theory, from which Bayesian probability theory is derived. Expected utility theory requires the decision maker to assign a utility to each decision conditioned on every possible event that might occur. But frequently the decision maker cannot foresee all the events that might occur; that is, one of the possible events is the occurrence of an unforeseen event. Once we acknowledge the existence of unforeseen events, we need some way of assigning utilities to decisions conditioned on those events. The commonsensical solution to this problem is to assign similar utilities to similar events. Implementing this commonsensical solution is equivalent to replacing Bayesian subjective probabilities over the space of foreseen and unforeseen events with random-set-theoretic probabilities over the space of foreseen events. This leads to an expected utility principle in which normalized variants of Shafer's commonalities play the role of subjective probabilities. Hence allowing for unforeseen events in decision analysis makes Bayesian probability theory much more similar to Shafer's theory.
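As a rough illustration of the abstract's final step, the sketch below computes expected utilities in which normalized Shafer commonalities stand in for Bayesian subjective probabilities. This is an assumption-laden reading of the abstract, not the paper's construction: the frame, the mass assignment, the utilities, and the restriction of commonalities to singleton events are all hypothetical choices made for the example.

```python
# A minimal sketch (not the paper's construction): expected utility
# where normalized singleton commonalities of a Dempster-Shafer mass
# function play the role of Bayesian subjective probabilities.

# Frame of discernment: the foreseen events (hypothetical example).
FRAME = ("rain", "sun")

# Hypothetical mass function over subsets of the frame. Mass on the
# whole frame represents belief left uncommitted among the foreseen
# events, e.g. because an unforeseen event may occur instead.
mass = {
    frozenset({"rain"}): 0.5,
    frozenset({"sun"}): 0.3,
    frozenset({"rain", "sun"}): 0.2,  # uncommitted mass
}

def commonality(event):
    """Shafer commonality of a singleton: Q({e}) = sum of m(B) over
    all focal sets B that contain e."""
    return sum(m for focal, m in mass.items() if event in focal)

# Normalize the singleton commonalities to sum to one, so they can
# serve as a probability distribution over the foreseen events.
q = {e: commonality(e) for e in FRAME}
z = sum(q.values())
prob = {e: qe / z for e, qe in q.items()}

# Hypothetical utilities: utility[(decision, event)].
utility = {
    ("umbrella", "rain"): 1.0,
    ("umbrella", "sun"): 0.4,
    ("no umbrella", "rain"): 0.0,
    ("no umbrella", "sun"): 1.0,
}

def expected_utility(decision):
    """Expected utility under the commonality-based probabilities."""
    return sum(prob[e] * utility[(decision, e)] for e in FRAME)

for d in ("umbrella", "no umbrella"):
    print(f"{d}: expected utility = {expected_utility(d):.3f}")
```

Restricting the commonalities to singletons is one plausible way to turn them into a normalized distribution over foreseen events; in the example above the uncommitted mass raises both singleton commonalities (Q(rain) = 0.7, Q(sun) = 0.5), and normalizing yields probabilities of about 0.583 and 0.417.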
