
Probability Aggregates in Probability Answer Set Programming (1304.1684v1)

Published 5 Apr 2013 in cs.AI

Abstract: Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g., expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of probability aggregates: a type that computes the expected value of a classical aggregate, e.g., the expected value of the minimum, and a type that computes the probability of a classical aggregate, e.g., the probability of the sum of values. In addition, we define a probability answer set semantics for DHPP with arbitrary probability aggregates, including monotone, antimonotone, and nonmonotone probability aggregates. We show that the proposed probability answer set semantics of DHPP subsumes both the original probability answer set semantics of DHPP and the classical answer set semantics of classical disjunctive logic programs with classical aggregates, and consequently subsumes the classical answer set semantics of the original disjunctive logic programs. We show that the proposed probability answer sets of DHPP with probability aggregates are minimal probability models and hence incomparable, which is an important property for nonmonotonic probability reasoning.
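The two aggregate types can be made concrete with a small example. The sketch below is not from the paper: it assumes a toy explicit distribution over possible worlds (the `values` dictionary and the `worlds()` helper are hypothetical, and independence between attributes is assumed purely for illustration; DHPP itself does not require it). It shows, in plain Python, the difference between the expected value of a classical aggregate (e.g. the expected minimum) and the probability that a classical aggregate satisfies a condition (e.g. the probability that the sum exceeds a threshold).

```python
from itertools import product

# Toy distribution: each attribute independently takes one of two values.
# (Independence is an assumption of this sketch, not of DHPP.)
values = {
    "a": [(1, 0.5), (3, 0.5)],   # attribute a is 1 or 3, each with prob 0.5
    "b": [(2, 0.7), (5, 0.3)],
    "c": [(4, 0.6), (6, 0.4)],
}

def worlds():
    """Enumerate (assignment, probability) pairs for the toy distribution."""
    names = list(values)
    for combo in product(*(values[n] for n in names)):
        assignment = {n: v for n, (v, _) in zip(names, combo)}
        prob = 1.0
        for _, p in combo:
            prob *= p
        yield assignment, prob

# Type 1: expected value of a classical aggregate (here, the minimum).
expected_min = sum(min(w.values()) * p for w, p in worlds())

# Type 2: probability of a classical aggregate (here, that the sum exceeds 10).
prob_sum_gt_10 = sum(p for w, p in worlds() if sum(w.values()) > 10)

print(f"E[min]      = {expected_min:.3f}")
print(f"P(sum > 10) = {prob_sum_gt_10:.3f}")
```

Under this toy distribution the script prints E[min] = 1.650 and P(sum > 10) = 0.350. In DHPP these quantities would instead be obtained from the probability answer sets of a program containing the corresponding probability aggregate atoms, rather than from an explicit enumeration of worlds as done here.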


