
Abstract

Many real-world decision problems involve the interaction of multiple self-interested agents with limited sensing ability. The partially observable stochastic game (POSG) provides a mathematical framework for posing such problems; however, solving a POSG requires difficult reasoning about two critical factors: (1) the information revealed by partial observations and (2) the decisions other agents make. In the single-agent case, partially observable Markov decision process (POMDP) planning can efficiently address partial observability with particle filtering. In the multi-agent case, imperfect-information game solution methods account for other agents' decisions, but preclude belief approximation. We propose a unifying framework that combines POMDP-inspired state distribution approximation with game-theoretic equilibrium search on information sets. This approach enables online planning in POSGs with very large state spaces, paving the way for reliable autonomous interaction in real-world physical environments and complementing offline multi-agent reinforcement learning. Experiments on several zero-sum examples show that the new framework computes solutions for problems with both small and large state spaces.
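The abstract cites particle filtering as the standard way POMDP planners approximate the belief over hidden states. As a minimal sketch of that idea (not the paper's algorithm), the following bootstrap particle-filter belief update assumes hypothetical `transition_sample` and `observation_likelihood` functions standing in for a generative model of the environment:

```python
import random

def particle_filter_update(particles, action, observation,
                           transition_sample, observation_likelihood):
    """One bootstrap particle-filter belief update.

    particles: list of sampled states approximating the current belief.
    transition_sample(s, a) -> s': samples a successor state
        (hypothetical generative transition model).
    observation_likelihood(o, s', a) -> float: probability of seeing o
        after reaching s' via a (hypothetical observation model).
    """
    # Propagate each particle through the black-box transition model.
    propagated = [transition_sample(s, action) for s in particles]

    # Weight particles by how well they explain the received observation.
    weights = [observation_likelihood(observation, sp, action)
               for sp in propagated]
    if sum(weights) == 0.0:
        # Particle deprivation: no particle explains the observation.
        # A practical planner would reinvigorate the particle set here.
        return propagated

    # Resample with replacement in proportion to the weights.
    return random.choices(propagated, weights=weights, k=len(particles))
```

In the multi-agent setting the abstract describes, a filtered particle set like this would serve as the approximate state distribution at each information set, with the game-theoretic equilibrium search then operating over those approximate beliefs rather than exact ones.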
