Abstract

One often encounters the curse of dimensionality when applying dynamic programming to determine optimal policies for controlled Markov chains. In this paper, we provide a linear programming method to construct sub-optimal policies, along with a bound on the deviation of such a policy from the optimum. The state space is partitioned and the optimal cost-to-go, or value function, is approximated by a constant over each partition. By minimizing a non-negative cost function defined on the partitions, one can construct an approximate value function which is also an upper bound on the optimal value function of the original Markov Decision Process (MDP). As a key result, we show that this approximate value function is independent of the non-negative cost function (or state-dependent weights, as it is referred to in the literature) and, moreover, that it is the least upper bound one can obtain once the partitions are specified. Furthermore, we show that the restricted system of linear inequalities also embeds a family of MDPs of lower dimension, one of which can be used to construct a lower bound on the optimal value function. The construction of the lower bound requires the solution of a combinatorial problem. We apply the linear programming approach to a perimeter surveillance stochastic optimal control problem and obtain numerical results that corroborate the efficacy of the proposed methodology.
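The state-aggregation linear program sketched in the abstract can be illustrated on a toy problem. The following is a minimal sketch, not code from the paper: it assumes a small random discounted MDP in a reward-maximization convention (so that feasible points of the Bellman inequalities dominate the optimal value function; the paper's cost conventions and restricted LP may differ), with made-up sizes `nS`, `nA`, discount `gamma`, a block partition, and `scipy.optimize.linprog` as the solver, none of which come from the paper's perimeter-surveillance model. It restricts the value function to be constant on each partition, checks that the LP solution upper-bounds the exact value function, and re-solves with different positive weights to illustrate the weight-independence claim.

```python
# Hypothetical sketch (not code from the paper): state-aggregation approximate LP
# for a small random discounted reward-maximization MDP.  The value function is
# forced to be constant over each partition; the LP constraints enforce
# v >= (Bellman backup of v), so any feasible v upper-bounds the optimal value.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA, gamma = 12, 3, 0.9            # illustrative sizes, not the paper's model

# Random transition kernel P[a, s, s'] and rewards r[s, a].
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((nS, nA))

# Partition the state space into K aggregates (here: contiguous blocks).
K = 4
part = np.repeat(np.arange(K), nS // K)   # part[s] = partition index of state s

def solve_aggregated_lp(w):
    """Minimize sum_s w(s) * v_{part(s)} subject to the Bellman inequalities
    v_{part(s)} >= r(s,a) + gamma * sum_s' P(s'|s,a) * v_{part(s')}."""
    c = np.zeros(K)
    for s in range(nS):
        c[part[s]] += w[s]
    A_ub, b_ub = [], []                   # constraints rewritten as A_ub @ v <= b_ub
    for s in range(nS):
        for a in range(nA):
            row = np.zeros(K)
            row[part[s]] -= 1.0
            for sp in range(nS):
                row[part[sp]] += gamma * P[a, s, sp]
            A_ub.append(row)
            b_ub.append(-r[s, a])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * K)
    return res.x

v_agg = solve_aggregated_lp(np.ones(nS))
v_upper = v_agg[part]                     # piecewise-constant approximate value

# Exact optimal value function via value iteration, for comparison.
V = np.zeros(nS)
for _ in range(2000):
    V = (r + gamma * np.einsum("asp,p->sa", P, V)).max(axis=1)

print("upper bound holds:   ", bool(np.all(v_upper >= V - 1e-8)))

# Key claim in the abstract: the aggregated solution does not depend on the
# positive weights.  Re-solving with random positive weights gives the same v.
v_agg2 = solve_aggregated_lp(rng.uniform(0.1, 1.0, size=nS))
print("weight-independent:  ", bool(np.allclose(v_agg, v_agg2, atol=1e-6)))
```

In this convention the restricted feasible set has a least element (the fixed point of the aggregated Bellman-type operator), which is why minimizing any positive-weighted objective recovers the same, tightest piecewise-constant upper bound once the partitions are fixed.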
