Unsupervised Basis Function Adaptation for Reinforcement Learning

(arXiv:1703.01026)
Published Mar 3, 2017 in cs.AI, cs.LG, and stat.ML

Abstract

When using reinforcement learning (RL) algorithms to evaluate a policy it is common, given a large state space, to introduce some form of approximation architecture for the value function (VF). However, the exact form of this architecture can have a significant effect on the accuracy of the VF estimate, and determining a suitable approximation architecture is often a highly complex task. Consequently, there is a large amount of interest in the potential for allowing RL algorithms to adaptively generate approximation architectures. We investigate a method of adapting approximation architectures which uses feedback regarding the frequency with which an agent has visited certain states to guide which areas of the state space to approximate in greater detail. This method is "unsupervised" in the sense that it makes no direct reference to reward or to the VF estimate. We introduce an algorithm based upon this idea which adapts a state aggregation approximation architecture online. A common method of scoring a VF estimate is to weight the squared Bellman error of each state-action pair by the probability of that state-action pair occurring. Adopting this scoring method, and assuming $S$ states, we demonstrate theoretically that, provided (1) the number of cells $X$ in the state aggregation architecture is of order $\sqrt{S}\log_2{S}\ln{S}$ or greater, (2) the policy and transition function are close to deterministic, and (3) the prior for the transition function is uniformly distributed, our algorithm, used in conjunction with a suitable RL algorithm, can guarantee a score which is arbitrarily close to zero as $S$ becomes large. It is able to do this despite having only $O(X \log_2 S)$ space complexity and negligible time complexity. The results take advantage of certain properties of the stationary distributions of Markov chains.
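The scoring criterion the abstract describes, the squared Bellman error of each state-action pair weighted by the probability of that pair occurring, can be written out explicitly. The notation below ($\mu$ for the stationary state-action distribution, $\hat{Q}$ for the VF estimate, and $r$, $\gamma$, $P$, $\pi$ for the reward, discount factor, transition function, and policy) is our own shorthand; the abstract does not fix symbols:

$$\mathrm{score}(\hat{Q}) \;=\; \sum_{s \in \mathcal{S}} \sum_{a \in \mathcal{A}} \mu(s,a)\,\Big(\hat{Q}(s,a) \;-\; r(s,a) \;-\; \gamma \sum_{s'} P(s' \mid s, a)\, \hat{Q}\big(s', \pi(s')\big)\Big)^{2}$$

To make the "unsupervised" adaptation idea concrete, here is a minimal sketch of a visit-frequency-driven state aggregation scheme. It is an illustrative assumption, not the paper's actual algorithm: the class name `VisitBasedAggregator` and its split/merge rule are hypothetical. The only feedback it consumes is visit counts; reward and the VF estimate never appear, and the number of cells $X$ stays constant because every split is paired with a merge.

```python
import itertools
from collections import defaultdict

class VisitBasedAggregator:
    """Hypothetical sketch: adapt a state aggregation online using only
    visit frequencies (no reward, no value-function feedback)."""

    def __init__(self, num_states, num_cells):
        self.num_states = num_states
        self._next_id = itertools.count(num_cells)  # fresh cell ids
        # Start from a uniform partition of {0, ..., num_states - 1}.
        self.cell_of = [s * num_cells // num_states for s in range(num_states)]
        self.visits = defaultdict(int)  # visit count per cell

    def observe(self, state):
        # Unsupervised feedback: just count the visit.
        self.visits[self.cell_of[state]] += 1

    def adapt(self):
        # Split the most-visited cell in two (finer detail where the agent
        # actually goes) and merge the two least-visited cells, so the
        # total number of cells X is unchanged.
        counts = {c: self.visits.get(c, 0) for c in set(self.cell_of)}
        busiest = max(counts, key=counts.get)
        members = [s for s in range(self.num_states)
                   if self.cell_of[s] == busiest]
        if len(members) < 2 or len(counts) < 3:
            return  # nothing to split, or too few cells to merge
        a, b = next(self._next_id), next(self._next_id)
        half = len(members) // 2
        for s in members[:half]:
            self.cell_of[s] = a
        for s in members[half:]:
            self.cell_of[s] = b
        keep, drop = sorted((c for c in counts if c != busiest),
                            key=counts.get)[:2]
        for s in range(self.num_states):
            if self.cell_of[s] == drop:
                self.cell_of[s] = keep
        # Halve surviving counts so the partition keeps tracking the
        # agent's current behaviour rather than its whole history.
        self.visits = defaultdict(
            int, {c: v // 2 for c, v in counts.items()
                  if c not in (busiest, drop)})
        self.visits[keep] = (counts[keep] + counts[drop]) // 2
```

A caller would feed every visited state into `observe` and invoke `adapt` periodically (say, every few thousand steps) while a separate RL algorithm maintains one value estimate per cell. Note that this sketch stores an explicit per-state cell map, which costs $O(S)$ space; the paper's $O(X \log_2 S)$ bound implies a more compact representation of the partition, so the sketch illustrates only the adaptation logic, not the space-complexity result.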
