
Efficient Tracking of Large Classes of Experts

(arXiv:1110.2755)
Published Oct 12, 2011 in cs.LG, cs.IT, and math.IT

Abstract

In the framework of prediction of individual sequences, the goal is to construct sequential prediction methods that perform nearly as well as the best expert from a given class. We consider prediction strategies that compete with the class of switching strategies, which segment a given sequence into several blocks and follow the advice of a different "base" expert in each block. As usual, the performance of the algorithm is measured by the regret, defined as the excess loss relative to the best switching strategy selected in hindsight for the particular sequence to be predicted. In this paper we construct prediction strategies of low computational cost for the case where the set of base experts is large. In particular, we provide a method that can transform any prediction algorithm $\mathcal{A}$ designed for the base class into a tracking algorithm. The resulting tracking algorithm takes advantage of the prediction performance and potential computational efficiency of $\mathcal{A}$: it can be implemented with time and space complexity only $O(n^{\gamma} \ln n)$ times larger than that of $\mathcal{A}$, where $n$ is the time horizon and $\gamma \ge 0$ is a parameter of the algorithm. With $\mathcal{A}$ properly chosen, our algorithm achieves a regret bound of optimal order for $\gamma > 0$, and only $O(\ln n)$ times larger than the optimal order for $\gamma = 0$, for all typical regret bound types we examined. For example, for predicting binary sequences with switching parameters under the logarithmic loss, our method achieves the optimal $O(\ln n)$ regret rate with time complexity $O(n^{1+\gamma}\ln n)$ for any $\gamma \in (0,1)$.
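The abstract does not spell out the paper's reduction, but the tracking problem it targets is classically handled by the Fixed-Share forecaster of Herbster and Warmuth, which maintains exponential weights over the base experts and redistributes a small fraction of weight each round so it can follow switches. The sketch below is that standard baseline, not the paper's method; the function name, the squared loss, and the parameters `eta` (learning rate) and `alpha` (switching rate) are illustrative choices:

```python
import numpy as np

def fixed_share(expert_preds, outcomes, eta=2.0, alpha=0.05):
    """Fixed-Share forecaster (Herbster & Warmuth, 1998): a classical
    baseline for competing with the best switching sequence of experts.

    expert_preds: (T, N) array, expert_preds[t, i] is expert i's
                  prediction in [0, 1] at round t
    outcomes:     length-T array of outcomes in {0, 1}
    Returns the forecaster's length-T array of predictions.
    """
    T, N = expert_preds.shape
    w = np.full(N, 1.0 / N)              # uniform prior over base experts
    preds = np.empty(T)
    for t in range(T):
        preds[t] = w @ expert_preds[t]   # weighted-average prediction
        # exponential-weights update (squared loss as a stand-in for
        # whatever bounded loss the base class is evaluated under)
        losses = (expert_preds[t] - outcomes[t]) ** 2
        w *= np.exp(-eta * losses)
        w /= w.sum()
        # "share" step: mix a fraction alpha of the weight back in
        # uniformly, so a new best expert can be picked up quickly
        # after a switch point
        w = (1 - alpha) * w + alpha / N
    return preds

if __name__ == "__main__":
    # toy demo: the outcome distribution switches regimes halfway through
    rng = np.random.default_rng(0)
    T = 200
    y = np.r_[rng.binomial(1, 0.9, T // 2), rng.binomial(1, 0.1, T // 2)]
    experts = np.column_stack([np.full(T, 0.9), np.full(T, 0.1)])
    p = fixed_share(experts, y)
    print("mean squared loss:", np.mean((p - y) ** 2))
```

Note that Fixed Share explicitly updates a weight for every base expert each round, which is infeasible when the base class is very large; the point of the paper is a black-box transformation of $\mathcal{A}$ whose time and space overhead is only $O(n^{\gamma} \ln n)$, avoiding that enumeration.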
