A closer look at temporal variability in dynamic online learning

(arXiv:2102.07666)
Published Feb 15, 2021 in cs.LG

Abstract

This work focuses on the setting of dynamic regret in the context of online learning with full information. In particular, we analyze regret bounds with respect to the temporal variability of the loss functions. By assuming that the sequence of loss functions does not vary much with time, we show that it is possible to achieve improved regret bounds compared to existing results. The key to our approach is to use the loss function itself (and not its gradient) during the optimization process. Building on recent advances in the analysis of implicit algorithms, we propose an adaptation of the implicit version of Online Mirror Descent to the dynamic setting. Our proposed algorithm is adaptive not only to the temporal variability of the loss functions, but also to the path length of the sequence of comparators when an upper bound on it is known. Furthermore, our analysis reveals that our bounds are tight and cannot be improved without further assumptions. Next, we show how our algorithm can be applied to the setting of learning with expert advice and to settings with composite loss functions. Finally, when an upper bound on the path length is not fixed beforehand, we show how to combine a greedy strategy with existing strongly adaptive algorithms to compete optimally against different sequences of comparators simultaneously.
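For context, the three quantities the abstract revolves around have standard definitions in the dynamic-regret literature. The notation below is assumed for illustration and may differ from the paper's exact symbols:

```latex
% Dynamic regret of the iterates x_1,...,x_T against comparators u_1,...,u_T:
R_T(u_1,\dots,u_T) = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t)

% Temporal variability of the loss sequence (small when the losses drift slowly):
V_T = \sum_{t=2}^{T} \max_{x \in \mathcal{X}} \bigl| f_t(x) - f_{t-1}(x) \bigr|

% Path length of the comparator sequence:
P_T = \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert
```

The phrase "use the loss function (and not its gradient)" refers to the implicit update: rather than linearizing f_t around the current iterate, implicit Online Mirror Descent solves a small regularized problem involving f_t itself. Below is a minimal sketch, assuming the Euclidean mirror map (so the Bregman divergence reduces to the squared distance) and a generic numerical inner solver; the function name, step-size schedule, and toy drifting losses are illustrative assumptions, not the paper's exact adaptive algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def implicit_omd_step(x_t, loss_t, eta_t):
    """One implicit OMD step under the Euclidean mirror map:
        x_{t+1} = argmin_x  loss_t(x) + ||x - x_t||^2 / (2 * eta_t).
    The full loss (not its linearization) enters the update, which is
    what lets the analysis exploit low temporal variability."""
    objective = lambda x: loss_t(x) + np.sum((x - x_t) ** 2) / (2.0 * eta_t)
    return minimize(objective, x0=x_t, method="L-BFGS-B").x

# Toy run: quadratic losses whose minimizers drift slowly, so the
# temporal variability V_T of the loss sequence stays small.
rng = np.random.default_rng(0)
x, target = np.zeros(2), np.array([1.0, -1.0])
for t in range(1, 51):
    target = target + 0.01 * rng.standard_normal(2)      # slow drift
    f_t = lambda z, c=target.copy(): 0.5 * np.sum((z - c) ** 2)
    x = implicit_omd_step(x, f_t, eta_t=1.0 / np.sqrt(t))
print(x)  # tracks the slowly moving minimizer
```

For quadratic losses the inner minimization has a closed form; in general it is one small convex problem per round, which is the extra computational price implicit updates pay relative to gradient-based OMD.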
