Parameter-free Regret in High Probability with Heavy Tails

(arXiv:2210.14355)
Published Oct 25, 2022 in stat.ML and cs.LG

Abstract

We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret in high probability given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded-domain case, we cannot rely on straightforward martingale concentration due to exponentially large iterates produced by the algorithm. We develop new regularization techniques to overcome these problems. Overall, with probability at least $1-\delta$, for all comparators $\mathbf{u}$ our algorithm achieves regret $\tilde{O}(\|\mathbf{u}\| T^{1/\mathfrak{p}} \log(1/\delta))$ for subgradients with bounded $\mathfrak{p}$-th moments for some $\mathfrak{p} \in (1, 2]$.
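
For context, the guarantee can be stated in the standard online convex optimization setup (the notation below is ours for illustration, not quoted from the paper): at each round $t$ the algorithm plays $\mathbf{x}_t$, suffers a convex loss $\ell_t(\mathbf{x}_t)$, and observes only a stochastic subgradient estimate $g_t$ whose $\mathfrak{p}$-th moment is bounded, $\mathbb{E}[\|g_t\|^{\mathfrak{p}}] \le G^{\mathfrak{p}}$. The regret against a fixed comparator $\mathbf{u}$ is

$$R_T(\mathbf{u}) = \sum_{t=1}^{T} \big( \ell_t(\mathbf{x}_t) - \ell_t(\mathbf{u}) \big),$$

and the result says that, with probability at least $1-\delta$, $R_T(\mathbf{u}) \le \tilde{O}\big(\|\mathbf{u}\|\, T^{1/\mathfrak{p}} \log(1/\delta)\big)$ holds simultaneously for every $\mathbf{u}$, with no prior bound on $\|\mathbf{u}\|$ required; this is what "parameter-free" means here.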
