Sharper bounds for online learning of smooth functions of a single variable

(2105.14648)
Published May 30, 2021 in cs.LG, cs.DM, and stat.ML

Abstract

We investigate the generalization of the mistake-bound model to continuous real-valued single variable functions. Let $\mathcal{F}_q$ be the class of absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ with $||f'||_q \le 1$, and define $opt_p(\mathcal{F}_q)$ as the best possible bound on the worst-case sum of the $p^{\text{th}}$ powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for $q \ge 2$ that $opt_p(\mathcal{F}_q) = 1$ when $p \ge 2$ and $opt_p(\mathcal{F}_q) = \infty$ when $p = 1$. For $1 < p < 2$ with $p = 1+\epsilon$, the only known bound was $opt_p(\mathcal{F}_q) = O(\epsilon^{-1})$ from the same paper. We show for all $\epsilon \in (0, 1)$ and $q \ge 2$ that $opt_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$. We also show that $opt_{1+\epsilon}(\mathcal{F}_{\infty}) = \Theta(\epsilon^{-\frac{1}{2}})$.
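
To make the setting concrete, here is a minimal Python sketch of the online protocol the abstract describes: on each trial the learner sees a point $x_t \in [0, 1]$, predicts $\hat{y}_t$, then $f(x_t)$ is revealed and the learner pays $|\hat{y}_t - f(x_t)|^p$. The nearest-point predictor below is in the spirit of Kimber and Long's algorithm; the particular function `f` and query sequence `xs` are illustrative choices, not constructions from the paper.

```python
def run_trials(f, xs, p, predictor):
    """Online protocol: on each trial the learner sees x, predicts y_hat,
    then the true value f(x) is revealed and the learner pays
    |y_hat - f(x)|^p. Returns the cumulative loss over all trials."""
    history = []        # previously revealed (x, f(x)) pairs
    total_loss = 0.0
    for x in xs:
        y_hat = predictor(history, x)
        y = f(x)
        total_loss += abs(y_hat - y) ** p
        history.append((x, y))
    return total_loss

def nearest_point_predictor(history, x):
    """Predict the value observed at the closest previously seen point
    (0 before any feedback), in the spirit of Kimber and Long's algorithm."""
    if not history:
        return 0.0
    _, y = min(history, key=lambda pair: abs(pair[0] - x))
    return y

# Illustrative run: |f'| = 1 almost everywhere, so ||f'||_q <= 1 for every q,
# i.e. f is in F_q; the query sequence is a simple evenly spaced sweep.
f = lambda x: abs(x - 0.5)
xs = [k / 16 for k in range(17)]
print(run_trials(f, xs, p=1.5, predictor=nearest_point_predictor))
```

Here $p = 1.5$ corresponds to $\epsilon = 0.5$; the paper's result says that the best achievable bound on this cumulative loss, over all $f \in \mathcal{F}_q$ and all query sequences, is $\Theta(\epsilon^{-\frac{1}{2}})$.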
