Online Learning of Smooth Functions (2301.01434v1)
Abstract: In this paper, we study the online learning of real-valued functions where the hidden function is known to have certain smoothness properties. Specifically, for $q \ge 1$, let $\mathcal F_q$ be the class of absolutely continuous functions $f: [0,1] \to \mathbb R$ such that $\|f'\|_q \le 1$. For $q \ge 1$ and $d \in \mathbb Z^+$, let $\mathcal F_{q,d}$ be the class of functions $f: [0,1]^d \to \mathbb R$ such that any function $g: [0,1] \to \mathbb R$ formed by fixing all but one parameter of $f$ is in $\mathcal F_q$. For any class of real-valued functions $\mathcal F$ and $p>0$, let $\text{opt}_p(\mathcal F)$ be the best upper bound on the sum of $p^{\text{th}}$ powers of absolute prediction errors that a learner can guarantee in the worst case. In the single-variable setup, we find new bounds for $\text{opt}_p(\mathcal F_q)$ that are sharp up to a constant factor. We show for all $\varepsilon \in (0, 1)$ that $\text{opt}_{1+\varepsilon}(\mathcal F_{\infty}) = \Theta(\varepsilon^{-\frac{1}{2}})$ and $\text{opt}_{1+\varepsilon}(\mathcal F_q) = \Theta(\varepsilon^{-\frac{1}{2}})$ for all $q \ge 2$. We also show for $\varepsilon \in (0,1)$ that $\text{opt}_2(\mathcal F_{1+\varepsilon})=\Theta(\varepsilon^{-1})$. In addition, we obtain new exact results by proving that $\text{opt}_p(\mathcal F_q)=1$ for $q \in (1,2)$ and $p \ge 2+\frac{1}{q-1}$. In the multi-variable setup, we establish inequalities relating $\text{opt}_p(\mathcal F_{q,d})$ to $\text{opt}_p(\mathcal F_q)$ and show that $\text{opt}_p(\mathcal F_{\infty,d})$ is infinite when $p<d$ and finite when $p>d$. We also obtain sharp bounds on learning $\mathcal F_{\infty,d}$ for $p < d$ when the number of trials is bounded.
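To make the protocol behind $\text{opt}_p$ concrete, here is a minimal Python sketch, not taken from the paper: on each trial the adversary reveals a query point, the learner predicts $f(x)$, the true value is revealed, and the learner pays the $p^{\text{th}}$ power of its absolute error. The midpoint strategy shown for $\mathcal F_\infty$ (i.e., 1-Lipschitz targets, since $\|f'\|_\infty \le 1$) is an illustrative assumption, not the authors' algorithm; the function and variable names are likewise hypothetical.

```python
def feasible_interval(x, history):
    """Tightest interval for f(x) consistent with the 1-Lipschitz
    constraint |f(x) - f(xi)| <= |x - xi| and all past observations."""
    lo = max((y - abs(x - xi) for xi, y in history), default=-float("inf"))
    hi = min((y + abs(x - xi) for xi, y in history), default=float("inf"))
    return lo, hi

def run_protocol(points, f, p):
    """Play the online game on the given query points against target f;
    return the sum of p-th powers of absolute prediction errors."""
    history, total = [], 0.0
    for x in points:
        lo, hi = feasible_interval(x, history)
        # Midpoint of the feasible interval (0.0 on the first trial,
        # when no observation constrains f yet).
        guess = 0.0 if lo == -float("inf") else (lo + hi) / 2
        y = f(x)  # adversary reveals the true value
        total += abs(guess - y) ** p
        history.append((x, y))
    return total

# Example: a 1-Lipschitz target in F_infinity, evaluated with p = 2.
print(run_protocol([0.5, 0.25, 0.75, 0.125], lambda x: abs(x - 0.3), p=2))
```

The quantity $\text{opt}_p(\mathcal F)$ is the worst case over query sequences and consistent targets of the best such total loss, so a simulation like this only exhibits upper-bound behavior for one particular strategy.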