Smoothed Online Optimization with Unreliable Predictions

(2202.03519)
Published Feb 7, 2022 in cs.LG and cs.DS

Abstract

We examine the problem of smoothed online optimization, where a decision maker must sequentially choose points in a normed vector space to minimize the sum of per-round, non-convex hitting costs and the costs of switching decisions between rounds. The decision maker has access to a black-box oracle, such as a machine learning model, that provides untrusted and potentially inaccurate predictions of the optimal decision in each round. The goal of the decision maker is to exploit the predictions when they are accurate, while guaranteeing performance that is not much worse than the hindsight-optimal sequence of decisions even when predictions are inaccurate. We impose the standard assumption that hitting costs are globally $\alpha$-polyhedral. We propose a novel algorithm, Adaptive Online Switching (AOS), and prove that, for a large set of feasible $\delta > 0$, it is $(1+\delta)$-competitive if predictions are perfect, while also maintaining a uniformly bounded competitive ratio of $2^{\tilde{\mathcal{O}}(1/(\alpha \delta))}$ even when predictions are adversarial. Further, we prove that this trade-off is necessary and nearly optimal, in the sense that \emph{any} deterministic algorithm that is $(1+\delta)$-competitive if predictions are perfect must be at least $2^{\tilde{\Omega}(1/(\alpha \delta))}$-competitive when predictions are inaccurate. In fact, we observe a unique threshold-type behavior in this trade-off: if $\delta$ is not in the set of feasible options, then \emph{no} algorithm is simultaneously $(1 + \delta)$-competitive if predictions are perfect and $\zeta$-competitive when predictions are inaccurate for any $\zeta < \infty$. Furthermore, we show that memory is crucial to AOS by proving that any memoryless algorithm cannot benefit from predictions. We complement our theoretical results with a numerical study of a microgrid application.
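For readers unfamiliar with this setting, the following is a standard formalization of the objective, the $\alpha$-polyhedral assumption, and the competitive-ratio benchmark referenced in the abstract; the notation ($f_t$ for hitting costs, $x_t$ for decisions, $v_t$ for per-round minimizers) is illustrative and may differ slightly from the paper's own.

\[
  \mathrm{cost}(\mathrm{ALG}) \;=\; \sum_{t=1}^{T} f_t(x_t) \;+\; \sum_{t=1}^{T} \lVert x_t - x_{t-1} \rVert ,
\]

where $f_t$ is the (possibly non-convex) hitting cost revealed in round $t$ and $\lVert x_t - x_{t-1} \rVert$ is the switching cost. Global $\alpha$-polyhedrality means the hitting cost grows at least linearly away from its minimizer $v_t$:

\[
  f_t(x) \;\ge\; f_t(v_t) \;+\; \alpha \,\lVert x - v_t \rVert \quad \text{for all } x .
\]

An algorithm is $c$-competitive if $\mathrm{cost}(\mathrm{ALG}) \le c \cdot \mathrm{cost}(\mathrm{OPT})$ on every instance, where $\mathrm{OPT}$ is the hindsight-optimal decision sequence; the paper's results are stated as trade-offs between the competitive ratio under perfect predictions ($1+\delta$) and under adversarial predictions ($2^{\tilde{\mathcal{O}}(1/(\alpha \delta))}$).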
