Gradient-Variation Bound for Online Convex Optimization with Constraints

(2006.12455)
Published Jun 22, 2020 in math.OC, cs.LG, and stat.ML

Abstract

We study online convex optimization with constraints consisting of multiple functional constraints and a relatively simple constraint set, such as a Euclidean ball. Since enforcing the constraints at each time step through projections is computationally challenging in general, we allow decisions to violate the functional constraints but aim to achieve low regret and low cumulative constraint violation over a horizon of $T$ time steps. First-order methods achieve an $\mathcal{O}(\sqrt{T})$ regret and an $\mathcal{O}(1)$ constraint violation, which is the best-known bound under Slater's condition, but they do not take the structural information of the problem into account. Furthermore, the existing algorithms and analyses are limited to Euclidean spaces. In this paper, we provide an \emph{instance-dependent} bound for online convex optimization with complex constraints, obtained by a novel online primal-dual mirror-prox algorithm. Our instance-dependent regret is quantified by the total gradient variation $V_*(T)$ of the sequence of loss functions. The proposed algorithm works in \emph{general} normed spaces and simultaneously achieves an $\mathcal{O}(\sqrt{V_*(T)})$ regret and an $\mathcal{O}(1)$ constraint violation, which is never worse than the best-known $(\mathcal{O}(\sqrt{T}), \mathcal{O}(1))$ result and improves over previous works that applied mirror-prox-type algorithms to this problem, achieving $\mathcal{O}(T^{2/3})$ regret and constraint violation. Finally, our algorithm is computationally efficient: it only performs mirror descent steps in each iteration instead of solving a general Lagrangian minimization problem.
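To make the performance measures concrete, here are the definitions of regret, cumulative constraint violation, and gradient variation that are standard in this line of work; treat them as an assumed sketch, since the paper's exact formulation (e.g., the norm applied to the violations of multiple constraints $g_1, \dots, g_m$) may differ in detail.

```latex
% Standard (assumed) definitions: \mathcal{X} is the simple set, the g_i are
% the functional constraints, and \|\cdot\|_* is the dual norm of the space.
\[
\mathrm{Reg}(T) = \sum_{t=1}^{T} f_t(x_t)
  - \min_{\substack{x \in \mathcal{X} \\ g_i(x) \le 0\ \forall i}}
    \sum_{t=1}^{T} f_t(x),
\qquad
\mathrm{Vio}(T) = \sum_{i=1}^{m} \Big[\, \sum_{t=1}^{T} g_i(x_t) \Big]_+,
\]
\[
V_*(T) = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}}
  \bigl\| \nabla f_t(x) - \nabla f_{t-1}(x) \bigr\|_*^2 .
\]
```

Since bounded gradients give $V_*(T) = \mathcal{O}(T)$, the $\mathcal{O}(\sqrt{V_*(T)})$ regret recovers $\mathcal{O}(\sqrt{T})$ in the worst case, while for slowly varying loss sequences it can be far smaller (constant, if the losses stop changing).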
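The claim in the last sentence, that each iteration reduces to mirror descent steps rather than a Lagrangian minimization, can be illustrated with a minimal sketch of an optimistic (mirror-prox-style) primal-dual update. Everything below is an illustrative assumption rather than the paper's algorithm: Euclidean mirror map, a single known constraint $g(x) \le 0$, fixed step sizes, and hypothetical names throughout.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the simple set (here, a norm ball)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

class OptimisticPrimalDualSketch:
    """Hypothetical mirror-prox-style (optimistic) primal-dual learner.

    Each round costs two projected gradient steps plus a scalar dual
    update; no inner Lagrangian minimization is solved. The previous loss
    gradient is reused as the optimistic prediction, which is the usual
    mechanism behind gradient-variation (V_*(T)) bounds.
    """

    def __init__(self, dim, g, grad_g, eta=0.1, sigma=0.1):
        self.g, self.grad_g = g, grad_g    # known constraint g(x) <= 0
        self.center = np.zeros(dim)        # "hat" iterate kept between rounds
        self.hint = np.zeros(dim)          # prediction of the next loss gradient
        self.lam = 0.0                     # dual variable, kept nonnegative
        self.eta, self.sigma = eta, sigma  # illustrative fixed step sizes

    def play(self):
        """Extrapolation step: descend along the predicted Lagrangian gradient."""
        predicted = self.hint + self.lam * self.grad_g(self.center)
        return project_ball(self.center - self.eta * predicted)

    def update(self, x_played, grad_f):
        """Update step using the loss gradient observed at the played point."""
        lagrangian_grad = grad_f + self.lam * self.grad_g(x_played)
        self.center = project_ball(self.center - self.eta * lagrangian_grad)
        self.lam = max(0.0, self.lam + self.sigma * self.g(x_played))
        self.hint = grad_f                 # becomes next round's prediction

if __name__ == "__main__":
    # One hundred rounds against quadratic losses f_t(x) = ||x - target_t||^2,
    # with the hypothetical constraint sum(x) <= 1.
    rng = np.random.default_rng(0)
    learner = OptimisticPrimalDualSketch(
        dim=5,
        g=lambda x: float(x.sum() - 1.0),
        grad_g=lambda x: np.ones_like(x),
    )
    for _ in range(100):
        x_t = learner.play()
        grad_f_t = 2.0 * (x_t - rng.standard_normal(5))
        learner.update(x_t, grad_f_t)
```

Swapping the Euclidean projection and squared-norm geometry for a general mirror map (Bregman divergence) is what would let a template like this run in general normed spaces.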
