Projection-free Online Learning over Strongly Convex Sets (2010.08177v2)
Abstract: To efficiently solve online problems with complicated constraints, projection-free algorithms, including online Frank-Wolfe (OFW) and its variants, have received significant interest recently. However, in the general case, existing efficient projection-free algorithms only achieve a regret bound of $O(T^{3/4})$, which is worse than the regret of projection-based algorithms, where $T$ is the number of decision rounds. In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW can enjoy a better regret bound of $O(T^{2/3})$ for general convex losses. The key idea is to refine the decaying step-size in the original OFW by a simple line search rule. Furthermore, for strongly convex losses, we propose a strongly convex variant of OFW by redefining the surrogate loss function in OFW. We show that it achieves a regret bound of $O(T^{2/3})$ over general convex sets and a better regret bound of $O(\sqrt{T})$ over strongly convex sets.
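To make the line-search idea in the abstract concrete, here is a minimal, hypothetical sketch of OFW with an exact line search over an $\ell_2$ ball (a canonical strongly convex set). It follows the standard OFW construction with a quadratic surrogate $F_t(y) = \eta \langle \sum_{s \le t} \nabla f_s(x_s), y \rangle + \|y - x_1\|^2$; the function names, the choice of domain, and the tuning of $\eta$ are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lmo_l2_ball(g, radius):
    """Linear minimization oracle over the l2 ball {x : ||x|| <= radius},
    a canonical example of a strongly convex set."""
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return np.zeros_like(g)
    return -radius * g / norm

def ofw_line_search(grad_fns, dim, radius=1.0, eta=None):
    """Sketch of Online Frank-Wolfe with a line-search step-size (assumed form).

    grad_fns: list of callables; grad_fns[t](x) returns the gradient of f_t at x.
    """
    T = len(grad_fns)
    eta = eta if eta is not None else T ** (-2.0 / 3.0)  # assumed tuning of eta
    x1 = np.zeros(dim)
    x = x1.copy()
    grad_sum = np.zeros(dim)  # running sum of observed gradients
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        grad_sum += grad_fns[t](x)
        # Surrogate F_t(y) = eta * <grad_sum, y> + ||y - x1||^2; its gradient at x:
        surrogate_grad = eta * grad_sum + 2.0 * (x - x1)
        v = lmo_l2_ball(surrogate_grad, radius)  # one linear optimization step per round
        d = v - x
        dd = d @ d
        if dd > 0:
            # F_t is quadratic along x + sigma * d, so the line search has a closed form
            sigma = -(eta * (grad_sum @ d) + 2.0 * ((x - x1) @ d)) / (2.0 * dd)
            sigma = float(np.clip(sigma, 0.0, 1.0))
        else:
            sigma = 0.0
        x = x + sigma * d
    return iterates
```

The only per-round cost beyond gradient evaluation is a single call to the linear minimization oracle, which is the usual appeal of projection-free methods; the closed-form `sigma` replaces the fixed decaying step-size of the original OFW.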