Convergence of Extragradient SVRG for Variational Inequalities: Error Bounds and Increasing Iterate Averaging (2306.01796v2)
Abstract: We study the last-iterate convergence of variance-reduced extragradient (EG) methods for a class of variational inequalities satisfying error-bound conditions. Previously, last-iterate linear convergence was known only under strong monotonicity. We show that EG algorithms with SVRG-style variance reduction, denoted SVRG-EG, attain last-iterate linear convergence under a general error-bound condition that is much weaker than strong monotonicity. This condition captures a broad class of non-strongly-monotone problems, such as the bilinear saddle-point problems commonly encountered in two-player zero-sum Nash equilibrium computation. We then establish last-iterate linear convergence of SVRG-EG with an improved guarantee under the weak-sharpness assumption. Finally, motivated by the empirical efficiency of increasing iterate averaging in solving saddle-point problems, we establish new convergence results for SVRG-EG combined with such averaging.
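The abstract names the SVRG-EG algorithm and increasing iterate averaging without pseudocode, so the following is a minimal, illustrative sketch of an SVRG-style extragradient loop with linearly weighted (increasing) iterate averaging on a toy unconstrained bilinear saddle-point instance. The finite-sum decomposition `A_parts`, the fresh-sample scheme for the two half-steps, and all step-size and epoch-length constants are assumptions for illustration, not the paper's exact algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: min_x max_y x^T A y with finite-sum structure
# A = (1/n) * sum_i A_i, so the monotone operator F decomposes as well.
n, d = 20, 10
A_parts = [rng.standard_normal((d, d)) for _ in range(n)]
A = sum(A_parts) / n

def F_full(z):
    """Full operator F(x, y) = (A y, -A^T x) of the bilinear saddle point."""
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y, -A.T @ x])

def F_comp(i, z):
    """Component operator F_i built from the i-th summand A_i."""
    x, y = z[:d], z[d:]
    Ai = A_parts[i]
    return np.concatenate([Ai @ y, -Ai.T @ x])

def svrg_eg(z0, eta=0.05, epochs=50, m=2 * n):
    """SVRG-style extragradient sketch with increasing iterate averaging."""
    z = z0.copy()
    z_avg, weight_sum, t = np.zeros_like(z), 0.0, 0
    for _ in range(epochs):
        w, mu = z.copy(), F_full(z)            # snapshot and its full operator value
        for _ in range(m):
            i = rng.integers(n)
            g = F_comp(i, z) - F_comp(i, w) + mu        # variance-reduced estimate
            z_half = z - eta * g                        # extrapolation step
            j = rng.integers(n)                         # fresh sample (an assumption)
            g_half = F_comp(j, z_half) - F_comp(j, w) + mu
            z = z - eta * g_half                        # update step
            t += 1
            # Increasing (linear) weights: z_avg = sum_k k*z_k / sum_k k.
            z_avg = (weight_sum * z_avg + t * z) / (weight_sum + t)
            weight_sum += t
    return z, z_avg

z_last, z_bar = svrg_eg(rng.standard_normal(2 * d))
print("last-iterate residual ||F(z)||:", np.linalg.norm(F_full(z_last)))
print("averaged-iterate residual    :", np.linalg.norm(F_full(z_bar)))
```

On this unconstrained bilinear instance the solution set is {z : F(z) = 0}, so the operator residual ||F(z_k)|| serves as a convergence measure; the bilinear structure is exactly the kind of non-strongly-monotone problem the abstract says is covered by the error-bound condition.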