Improved Algorithms for Convex-Concave Minimax Optimization (2006.06359v2)
Abstract: This paper studies minimax optimization problems $\min_x \max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x,L_{xy},L_y)$-smooth. Zhang et al. provided the following lower bound on the gradient complexity for any first-order method: $\Omega\Bigl(\sqrt{\frac{L_x}{m_x}+\frac{L_{xy}^2}{m_x m_y}+\frac{L_y}{m_y}}\,\ln(1/\epsilon)\Bigr)$. This paper proposes a new algorithm with gradient complexity upper bound $\tilde{O}\Bigl(\sqrt{\frac{L_x}{m_x}+\frac{L\cdot L_{xy}}{m_x m_y}+\frac{L_y}{m_y}}\,\ln\left(1/\epsilon\right)\Bigr),$ where $L=\max\{L_x,L_{xy},L_y\}$. This improves over the best known upper bound $\tilde{O}\left(\sqrt{\frac{L^2}{m_x m_y}}\,\ln^3\left(1/\epsilon\right)\right)$ by Lin et al. Our bound achieves a linear convergence rate and a tighter dependency on the condition numbers, especially when $L_{xy}\ll L$ (i.e., when the interaction between $x$ and $y$ is weak). Via reduction, our new bound also implies improved bounds for strongly convex-concave and convex-concave minimax optimization problems. When $f$ is quadratic, we can further improve the upper bound, which matches the lower bound up to a small sub-polynomial factor.
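For intuition, here is a rough worked comparison in the weak-interaction regime; this special case is our own illustration under stated assumptions, not a claim taken from the abstract. Suppose $L_x=L_y=L$ and the coupling is negligible, $L_{xy}\approx 0$, and write the condition numbers as $\kappa_x=L/m_x$ and $\kappa_y=L/m_y$. Substituting into the two bounds gives

$\text{prior bound (Lin et al.)}: \ \tilde{O}\Bigl(\sqrt{\kappa_x\,\kappa_y}\,\ln^3(1/\epsilon)\Bigr), \qquad \text{new bound}: \ \tilde{O}\Bigl(\sqrt{\kappa_x+\kappa_y}\,\ln(1/\epsilon)\Bigr),$

so the new complexity scales with the square root of the sum of the two condition numbers rather than the square root of their product, a substantial saving when both $\kappa_x$ and $\kappa_y$ are large.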