Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems (2002.00057v2)

Published 31 Jan 2020 in cs.LG, math.OC, and stat.ML

Abstract: In this paper, we study the smooth convex-concave saddle point problem. Specifically, we analyze the last iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of $O(1/T)$ (Nemirovski, 2004). In this paper, we show that the last iterate of EG converges at a rate of $O(1/\sqrt{T})$. To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG for the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of $\Omega(1/\sqrt{T})$ for the last iterate. This lower bound therefore shows a quadratic separation of the convergence rates of ergodic and last iterates in smooth convex-concave saddle point problems.
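
To make the abstract's objects concrete, the following is a minimal, hypothetical Python sketch of the EG update on the toy bilinear objective $f(x, y) = xy$, tracking both the last iterate and the ergodic (averaged) iterate. The step size, horizon, and the convention of averaging the extrapolation points are illustrative assumptions, not taken from the paper; note also that the $O(1/\sqrt{T})$ vs. $O(1/T)$ separation is a worst-case statement over the whole smooth convex-concave class, so this particular toy instance need not exhibit it.

```python
import numpy as np

def grad_field(z):
    # Gradient field F(z) = (df/dx, -df/dy) for the toy objective f(x, y) = x * y.
    x, y = z
    return np.array([y, -x])

def extragradient(z0, eta=0.1, T=1000):
    """One common form of the Extragradient update (illustrative, not the paper's code)."""
    z = np.asarray(z0, dtype=float)
    avg = np.zeros_like(z)
    for t in range(1, T + 1):
        z_mid = z - eta * grad_field(z)    # extrapolation step
        z = z - eta * grad_field(z_mid)    # update step, using the gradient at the midpoint
        avg += (z_mid - avg) / t           # running ergodic average of the midpoints
    return z, avg

# The saddle point of f(x, y) = x * y is the origin; both iterates approach it.
last, averaged = extragradient([1.0, 1.0])
print("last iterate:    ", last)
print("averaged iterate:", averaged)
```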

Authors (4)
  1. Noah Golowich (48 papers)
  2. Sarath Pattathil (17 papers)
  3. Constantinos Daskalakis (111 papers)
  4. Asuman Ozdaglar (102 papers)
Citations (98)
