Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems (2301.03113v3)

Published 8 Jan 2023 in math.OC and stat.ML

Abstract: In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of nonlinear equations in large-scale settings, which are called root-finding problems. Our first algorithm is non-accelerated with constant stepsizes, and achieves an $\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[\Vert Gx^k\Vert^2]$ when the underlying operator $G$ is Lipschitz continuous and satisfies a weak Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[\Vert Gx^k\Vert^2]$ and $\mathbb{E}[\Vert x^{k+1} - x^k\Vert^2]$ for this algorithm under the co-coerciveness of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely, and $\Vert Gx^k\Vert^2$ attains an $o(1/k)$ almost sure convergence rate. Then, we apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and their convergence rate guarantees for solving this problem class.
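For intuition, below is a minimal sketch of the kind of update the first (non-accelerated) method builds on: the classical optimistic (past-extra-gradient) step $x^{k+1} = x^k - \eta(2Gx^k - Gx^{k-1})$ applied to one uniformly sampled coordinate block per iteration. The function name `block_coordinate_og`, the stepsize $\eta$, the block partition, and the toy linear operator are illustrative assumptions, not the paper's exact scheme or constants.

```python
import numpy as np

def block_coordinate_og(G, x0, n_blocks, eta=0.1, iters=2000, seed=0):
    """Sketch of a randomized block-coordinate optimistic gradient loop
    for the root-finding problem G(x) = 0.

    Applies the classical optimistic (past-extra-gradient) update
        x^{k+1} = x^k - eta * (2*G(x^k) - G(x^{k-1}))
    restricted to one uniformly sampled coordinate block per iteration.
    Stepsize rule and block structure are illustrative, not the paper's
    exact algorithm.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    g_prev = G(x)                      # G(x^{k-1}), initialized at x^0
    for _ in range(iters):
        g = G(x)                       # G(x^k)
        i = rng.integers(n_blocks)     # uniformly sampled block index
        idx = blocks[i]
        # optimistic step on the chosen block only; other blocks unchanged
        x[idx] -= eta * (2.0 * g[idx] - g_prev[idx])
        g_prev = g
    return x

# Toy usage: G(x) = A @ x - b with A positive definite (a co-coercive case).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol = block_coordinate_og(lambda x: A @ x - b, np.zeros(2), n_blocks=2)
print(sol, np.linalg.solve(A, b))     # the two should be close
```

Note that this sketch evaluates the full operator $G$ at every iteration for simplicity; the appeal of block-coordinate methods at large scale is that a practical implementation would compute only the sampled block of $G$ per step.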
