Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems

(2212.02724)
Published Dec 6, 2022 in cs.LG, math.OC, and stat.ML

Abstract

Minimax optimization problems have attracted significant attention in recent years due to their widespread application in numerous machine learning models. A wide variety of stochastic optimization methods have been proposed to solve the minimax optimization problem. However, most of them ignore the distributed setting, where the training data is spread across multiple workers. In this paper, we develop a novel decentralized stochastic gradient descent ascent method for the finite-sum minimax optimization problem. In particular, by employing the variance-reduced gradient, our method achieves $O(\frac{\sqrt{n}\kappa^3}{(1-\lambda)^2\epsilon^2})$ sample complexity and $O(\frac{\kappa^3}{(1-\lambda)^2\epsilon^2})$ communication complexity for the nonconvex-strongly-concave minimax optimization problem. To the best of our knowledge, ours is the first work to achieve such theoretical complexities for this class of problems. Finally, we apply our method to the AUC maximization problem, and the experimental results confirm its effectiveness.
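To make the setting concrete, below is a minimal sketch of one round of decentralized stochastic gradient descent ascent with gossip averaging over a mixing matrix, whose spectral gap $1-\lambda$ is the quantity appearing in the complexities above. This is an illustrative sketch only, not the paper's variance-reduced algorithm; the functions `grad_x`, `grad_y`, the mixing matrix `W`, and the step sizes are hypothetical placeholders.

```python
import numpy as np

def dsgda_round(X, Y, W, data, grad_x, grad_y, eta_x=1e-2, eta_y=1e-2, batch=32):
    """One communication round of decentralized SGDA for m workers.

    X, Y : (m, dx) and (m, dy) arrays of local primal/dual variables.
    W    : (m, m) doubly stochastic mixing (gossip) matrix; the gap
           1 - lambda of its second-largest eigenvalue drives the
           communication complexity.
    data : list of m local datasets (the finite sums), one per worker.
    """
    m = X.shape[0]
    GX, GY = np.zeros_like(X), np.zeros_like(Y)
    for i in range(m):
        idx = np.random.choice(len(data[i]), size=batch)    # local minibatch
        GX[i] = grad_x(X[i], Y[i], data[i][idx])            # stochastic grad w.r.t. x
        GY[i] = grad_y(X[i], Y[i], data[i][idx])            # stochastic grad w.r.t. y
    X = X - eta_x * GX   # descent on the minimization variable
    Y = Y + eta_y * GY   # ascent on the maximization variable
    # gossip step: each worker averages its variables with its neighbors via W
    return W @ X, W @ Y
```

In the paper's method, the plain stochastic gradients above would be replaced by variance-reduced gradient estimators, which is what yields the $O(\sqrt{n})$ dependence in the sample complexity.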
