A Stochastic Second-Order Proximal Method for Distributed Optimization (2211.10591v1)
Abstract: In this paper, we propose a distributed stochastic second-order proximal method that enables agents in a network to cooperatively minimize the sum of their local loss functions without any centralized coordination. The proposed algorithm, referred to as St-SoPro, incorporates a decentralized second-order approximation into an augmented Lagrangian function and then randomly samples the local gradients and Hessian matrices of the agents, so that it is efficient in both computation and memory, particularly for large-scale optimization problems. We show that for globally restricted strongly convex problems, the expected optimality error of St-SoPro asymptotically drops below an explicit error bound at a linear rate, and that this error bound can be made arbitrarily small with proper parameter settings. Simulations over real machine learning datasets demonstrate that St-SoPro outperforms several state-of-the-art distributed stochastic first-order methods in terms of convergence speed as well as computation and communication costs.
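The abstract does not spell out St-SoPro's update equations, so the following is only a minimal illustrative sketch of the general flavor of a decentralized stochastic second-order proximal update, not the authors' algorithm. The least-squares local losses, ring topology, Metropolis mixing weights, proximal weight `rho`, and minibatch size are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch (not the authors' St-SoPro update rules): each agent
# holds a local least-squares loss, samples a minibatch to form a stochastic
# gradient and Hessian, and takes a regularized Newton-type proximal step
# around the average of its neighbors' iterates over an undirected network.

rng = np.random.default_rng(0)
n_agents, dim, n_local = 4, 5, 50

# Synthetic local data: agent i holds (A_i, b_i) for 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(n_local, dim)) for _ in range(n_agents)]
x_true = rng.normal(size=dim)
b = [A_i @ x_true + 0.1 * rng.normal(size=n_local) for A_i in A]

# Ring network with doubly stochastic Metropolis-style mixing weights.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((n_agents, dim))   # local iterates
rho, batch = 1.0, 10            # proximal weight and minibatch size (assumed)

for t in range(200):
    x_mix = W @ x               # one round of communication with neighbors
    x_new = np.empty_like(x)
    for i in range(n_agents):
        idx = rng.choice(n_local, size=batch, replace=False)
        Ai, bi = A[i][idx], b[i][idx]
        g = Ai.T @ (Ai @ x[i] - bi) / batch    # sampled (stochastic) gradient
        H = Ai.T @ Ai / batch                  # sampled (stochastic) Hessian
        # Regularized second-order proximal step around the mixed point.
        x_new[i] = x_mix[i] - np.linalg.solve(
            H + rho * np.eye(dim), g + rho * (x[i] - x_mix[i]))
    x = x_new

print("consensus error:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to x_true:", np.linalg.norm(x.mean(axis=0) - x_true))
```

Running this script prints the spread of the local iterates around their average and the distance of that average from the data-generating parameter, which illustrates how sampled second-order information and neighbor mixing interact; the actual St-SoPro recursion, its augmented-Lagrangian construction, and its convergence guarantees are given in the paper itself.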