Accelerated Stochastic ADMM with Variance Reduction (1611.04074v4)

Published 13 Nov 2016 in cs.NA and math.OC

Abstract: The Alternating Direction Method of Multipliers (ADMM) is a popular method for solving large-scale machine learning problems. Stochastic ADMM was proposed to reduce the per-iteration computational cost, making it better suited to big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM to obtain faster convergence rates, as in SAG-ADMM and SVRG-ADMM; however, their convergence rates remain suboptimal with respect to the smoothness constant. In this paper, we propose an accelerated stochastic ADMM algorithm with variance reduction that converges faster than all existing stochastic ADMM algorithms. We theoretically analyze its convergence rate and show that its dependence on the smoothness constant is optimal. We also empirically validate its effectiveness and show its superiority over other stochastic ADMM algorithms.
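
To make the variance-reduction idea concrete, below is a minimal sketch of SVRG-style stochastic ADMM applied to a lasso problem. The problem instance, step size eta, penalty rho, and epoch length m are illustrative assumptions, and this mirrors the SVRG-ADMM baseline discussed in the abstract rather than the paper's accelerated algorithm.

```python
import numpy as np

# Sketch: variance-reduced stochastic ADMM (SVRG-ADMM style) on the lasso
#     min_x  (1/2n) * sum_i (a_i^T x - b_i)^2 + lam * ||z||_1
#     s.t.   x - z = 0
# All hyperparameters below are illustrative choices, not the paper's.

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam, rho, eta, m, epochs = 0.1, 1.0, 0.01, n, 30

def grad_i(x, i):
    # Gradient of the i-th least-squares component; its expectation
    # over a uniformly random i equals the full gradient below.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
for _ in range(epochs):
    x_snap = x.copy()        # snapshot for variance reduction
    mu = full_grad(x_snap)   # full gradient at the snapshot
    for _ in range(m):
        i = rng.integers(n)
        # SVRG-style variance-reduced gradient estimate.
        v = grad_i(x, i) - grad_i(x_snap, i) + mu
        # Linearized x-update: argmin_x  v^T x
        #   + (rho/2)||x - z + u||^2 + (1/(2*eta))||x - x_k||^2
        x = (rho * (z - u) - v + x / eta) / (rho + 1.0 / eta)
        # z-update: proximal step on the l1 term.
        z = soft_threshold(x + u, lam / rho)
        # Dual update for the constraint x - z = 0.
        u = u + x - z
```

The snapshot gradient mu keeps the stochastic estimate v unbiased while shrinking its variance as x approaches x_snap, which is what lets variance-reduced stochastic ADMM outpace plain stochastic ADMM.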
