Faster Parallel Solver for Positive Linear Programs via Dynamically-Bucketed Selective Coordinate Descent (1511.06468v1)

Published 20 Nov 2015 in cs.DS and cs.NA

Abstract: We provide improved parallel approximation algorithms for the important class of packing and covering linear programs. In particular, we present new parallel $\epsilon$-approximate packing and covering solvers which run in $\tilde{O}(1/\epsilon^2)$ expected time, i.e., in expectation they take $\tilde{O}(1/\epsilon^2)$ iterations and they do $\tilde{O}(N/\epsilon^2)$ total work, where $N$ is the size of the constraint matrix and $\epsilon$ is the error parameter, and where the $\tilde{O}$ hides logarithmic factors. To achieve our improvement, we introduce an algorithmic technique of broader interest: dynamically-bucketed selective coordinate descent (DB-SCD). At each step of the iterative optimization algorithm, the DB-SCD method dynamically buckets the coordinates of the gradient into those of roughly equal magnitude, and it updates all the coordinates in one of the buckets. This dynamically-bucketed updating permits us to take steps along several coordinates with similar-sized gradients, thereby permitting more appropriate step sizes at each step of the algorithm. In particular, this technique allows us to use in a straightforward manner the recent analysis from the breakthrough results of Allen-Zhu and Orecchia [2] to achieve our still-further improved bounds. More generally, this method addresses "interference" among coordinates, by which we mean the impact of the update of one coordinate on the gradients of other coordinates. Such interference is a core issue in parallelizing optimization routines that rely on smoothness properties. Since our DB-SCD method reduces interference via updating a selective subset of variables at each iteration, we expect it may also have more general applicability in optimization.
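
To make the bucketing idea concrete, here is a minimal Python sketch of a DB-SCD-style update loop on a generic smooth objective. This is not the paper's packing/covering solver: the power-of-two bucketing rule, the choice to update the largest-magnitude bucket, the fixed step size, and all names (db_scd, grad, etc.) are illustrative assumptions, added only to show the mechanics of "bucket coordinates of roughly equal gradient magnitude, then update one bucket together."

```python
import numpy as np

def db_scd(grad, x0, step_size=0.1, num_iters=100, tol=1e-12):
    """Illustrative DB-SCD-style sketch (not the paper's implementation).

    At each iteration, gradient coordinates are bucketed by magnitude
    (here: by floor(log2 |g_i|), so each bucket holds coordinates of
    roughly equal magnitude), and all coordinates in one chosen bucket
    are updated together with a common step.
    """
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        g = grad(x)
        mags = np.abs(g)
        active = mags > tol
        if not active.any():
            break  # (near-)stationary point
        # Dynamically bucket coordinates of roughly equal gradient magnitude.
        buckets = np.zeros(x.shape, dtype=int)
        buckets[active] = np.floor(np.log2(mags[active])).astype(int)
        # Bucket-selection rule (an assumption for this sketch): update the
        # bucket holding the largest-magnitude coordinates.
        chosen = active & (buckets == buckets[active].max())
        # Gradient step restricted to the selected bucket; because the
        # updated coordinates have similar gradient sizes, one common step
        # size is reasonable for all of them, which is the point of bucketing.
        x[chosen] -= step_size * g[chosen]
    return x


# Toy usage: minimize the smooth objective f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x = db_scd(lambda x: A.T @ (A @ x - b), np.zeros(10),
           step_size=0.01, num_iters=5000)
```

In the paper's parallel setting, the payoff of updating only one bucket per step is reduced "interference": coordinates with similar gradient magnitudes can safely share a step size, while coordinates whose gradients would be strongly perturbed by the update are left untouched until a later iteration.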

Citations (9)
