An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation (1804.06355v1)

Published 17 Apr 2018 in cs.DS

Abstract: In this paper we study the adaptivity of submodular maximization. Adaptivity quantifies the number of sequential rounds that an algorithm makes when function evaluations can be executed in parallel. Adaptivity is a fundamental concept that is heavily studied across a variety of areas in computer science, largely due to the need for parallelizing computation. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, it is well known that a simple greedy algorithm achieves a $1-1/e$ approximation and that this approximation is optimal for polynomial-time algorithms. Somewhat surprisingly, despite extensive efforts on submodular optimization for large-scale datasets, until very recently there was no known algorithm that achieves a constant factor approximation for this problem whose adaptivity is sublinear in the size of the ground set $n$. Recent work by Balkanski and Singer describes an algorithm that obtains an approximation arbitrarily close to $1/3$ in $\mathcal{O}(\log n)$ adaptive rounds and shows that no algorithm can obtain a constant factor approximation in $\tilde{o}(\log n)$ adaptive rounds. This approach achieves an exponential speedup in adaptivity (and parallel running time) at the expense of approximation quality. In this paper we describe a novel approach that yields an algorithm whose approximation is arbitrarily close to the optimal $1-1/e$ guarantee in $\mathcal{O}(\log n)$ adaptive rounds. This algorithm therefore achieves an exponential speedup in parallel running time for submodular maximization at the expense of an arbitrarily small loss in approximation quality. This guarantee is optimal in both approximation and adaptivity, up to lower order terms.
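For context, below is a minimal sketch of the classic greedy algorithm the abstract uses as its baseline: repeatedly add the element with the largest marginal gain until the cardinality constraint is met. This is the $1-1/e$ baseline, not the paper's low-adaptivity algorithm; note that it requires $k$ sequential rounds, since each selection depends on the previous one. The coverage function and element names in the usage example are illustrative stand-ins for an arbitrary monotone submodular $f$.

```python
def greedy_max(ground_set, f, k):
    """Greedy maximization of a monotone submodular f under |S| <= k."""
    S = set()
    for _ in range(k):
        # One adaptive round: the marginal gains f(S + {e}) - f(S) can all
        # be evaluated in parallel, but each round depends on the prior S,
        # so the greedy algorithm has adaptivity k (linear in the worst case).
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

if __name__ == "__main__":
    # Toy coverage instance (hypothetical): f(S) = number of items covered
    # by the union of the chosen sets. Coverage functions are a standard
    # example of monotone submodular functions.
    sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
    f = lambda S: len(set().union(*(sets[e] for e in S)))
    print(greedy_max(set(sets), f, k=2))  # {0, 2}: covers all 6 items
```

The paper's contribution, by contrast, is an algorithm that matches this $1-1/e$ guarantee (up to an arbitrarily small loss) while using only $\mathcal{O}(\log n)$ adaptive rounds rather than $k$.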

Citations (85)
