
Graph Sparsification for Derandomizing Massively Parallel Computation with Low Space (1912.05390v3)

Published 11 Dec 2019 in cs.DS and cs.DC

Abstract: The Massively Parallel Computation (MPC) model is an emerging model which distills core aspects of distributed and parallel computation. It has been developed as a tool to solve (typically graph) problems in systems where the input is distributed over many machines with limited space. Recent work has focused on the regime in which machines have sublinear (in $n$, the number of nodes in the input graph) memory, with randomized algorithms presented for fundamental graph problems of Maximal Matching and Maximal Independent Set. However, there have been no prior corresponding \emph{deterministic} algorithms. A major challenge underlying the sublinear space setting is that the local space of each machine might be too small to store all the edges incident to a single node. This poses a considerable obstacle compared to the classical models in which each node is assumed to know and have easy access to its incident edges. To overcome this barrier we introduce a new \emph{graph sparsification technique} that \emph{deterministically} computes a low-degree subgraph with additional desired properties. Using this framework to derandomize the well-known randomized algorithm of Luby [SICOMP'86], we obtain $O(\log \Delta+\log\log n)$-round \emph{deterministic} MPC algorithms for solving the fundamental problems of \emph{Maximal Matching} and \emph{Maximal Independent Set} with $O(n^{\epsilon})$ space on each machine for any constant $\epsilon > 0$. Based on the recent work of Ghaffari et al. [FOCS'18], this additive $O(\log\log n)$ factor is \emph{conditionally} essential. These algorithms can also be shown to run in $O(\log \Delta)$ rounds in the closely related Congested Clique model, improving upon the state-of-the-art bound of $O(\log^2 \Delta)$ rounds by Censor-Hillel et al. [DISC'17].
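
For reference only, the sketch below illustrates the classical randomized MIS procedure of Luby [SICOMP'86] that the paper derandomizes, written as a plain round-by-round simulation. The function name `luby_mis` and the dict-of-sets graph representation are illustrative assumptions, and the sketch does not capture the paper's deterministic sparsification framework or the MPC space constraints.

```python
# Minimal sketch (not from the paper): a sequential simulation of Luby's
# randomized Maximal Independent Set algorithm, the baseline being derandomized.
import random


def luby_mis(adj):
    """Return a maximal independent set of an undirected graph.

    adj: dict mapping each node to the set of its neighbors.
    """
    live = set(adj)          # nodes not yet decided
    mis = set()
    while live:
        # Each undecided node draws an independent random value.
        r = {v: random.random() for v in live}
        # A node joins the MIS if its value is strictly smaller than
        # every live neighbor's value (adjacent nodes cannot both win).
        joiners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= joiners
        # Joiners and their neighbors are settled; drop them from 'live'.
        removed = set(joiners)
        for v in joiners:
            removed |= adj[v] & live
        live -= removed
    return mis


# Example usage on a small path graph 0-1-2-3.
if __name__ == "__main__":
    graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(luby_mis(graph))
```

Each round the node holding the globally smallest random value always joins, so the loop terminates; with high probability a constant fraction of edges is removed per round, giving the familiar $O(\log n)$ round bound that the paper's deterministic $O(\log \Delta + \log\log n)$ result is measured against.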

Citations (30)