
An Exponential Separation Between Randomized and Deterministic Complexity in the LOCAL Model

(arXiv:1602.08166)
Published Feb 26, 2016 in cs.CC, cs.DC, and cs.DS

Abstract

Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message:

1. Fast $\Delta$-coloring of trees requires random bits: Building on the recent lower bounds of Brandt et al., we prove that the randomized complexity of $\Delta$-coloring a tree with maximum degree $\Delta\ge 55$ is $\Theta(\log_\Delta \log n)$, whereas its deterministic complexity is $\Theta(\log_\Delta n)$ for any $\Delta\ge 3$. This also establishes a large separation between the deterministic complexity of $\Delta$-coloring and $(\Delta+1)$-coloring trees.

2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in $O(1)+o(\log_\Delta n)$ rounds can be transformed to run in $O(\log^* n - \log^* \Delta + 1)$ rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires $\Omega(\log_\Delta n)$ time deterministically.

3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size $n$ is at least its deterministic complexity on instances of size $\sqrt{\log n}$. This shows that a deterministic $\Omega(\log_\Delta n)$ lower bound for any problem implies a randomized $\Omega(\log_\Delta \log n)$ lower bound (a worked substitution appears below). It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.
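To see how the third result converts a deterministic lower bound into a randomized one, the following short derivation works through the substitution. This is a sketch of the arithmetic only: the names $\mathrm{Det}$ and $\mathrm{Rand}$ are notation introduced here for illustration (they do not appear in the abstract), and constant factors are absorbed into the $\Omega$-notation.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Notation (introduced here for illustration, not taken from the paper):
% Det(n), Rand(n) = deterministic / randomized round complexity of a
% fixed natural problem on instances of size n.
Result 3 asserts
\[
  \mathrm{Rand}(n) \;\ge\; \mathrm{Det}\!\bigl(\sqrt{\log n}\bigr).
\]
Suppose a problem has a deterministic lower bound
$\mathrm{Det}(m) = \Omega(\log_\Delta m)$. Substituting
$m = \sqrt{\log n}$ gives
\[
  \mathrm{Rand}(n)
  \;\ge\; \Omega\bigl(\log_\Delta \sqrt{\log n}\bigr)
  \;=\; \Omega\Bigl(\tfrac{1}{2}\log_\Delta \log n\Bigr)
  \;=\; \Omega(\log_\Delta \log n),
\]
since $\log_\Delta \sqrt{x} = \tfrac{1}{2}\log_\Delta x$.
\end{document}
```

Note the exponential character of the resulting separation: the deterministic bound depends on $n$ while the randomized bound depends only on $\log n$, matching the tree-coloring gap stated in the first take-away.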
