
Truly Optimal Euclidean Spanners

(1904.12042)
Published Apr 26, 2019 in cs.CG and cs.DS

Abstract

Euclidean spanners are important geometric structures, having found numerous applications over the years. Cornerstone results in this area from the late 80s and early 90s state that for any $d$-dimensional $n$-point Euclidean space, there exists a $(1+\epsilon)$-spanner with $n \cdot O(\epsilon^{-d+1})$ edges and lightness (the ratio of the spanner weight to the minimum spanning tree weight) $O(\epsilon^{-2d})$. Surprisingly, the fundamental question of whether these dependencies on $\epsilon$ and $d$ can be improved for small $d$ has remained elusive, even for $d = 2$. This question naturally arises in any application of Euclidean spanners where precision is a necessity. The state-of-the-art bounds of $n \cdot O(\epsilon^{-d+1})$ on the size and $O(\epsilon^{-2d})$ on the lightness of spanners are realized by the {\em greedy} spanner. In 2016, Filtser and Solomon proved that, in low-dimensional spaces, the greedy spanner is near-optimal. The question of whether the greedy spanner is truly optimal has remained open to date.

The contribution of this paper is two-fold. We resolve these longstanding questions by nailing down the exact dependencies on $\epsilon$ and $d$ and showing that the greedy spanner is truly optimal. Specifically, for any $d = O(1)$ and $\epsilon = \Omega(n^{-\frac{1}{d-1}})$:

- We show that any $(1+\epsilon)$-spanner must have $n \cdot \Omega(\epsilon^{-d+1})$ edges, implying that the greedy (and other) spanners achieve the optimal size.
- We show that any $(1+\epsilon)$-spanner must have lightness $\Omega(\epsilon^{-d})$, and we then improve the upper bound on the lightness of the greedy spanner from $O(\epsilon^{-2d})$ to $O(\epsilon^{-d})$.

We then complement our negative result for the size of spanners with a rather counterintuitive positive result: Steiner points lead to a quadratic improvement in the size of spanners! Our bound for the size of Steiner spanners is tight as well (up to lower-order terms).
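The greedy spanner analyzed in the abstract has a short textbook description: scan all point pairs in non-decreasing order of distance and add an edge only when the spanner built so far does not already connect the pair by a path of length at most $(1+\epsilon)$ times their Euclidean distance. The following is a minimal Python sketch of that textbook construction, not code from the paper; the function name, the naive all-pairs enumeration, and the Dijkstra-based stretch check are our own illustrative choices, and optimized implementations avoid this brute-force recomputation.

```python
import math
import heapq
from itertools import combinations

def greedy_spanner(points, eps):
    """Textbook greedy (1+eps)-spanner for points in R^d (illustrative sketch).

    Scans all pairs in non-decreasing order of Euclidean distance and adds an
    edge only if the current spanner distance between the endpoints exceeds
    (1 + eps) times their Euclidean distance.
    """
    n = len(points)
    dist = lambda u, v: math.dist(points[u], points[v])
    pairs = sorted(combinations(range(n), 2), key=lambda e: dist(*e))
    adj = [[] for _ in range(n)]  # adjacency list of (neighbor, weight) pairs
    edges = []

    def spanner_dist(src, dst):
        # Dijkstra on the partial spanner built so far.
        d = [math.inf] * n
        d[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue
            if u == dst:
                return du
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        return d[dst]

    for u, v in pairs:
        w = dist(u, v)
        if spanner_dist(u, v) > (1 + eps) * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            edges.append((u, v, w))
    return edges
```

As a small sanity check, `greedy_spanner([(0, 0), (1, 0), (0, 1), (1, 1)], 0.5)` keeps the four unit-length sides of the square and rejects both diagonals, since the two-edge path of length 2 already approximates each diagonal within the stretch factor 1.5.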
