
Labelings vs. Embeddings: On Distributed Representations of Distances

(1907.06857)
Published Jul 16, 2019 in cs.DS and cs.CG

Abstract

We investigate for which metric spaces the performance of distance labeling and of $\ell_\infty$-embeddings differ, and how significant this difference can be. Recall that a distance labeling is a distributed representation of distances in a metric space $(X,d)$, where each point $x\in X$ is assigned a succinct label, such that the distance between any two points $x,y \in X$ can be approximated given only their labels. A highly structured special case is an embedding into $\ell_\infty$, where each point $x\in X$ is assigned a vector $f(x)$ such that $\|f(x)-f(y)\|_\infty$ is approximately $d(x,y)$. The performance of a distance labeling or an $\ell_\infty$-embedding is measured via its distortion and its label-size/dimension. We also study the analogous question for the prioritized versions of these two measures. Here, a priority order $\pi=(x_1,\dots,x_n)$ of the point set $X$ is given, and higher-priority points should have shorter labels. Formally, a distance labeling has prioritized label-size $\alpha(\cdot)$ if every $x_j$ has label size at most $\alpha(j)$. Similarly, an embedding $f: X \to \ell_\infty$ has prioritized dimension $\alpha(\cdot)$ if $f(x_j)$ is non-zero only in the first $\alpha(j)$ coordinates. In addition, we compare these prioritized measures to their classical (worst-case) versions. We answer these questions in several scenarios, uncovering a surprisingly diverse range of behaviors. First, in some cases labelings and embeddings have very similar worst-case performance, but in other cases there is a huge disparity. However, in the prioritized setting, we most often find a strict separation between the performance of labelings and embeddings. And finally, when comparing the classical and prioritized settings, we find that the worst-case bound for label size often "translates" to a prioritized one, but we also find a surprising exception to this rule.
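
To make the embedding notion concrete, here is a minimal Python sketch (illustrative only, not taken from the paper) of the classical Fréchet embedding, which maps each point to its vector of distances to all points. For any finite metric this gives an $\ell_\infty$-embedding with distortion 1, but at the cost of dimension $n$; the trade-off between such distortion and dimension/label-size is exactly what the abstract studies. The names `frechet_embedding`, `linf_distance`, and the toy metric `D` are hypothetical.

```python
import numpy as np

# Illustrative sketch (not from the paper): the classical Frechet embedding
# f(x) = (d(x, z))_{z in X} maps an n-point metric isometrically into
# l_infinity^n, i.e. distortion 1 but dimension n.

def frechet_embedding(D):
    """Row i of the distance matrix D serves as the embedding f(i)."""
    return np.asarray(D, dtype=float)

def linf_distance(f, i, j):
    """Recover an estimate of d(i, j) from the embedded vectors alone."""
    return float(np.max(np.abs(f[i] - f[j])))

# Toy metric: shortest-path distances on the path graph 0-1-2-3.
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]])

f = frechet_embedding(D)
# By the triangle inequality, max_z |d(i,z) - d(j,z)| = d(i,j), so the
# l_infinity distance between the embedded vectors reproduces the metric exactly.
assert all(linf_distance(f, i, j) == D[i, j]
           for i in range(4) for j in range(4))
```

In the paper's terminology, assigning each point its full distance row is also a (trivial) distance labeling; the interesting question is how much the dimension or label size can be reduced, possibly at the expense of some distortion, and whether labelings can do strictly better than embeddings.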
