Emergent Mind

Metric Distortion Bounds for Randomized Social Choice

(2111.03694)
Published Nov 5, 2021 in cs.GT, cs.DM, and cs.DS

Abstract

Consider the following social choice problem. Suppose we have a set of $n$ voters and $m$ candidates that lie in a metric space. The goal is to design a mechanism to choose a candidate whose average distance to the voters is as small as possible. However, the mechanism does not get direct access to the metric space. Instead, it gets each voter's ordinal ranking of the candidates by distance. Given only this partial information, what is the smallest worst-case approximation ratio (known as the distortion) that a mechanism can guarantee? A simple example shows that no deterministic mechanism can guarantee distortion better than $3$, and no randomized mechanism can guarantee distortion better than $2$. It has been conjectured that both of these lower bounds are optimal, and recently, Gkatzelis, Halpern, and Shah proved this conjecture for deterministic mechanisms. We disprove the conjecture for randomized mechanisms for $m \geq 3$ by constructing elections for which no randomized mechanism can guarantee distortion better than $2.0261$ for $m = 3$, $2.0496$ for $m = 4$, up to $2.1126$ as $m \to \infty$. We obtain our lower bounds by identifying a class of simple metrics that appear to capture much of the hardness of the problem, and we show that any randomized mechanism must have high distortion on one of these metrics. We provide a nearly matching upper bound for this restricted class of metrics as well. Finally, we conjecture that these bounds give the optimal distortion for every $m$, and provide a proof for $m = 3$, thereby resolving that case.
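The lower bounds of $3$ (deterministic) and $2$ (randomized) mentioned in the abstract both come from a simple two-candidate instance on the real line. The sketch below (a hypothetical illustration, not code from the paper) verifies the ratios numerically: half the voters rank $a$ first and half rank $b$ first, so the mechanism cannot distinguish the candidates from the ordinal input; the adversary then places the $a$-voters at the midpoint and the $b$-voters exactly at $b$.

```python
# Two-candidate instance on the real line witnessing the classic
# distortion lower bounds. Candidates a and b sit at 0 and 1; half the
# voters rank a first, half rank b first, so the ordinal profile is
# symmetric. If a mechanism picks a, the adversary places the a-voters
# at the midpoint 0.5 and the b-voters at b itself.

a, b = 0.0, 1.0
n = 10  # any even number of voters gives the same ratios

voters = [0.5] * (n // 2) + [1.0] * (n // 2)

def avg_cost(candidate, voters):
    """Average distance from the voters to a candidate."""
    return sum(abs(v - candidate) for v in voters) / len(voters)

cost_a = avg_cost(a, voters)  # (0.5 + 1.0) / 2 = 0.75
cost_b = avg_cost(b, voters)  # (0.5 + 0.0) / 2 = 0.25

# Deterministically picking a on this metric costs 3x the optimum.
print(cost_a / cost_b)  # 3.0

# A symmetric randomized mechanism must put probability 1/2 on each
# candidate; on this metric its expected cost is twice the optimum.
expected_cost = 0.5 * cost_a + 0.5 * cost_b
print(expected_cost / cost_b)  # 2.0
```

Since the ordinal input is symmetric between $a$ and $b$, the adversary can always arrange the metric against whichever candidate the mechanism favors, which is why these ratios are worst-case bounds rather than artifacts of this one instance.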
