Impossibility of Differentially Private Universally Optimal Mechanisms (1008.0256v1)
Abstract: The notion of a universally utility-maximizing privacy mechanism was recently introduced by Ghosh, Roughgarden, and Sundararajan [STOC 2009]. These are mechanisms that guarantee optimal utility to a large class of information consumers, simultaneously, while preserving Differential Privacy [Dwork, McSherry, Nissim, and Smith, TCC 2006]. Ghosh et al. have demonstrated, quite surprisingly, a case where such a universally-optimal differentially-private mechanism exists, when the information consumers are Bayesian. This result was recently extended by Gupte and Sundararajan [PODS 2010] to risk-averse consumers. Both positive results deal with mechanisms (approximately) computing a single count query (i.e., the number of individuals satisfying a specific property in a given population), and the starting point of our work is an attempt to extend these results to similar settings, such as sum queries with non-binary individual values, histograms, and two (or more) count queries. We show, however, that universally-optimal mechanisms do not exist for any of these queries, both for Bayesian and risk-averse consumers. For the Bayesian case, we go further and give a characterization of those functions that admit universally-optimal mechanisms, showing that a universally-optimal mechanism exists, essentially, only for a (single) count query. At the heart of our proof is a representation of a query function $f$ by its privacy constraint graph $G_f$, whose edges correspond to values resulting from applying $f$ to neighboring databases.
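To make the privacy constraint graph concrete, here is a minimal sketch in Python. It assumes (the paper's exact definition may differ in details) that databases are tuples over a finite alphabet, two databases are neighbors when they differ in exactly one entry, the vertices of $G_f$ are the values of $f$, and an edge joins $f(x)$ and $f(x')$ for every pair of neighboring databases $x, x'$. The function name `privacy_constraint_graph` and its parameters are illustrative, not from the paper.

```python
from itertools import product

def privacy_constraint_graph(f, n, alphabet=(0, 1)):
    """Sketch of the privacy constraint graph G_f: an edge joins
    f(x) and f(x') whenever databases x, x' (tuples of length n over
    `alphabet`) differ in exactly one entry.  This encoding of the
    paper's definition is an assumption of this sketch."""
    edges = set()
    for x in product(alphabet, repeat=n):
        for i in range(n):
            for v in alphabet:
                if v == x[i]:
                    continue
                # Neighboring database: change only entry i.
                x_nbr = x[:i] + (v,) + x[i + 1:]
                a, b = f(x), f(x_nbr)
                if a != b:
                    edges.add((min(a, b), max(a, b)))
    return edges

# For a single count query f(x) = sum(x) over binary databases,
# G_f is a path 0 - 1 - ... - n, since neighboring databases have
# counts differing by at most one.
print(privacy_constraint_graph(sum, n=4))
# {(0, 1), (1, 2), (2, 3), (3, 4)}
```

The path structure of $G_f$ for a single count query is what the positive results of Ghosh et al. exploit; queries whose constraint graphs are richer (e.g., histograms or pairs of count queries) are exactly the ones for which the paper rules out universally-optimal mechanisms.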