Emergent Mind

Abstract

Given a set D of tuples defined on a domain Omega, we study differentially private algorithms for constructing a histogram over Omega to approximate the tuple distribution in D. Existing solutions for the problem mostly adopt a hierarchical decomposition approach, which recursively splits Omega into sub-domains and computes a noisy tuple count for each sub-domain, until all noisy counts are below a certain threshold. This approach, however, requires that we (i) impose a limit h on the recursion depth in the splitting of Omega and (ii) set the noise in each count to be proportional to h. This leads to inferior data utility due to the following dilemma: if we use a small h, then the resulting histogram would be too coarse-grained to provide an accurate approximation of the data distribution; meanwhile, a large h would yield a fine-grained histogram, but its quality would be severely degraded by the increased amount of noise in the tuple counts. To remedy the deficiency of existing solutions, we present PrivTree, a histogram construction algorithm that also applies hierarchical decomposition but features a crucial (and somewhat surprising) improvement: when deciding whether or not to split a sub-domain, the amount of noise required in the corresponding tuple count is independent of the recursion depth. This enables PrivTree to adaptively generate high-quality histograms without even asking for a pre-defined threshold on the depth of sub-domain splitting. As concrete examples, we demonstrate an application of PrivTree in modelling spatial data, and show that it can also be extended to handle sequence data (where the decision in sub-domain splitting is not based on tuple counts but a more sophisticated measure). Our experiments on a variety of real datasets show that PrivTree significantly outperforms the state of the art in terms of data utility.
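To make the decomposition idea concrete, here is a minimal sketch of a PrivTree-style recursion on a 1-D domain. It is not the authors' implementation: the parameter names (`lam`, `delta`, `theta`) and their values are illustrative assumptions, and the final step of publishing a noisy count for each leaf is omitted. The key point from the abstract is visible in the code: the noise scale `lam` added to the splitting decision is a constant, independent of the recursion depth; instead, the raw count is decayed by `delta` per level (and floored), which is what lets the recursion terminate without a pre-defined depth limit.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def privtree_1d(points, lo, hi, lam=0.5, delta=1.0, theta=5.0):
    """PrivTree-style decomposition of the 1-D domain [lo, hi).

    lam   -- Laplace noise scale for the split decision (constant at every depth)
    theta -- splitting threshold
    delta -- per-level decay applied to counts, so deep sub-domains become
             increasingly unlikely to split and no depth limit h is needed
    Returns the leaf intervals, which partition [lo, hi).
    """
    leaves = []
    stack = [(lo, hi, 0)]  # (interval start, interval end, depth)
    while stack:
        a, b, depth = stack.pop()
        count = sum(a <= p < b for p in points)
        # Decayed count: subtract delta per level, floored at theta - delta,
        # so the same noise scale lam suffices at every recursion depth.
        decayed = max(count - depth * delta, theta - delta)
        if decayed + laplace_noise(lam) > theta:
            mid = (a + b) / 2.0
            stack.append((a, mid, depth + 1))
            stack.append((mid, b, depth + 1))
        else:
            # Leaf: in the full algorithm, a noisy tuple count would be
            # published for this sub-domain to form the histogram.
            leaves.append((a, b))
    return leaves
```

For a 2-D spatial dataset the same loop would split each region into quadrants (a quadtree) rather than halves; the decision rule is unchanged.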

