
The Cost of a Reductions Approach to Private Fair Optimization (1906.09613v4)

Published 23 Jun 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Through the lens of information-theoretic reductions, we examine a reductions approach to fair optimization and learning where a black-box optimizer is used to learn a fair model for classification or regression. Quantifying the complexity, both statistically and computationally, of making such models satisfy the rigorous definition of differential privacy is our end goal. We resolve a few open questions and show applicability to fair machine learning, hypothesis testing, and to optimizing non-standard measures of classification loss. Furthermore, our sample complexity bounds are tight amongst all strategies that jointly minimize a composition of functions. The reductions approach to fair optimization can be abstracted as the constrained group-objective optimization problem where we aim to optimize an objective that is a function of losses of individual groups, subject to some constraints. We give the first polynomial-time algorithms to solve the problem with $(\epsilon, 0)$ or $(\epsilon, \delta)$ differential privacy guarantees when defined on a convex decision set (for example, the $\ell_p$ unit ball) with convex constraints and losses. Accompanying information-theoretic lower bounds for the problem are presented. In addition, compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the $(\epsilon, \delta)$-differentially private algorithm we present provides asymptotically better sample complexity guarantees, resulting in an exponential improvement in certain parameter regimes. We introduce a class of bounded divergence linear optimizers, which could be of independent interest, and specialize to pure and approximate differential privacy.
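To make the abstract's problem statement concrete, here is a minimal sketch of the setting it describes; the symbols below (the decision vector $w$, the group losses $\ell_1, \dots, \ell_K$, the composition $f$, and the constraint functions $g_j$) are our own notation and are not fixed by the paper:

$$\min_{w \in \mathcal{W}} \; f\bigl(\ell_1(w), \dots, \ell_K(w)\bigr) \quad \text{subject to} \quad g_j\bigl(\ell_1(w), \dots, \ell_K(w)\bigr) \le 0, \quad j = 1, \dots, m,$$

where $\mathcal{W}$ is a convex decision set (for example, the $\ell_p$ unit ball), $\ell_k$ is the loss incurred by group $k$, and $f$ and each $g_j$ are convex. The privacy notions invoked are the standard ones: a randomized algorithm $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private if, for all datasets $D, D'$ differing in a single record and every measurable set of outputs $S$,

$$\Pr[\mathcal{A}(D) \in S] \le e^{\epsilon} \Pr[\mathcal{A}(D') \in S] + \delta,$$

and setting $\delta = 0$ recovers pure $(\epsilon, 0)$-differential privacy.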

Citations (3)

Authors (1)