Abstract

The field of algorithms with predictions has led to the development of several new algorithms that leverage (potentially erroneous) predictions to enhance their performance guarantees. The challenge is to devise algorithms that achieve optimal approximation guarantees as the prediction quality ranges from perfect (consistency) to arbitrarily poor (robustness). This framework is particularly appealing in mechanism design contexts, where predictions might convey private information about the agents. In this paper, we design strategyproof mechanisms that leverage predictions to achieve improved approximation guarantees for several variants of the Generalized Assignment Problem (GAP) in the private graph model. In this model, first introduced by Dughmi & Ghosh (2010), the set of resources that an agent is compatible with is private information. For the Bipartite Matching Problem (BMP), we give a deterministic group-strategyproof (GSP) mechanism that is $(1 + 1/\gamma)$-consistent and $(1 + \gamma)$-robust, where $\gamma \ge 1$ is a confidence parameter, and we prove that this trade-off is best possible. Remarkably, our mechanism draws inspiration from the renowned Gale-Shapley algorithm, incorporating the predictions as a crucial ingredient. Additionally, we give a randomized mechanism that is universally GSP and improves on these guarantees in expectation. The other GAP variants that we consider all build on a unified greedy mechanism that adds edges to the assignment according to a specific order. Our universally GSP mechanism randomizes over this greedy mechanism, our mechanism for BMP, and the predicted assignment, leading to $(1 + 3/\gamma)$-consistency and $(3 + \gamma)$-robustness in expectation. All our mechanisms also provide more fine-grained approximation guarantees that interpolate between consistency and robustness, depending on a natural error measure of the prediction.
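To make the prediction-augmented assignment idea concrete, the sketch below shows a minimal greedy bipartite-matching rule that first tries to honor a (possibly erroneous) predicted assignment and then fills the remaining agents greedily. This is an illustrative assumption for exposition only: the function name, the fixed tie-breaking order, and the two-pass structure are not taken from the paper, and the sketch does not capture the Gale-Shapley-style mechanism, the confidence parameter $\gamma$, or the strategyproofness analysis. It only illustrates the consistency/robustness tension: a correct prediction is recovered exactly, while a poor prediction still leaves a maximal matching.

```python
# Illustrative sketch only (hypothetical names and ordering, not the paper's mechanism):
# greedily match agents to resources, preferring edges from a predicted assignment.

def greedy_matching_with_prediction(agents, compat, predicted):
    """
    agents:    list of agent ids (processed in a fixed order)
    compat:    dict agent -> set of compatible resources (the private information)
    predicted: dict agent -> predicted resource (the predicted assignment)
    Returns a dict agent -> matched resource.
    """
    matching = {}
    used = set()

    # First pass: honor the prediction whenever it is feasible.
    for a in agents:
        r = predicted.get(a)
        if r is not None and r in compat[a] and r not in used:
            matching[a] = r
            used.add(r)

    # Second pass: greedily match remaining agents to any free compatible
    # resource; a fixed sorted order keeps the rule deterministic.
    for a in agents:
        if a in matching:
            continue
        for r in sorted(compat[a]):
            if r not in used:
                matching[a] = r
                used.add(r)
                break

    return matching


if __name__ == "__main__":
    agents = ["a1", "a2", "a3"]
    compat = {"a1": {"r1", "r2"}, "a2": {"r1"}, "a3": {"r2"}}
    predicted = {"a1": "r2", "a2": "r1"}  # happens to be feasible here
    print(greedy_matching_with_prediction(agents, compat, predicted))
```

If the prediction coincides with an optimal matching, the first pass reproduces it (consistency); if the prediction is arbitrarily bad, the second pass still returns a maximal matching, which bounds the worst-case loss (robustness).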
