
Building a Good Team: Secretary Problems and the Supermodular Degree (1507.06199v1)

Published 22 Jul 2015 in cs.DS

Abstract: In the Secretary Problem, one has to hire the best among n candidates. The candidates are interviewed one at a time, in a random order, and one has to decide on the spot whether to hire a candidate or continue interviewing. It is well known that the best candidate can be hired with a probability of 1/e (Dynkin, 1963). Recent works extend this problem to settings in which multiple candidates can be hired, subject to some constraint. Here, one wishes to hire a set of candidates maximizing a given set function. Almost all extensions considered in the literature assume the objective set function is either linear or submodular. Unfortunately, real-world functions might not have either of these properties. Consider, for example, a scenario in which one hires researchers for a project. Some researchers can substitute for others, but some combinations of researchers result in synergy (see, e.g., Woolley et al., Science 2010, for research on collective intelligence). The first phenomenon can be modeled by a submodular set function, while the latter cannot. In this work, we study the secretary problem with an arbitrary non-negative monotone objective function, subject to a general matroid constraint. It is not difficult to prove that, in general, only very poor results can be obtained for this class of objective functions. We tackle this hardness by combining the following: (1) parametrizing our algorithms by the supermodular degree of the objective function (defined by Feige and Izsak, ITCS 2013), which, roughly speaking, measures how far a function is from being submodular; and (2) suggesting an (arguably) natural model that permits approximation guarantees that are polynomial in the supermodular degree (as opposed to the standard model, which allows only exponential guarantees).
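
The abstract recalls Dynkin's classical 1/e guarantee for the single-hire secretary problem. As a minimal, illustrative sketch only (not the matroid-constrained, supermodular-degree-parametrized algorithms developed in this paper), the Python snippet below simulates the classic observe-then-commit stopping rule; the function names, candidate scores, and trial counts are assumptions made purely for demonstration.

```python
import random

def classic_secretary(values):
    """Classic 1/e stopping rule: observe the first ~n/e candidates,
    then hire the first candidate who beats everyone seen so far."""
    n = len(values)
    cutoff = int(n / 2.718281828459045)  # observation phase of roughly n/e candidates
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v  # hire on the spot
    return values[-1]  # otherwise we are forced to hire the last candidate

# Empirically, the rule hires the single best candidate with probability
# approaching 1/e as n grows (the setup below is illustrative only).
n, trials, hits = 50, 20_000, 0
for _ in range(trials):
    scores = random.sample(range(10 * n), n)  # distinct scores in random order
    if classic_secretary(scores) == max(scores):
        hits += 1
print(f"empirical success rate: {hits / trials:.3f}  (1/e ~= 0.368)")
```

In the multi-hire setting studied in the paper, this single-choice rule would be replaced by algorithms for a general matroid constraint whose approximation guarantees are parametrized by the supermodular degree of the objective function.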

Citations (27)


