Matching while Learning

(1603.04549)
Published Mar 15, 2016 in cs.LG, cs.DS, stat.ME, and stat.ML

Abstract

We consider the problem faced by a service platform that needs to match limited supply with demand but also to learn the attributes of new users in order to match them better in the future. We introduce a benchmark model with heterogeneous "workers" (demand) and a limited supply of "jobs" that arrive over time. Job types are known to the platform, but worker types are unknown and must be learned by observing match outcomes. Workers depart after performing a certain number of jobs. The expected payoff from a match depends on the pair of types and the goal is to maximize the steady-state rate of accumulation of payoff. Though we use terminology inspired by labor markets, our framework applies more broadly to platforms where a limited supply of heterogeneous products is matched to users over time. Our main contribution is a complete characterization of the structure of the optimal policy in the limit that each worker performs many jobs. The platform faces a trade-off for each worker between myopically maximizing payoffs (exploitation) and learning the type of the worker (exploration). This creates a multitude of multi-armed bandit problems, one for each worker, coupled together by the constraint on availability of jobs of different types (capacity constraints). We find that the platform should estimate a shadow price for each job type, and use the payoffs adjusted by these prices, first, to determine its learning goals and then, for each worker, (i) to balance learning with payoffs during the "exploration phase," and (ii) to myopically match after it has achieved its learning goals during the "exploitation phase."
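The policy structure described above (shadow prices per job type, price-adjusted payoffs, an exploration phase followed by myopic exploitation) can be illustrated with a minimal sketch. This is not the paper's algorithm: the confidence threshold, the variance-based exploration bonus, and all names here are assumptions for illustration only, and the shadow prices are taken as given rather than estimated.

```python
import numpy as np

def choose_job(belief, payoff, prices, confidence=0.95):
    """Illustrative price-adjusted explore/exploit rule (not the paper's policy).

    belief  : posterior over worker types, shape (W,)
    payoff  : expected payoff matrix, shape (W, J) for W worker types, J job types
    prices  : shadow prices per job type, shape (J,) (assumed given)
    Returns the index of the job type to assign next.
    """
    # Adjust payoffs by the shadow prices to account for capacity constraints.
    adjusted = payoff - prices

    if belief.max() >= confidence:
        # Exploitation phase: the worker's type is effectively learned, so
        # myopically match the most likely type to its best adjusted job.
        likely_type = int(np.argmax(belief))
        return int(np.argmax(adjusted[likely_type]))

    # Exploration phase: trade off expected adjusted payoff against an
    # information bonus. Here the bonus is the posterior variance of the
    # adjusted payoff across types -- a stand-in for the paper's learning goals.
    expected = belief @ adjusted                     # shape (J,)
    info_bonus = belief @ (adjusted - expected) ** 2  # shape (J,)
    return int(np.argmax(expected + info_bonus))
```

With a confident posterior the rule reduces to a myopic match on price-adjusted payoffs; with a diffuse posterior it favors jobs whose outcomes discriminate between candidate types, which is the explore/exploit trade-off the abstract describes.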
