Mechanism Design and Risk Aversion (1107.4722v3)
Abstract: We develop efficient algorithms to construct utility-maximizing mechanisms in the presence of risk-averse players (buyers and sellers) in Bayesian settings. We model risk aversion by a concave utility function, and players play strategically to maximize their expected utility. Bayesian mechanism design has usually focused on maximizing expected revenue in a {\em risk-neutral} environment, and no succinct characterization of expected-utility-maximizing mechanisms is known even for single-parameter multi-unit auctions. We first consider the problem of designing an optimal dominant-strategy incentive compatible (DSIC) mechanism for a risk-averse seller in the case of multi-unit auctions, and we give a poly-time computable sequential posted-price mechanism (SPM) that is a $(1-1/e-\epsilon)$-approximation to the expected utility of the seller in an optimal DSIC mechanism. Our result is based on a novel application of a correlation-gap bound, along with {\em splitting} and {\em merging} of random variables to redistribute probability mass across buyers. This allows us to reduce our problem to checking the feasibility of a small number of distinct configurations, each of which corresponds to a covering LP. A feasible solution to the LP gives us the distribution on prices for each buyer to use in a randomized SPM. We next consider the setting in which the buyers as well as the seller are risk averse, and the objective is to maximize the seller's expected utility. We design a truthful-in-expectation mechanism whose expected utility is a $(1-1/e-\epsilon)^3$-approximation to that of the optimal Bayesian incentive compatible (BIC) mechanism, under two mild assumptions. Our mechanism runs in multiple rounds, processing each buyer in a given round with small probability. Lastly, we consider the problem of revenue maximization for a risk-neutral seller in the presence of risk-averse buyers, and give a poly-time algorithm to design an optimal mechanism for the seller.
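To make the randomized SPM described above concrete, here is a minimal Python sketch of how such a mechanism would run once the per-buyer price distributions are in hand. The `price_dists` input stands in for a feasible covering-LP solution; the uniform value distributions and the square-root seller utility are illustrative assumptions only (any concave utility fits the model), not the paper's construction.

```python
# Sketch of a randomized sequential posted-price mechanism (SPM) for a
# k-unit auction with a risk-averse seller. Assumed inputs: for each
# buyer, a distribution over posted prices (hypothetically derived from
# a feasible covering LP, per the abstract) and a sampler for the
# buyer's private value. sqrt is a stand-in for any concave utility.

import math
import random

def concave_utility(revenue: float) -> float:
    """Concave seller utility applied to total revenue."""
    return math.sqrt(revenue)

def run_randomized_spm(price_dists, value_samplers, k: int) -> float:
    """Visit buyers in sequence, offer each a price drawn from that
    buyer's price distribution, and sell until the k units run out.
    Returns the seller's (concave) utility for this single run."""
    revenue, units_left = 0.0, k
    for prices_probs, sample_value in zip(price_dists, value_samplers):
        if units_left == 0:
            break
        prices, probs = zip(*prices_probs)
        p = random.choices(prices, weights=probs)[0]
        if sample_value() >= p:   # buyer accepts iff value >= posted price
            revenue += p
            units_left -= 1
    return concave_utility(revenue)

if __name__ == "__main__":
    # Two buyers with uniform [0, 1] values; hypothetical price
    # distributions as (price, probability) pairs.
    price_dists = [[(0.4, 0.5), (0.7, 0.5)], [(0.5, 1.0)]]
    samplers = [random.random, random.random]
    runs = 20000
    est = sum(run_randomized_spm(price_dists, samplers, k=1)
              for _ in range(runs)) / runs
    print(f"estimated expected seller utility: {est:.3f}")
```

Because the seller's utility is concave in revenue, expected utility is estimated by averaging over many mechanism runs, as in the Monte Carlo loop above; the paper's contribution is the poly-time construction of price distributions that makes this expectation approximately optimal.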