
Abstract

Motivated by practical applications, recent works have considered maximization of sums of a submodular function $g$ and a linear function $\ell$. Almost all such works, to date, have studied only the special case of this problem in which $g$ is also guaranteed to be monotone. Therefore, in this paper we systematically study the simplest version of this problem in which $g$ is allowed to be non-monotone, namely the unconstrained variant, which we term Regularized Unconstrained Submodular Maximization (RegularizedUSM). Our main algorithmic result is the first non-trivial guarantee for general RegularizedUSM. For the special case of RegularizedUSM in which the linear function $\ell$ is non-positive, we prove two inapproximability results, showing that the algorithmic result implied for this case by previous works is not far from optimal. Finally, we reanalyze the known Double Greedy algorithm to obtain improved guarantees for the special case of RegularizedUSM in which the linear function $\ell$ is non-negative; and we complement these guarantees by showing that it is not possible to obtain a $(1/2, 1)$-approximation for this case (despite intuitive arguments suggesting that this approximation guarantee is natural).
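
For readers unfamiliar with the Double Greedy algorithm mentioned above, the following is a minimal sketch of the classical randomized Double Greedy of Buchbinder et al. for unconstrained submodular maximization, applied here to a combined objective $f(S) = g(S) + \ell(S)$. The function names, oracle interface, tie-breaking rule, and toy instance are illustrative assumptions; this is not the paper's exact analyzed variant or its $(\alpha, \beta)$-style guarantees, only the standard algorithmic skeleton being reanalyzed.

```python
import random

def double_greedy(ground_set, f):
    """Randomized Double Greedy (Buchbinder et al.) sketch for
    unconstrained submodular maximization of a set function f.

    Here f may be a regularized objective f(S) = g(S) + l(S); the paper's
    guarantees are stated separately in terms of g and l, which this
    sketch does not capture -- it only shows the algorithmic skeleton.
    """
    X, Y = set(), set(ground_set)
    for e in ground_set:
        # Marginal value of adding e to X, and of removing e from Y.
        a = f(X | {e}) - f(X)
        b = f(Y - {e}) - f(Y)
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        if a_plus + b_plus == 0.0:
            X.add(e)            # Tie-break: keep the element.
        elif random.random() < a_plus / (a_plus + b_plus):
            X.add(e)            # Keep e.
        else:
            Y.discard(e)        # Drop e.
    return X  # At this point X == Y.

# Toy usage (hypothetical instance): g is a coverage-style submodular
# function, l is a linear term whose weights may be negative.
if __name__ == "__main__":
    items = [0, 1, 2, 3]
    covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}, 3: {"d"}}
    weights = {0: -0.5, 1: 0.2, 2: -0.1, 3: 0.3}
    g = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
    l = lambda S: sum(weights[e] for e in S)
    print(double_greedy(items, lambda S: g(S) + l(S)))
```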
