Emergent Mind

Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

(2305.01593)
Published May 2, 2023 in cs.DS

Abstract

We revisit the classic 0-1-Knapsack problem, in which we are given $n$ items with their weights and profits as well as a weight budget $W$, and the goal is to find a subset of items of total weight at most $W$ that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, $p_{\max}$, and the largest weight of any item, $w_{\max}$. Our main results are algorithms for 0-1-Knapsack running in time $\tilde{O}(n\,w_{\max}\,p_{\max}^{2/3})$ and $\tilde{O}(n\,p_{\max}\,w_{\max}^{2/3})$, improving upon an algorithm in time $O(n\,p_{\max}\,w_{\max})$ by Pisinger [J. Algorithms '99]. In the regime $p_{\max} \approx w_{\max} \approx n$ (and $W \approx \mathrm{OPT} \approx n^2$) our algorithms are the first to break the cubic barrier $n^3$. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function $f \colon [n] \to \mathbf{Z}$ is $\Delta$-near convex with $\Delta \geq 1$ if there is a convex function $\breve{f}$ such that $\breve{f}(i) \leq f(i) \leq \breve{f}(i) + \Delta$ for every $i$. We design an algorithm computing the min-plus convolution of two $\Delta$-near convex functions in time $\tilde{O}(n\Delta)$. This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
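To make the two central definitions concrete, here is an illustrative Python sketch: a naive $O(n^2)$ min-plus convolution, and a brute-force check of $\Delta$-near convexity via the greatest convex minorant (the lower convex hull of the points $(i, f(i))$). This is not the paper's $\tilde{O}(n\Delta)$ algorithm; the function names and the hull-based check are my own choices for exposition.

```python
# Illustrative sketch only. The paper's contribution is an O~(n*Delta)
# min-plus convolution for Delta-near convex inputs; the naive quadratic
# version below just defines the operation being sped up.

def min_plus_convolution(f, g):
    """(f * g)(k) = min over i + j = k of f(i) + g(j); naive O(n^2)."""
    n, m = len(f), len(g)
    h = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            h[i + j] = min(h[i + j], f[i] + g[j])
    return h

def is_delta_near_convex(f, delta):
    """Decide whether some convex breve_f satisfies
    breve_f(i) <= f(i) <= breve_f(i) + delta for all i.
    It suffices to test the greatest convex minorant of f, i.e. the
    piecewise-linear lower convex hull of the points (i, f(i)): any
    other convex minorant lies below it, so it minimizes f(i) - breve_f(i).
    """
    n = len(f)
    hull = []  # indices of lower-hull vertices, built left to right
    for i in range(n):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # Pop b while slope(a, b) > slope(b, i), i.e. a concave turn
            # (cross-multiplied to stay in integer arithmetic).
            if (f[b] - f[a]) * (i - b) > (f[i] - f[b]) * (b - a):
                hull.pop()
            else:
                break
        hull.append(i)
    # Evaluate the hull at every index and compare against f.
    for a, b in zip(hull, hull[1:]):
        for i in range(a, b + 1):
            breve = f[a] + (f[b] - f[a]) * (i - a) / (b - a)
            if f[i] - breve > delta:
                return False
    return True
```

For example, $f = (0, 2, 0)$ has greatest convex minorant $\breve{f} = (0, 0, 0)$, so it is $2$-near convex but not $1$-near convex.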
