
Abstract

In the moldable job scheduling problem, a set of $n$ jobs must be assigned to $m$ machines so as to minimize the time needed to process all jobs. Each job is moldable, meaning it can be assigned not just to a single machine but to any number of the identical machines. We assume that the work of each job is monotone and that jobs may be placed non-contiguously. In this work we present a $(\frac{3}{2} + \epsilon)$-approximation algorithm with a worst-case runtime of $O(n \log^2(\frac{1}{\epsilon} + \frac{\log(\epsilon m)}{\epsilon}) + \frac{n}{\epsilon} \log(\frac{1}{\epsilon}) \log(\epsilon m))$ when $m \le 16n$. This improves on the best known algorithm of the same approximation quality by a factor of $\frac{1}{\epsilon}$ and several logarithmic dependencies. We complement this result with an improved FPTAS with running time $O(n \log^2(\frac{1}{\epsilon} + \frac{\log(\epsilon m)}{\epsilon}))$ for instances with many machines, $m > 8\frac{n}{\epsilon}$. Together this yields a $\frac{3}{2}$-approximation with runtime $O(n \log^2(\log m))$ when $m > 16n$. We achieve these results through one new core observation: in an approximation setting one does not need to consider all $m$ possible allotments for each job. We show that the number of relevant allotments for each job can be reduced from $m$ to $O(\frac{1}{\epsilon} + \frac{\log(\epsilon m)}{\epsilon})$. This observation immediately yields the improved FPTAS. For the other result we use a reduction to the knapsack problem first introduced by Mounié, Rapine and Trystram. Using the reduced number of allotments, we give a new, elaborate rounding scheme and define a modified version of this knapsack instance, which in turn allows the application of a convolution-based algorithm by Axiotis and Tzamos. We further back our theoretical results with a practical implementation and compare our algorithm to the previously best known result.
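
The bound of $O(\frac{1}{\epsilon} + \frac{\log(\epsilon m)}{\epsilon})$ relevant allotments per job is the size one gets from a geometric rounding of machine counts. The Python sketch below is only a minimal illustration of that kind of reduction, under the assumption that counts up to $\lceil 1/\epsilon \rceil$ are kept exactly and larger counts are sampled on a $(1+\epsilon)$-spaced grid; it is not the paper's exact construction, and the function name `candidate_allotments` is hypothetical.

```python
import math

def candidate_allotments(m: int, eps: float) -> list[int]:
    """Illustrative sketch (not the paper's construction): build a reduced
    set of machine counts per job of size O(1/eps + log(eps*m)/eps)."""
    cutoff = min(m, math.ceil(1 / eps))
    # Small allotments (at most ceil(1/eps) machines) are kept exactly.
    counts = set(range(1, cutoff + 1))
    # Larger allotments are sampled geometrically with ratio (1 + eps);
    # there are roughly log_{1+eps}(eps * m) = O(log(eps*m)/eps) of them.
    k = float(cutoff)
    while k < m:
        k *= 1 + eps
        counts.add(min(m, math.ceil(k)))
    return sorted(counts)

if __name__ == "__main__":
    cands = candidate_allotments(m=10**6, eps=0.1)
    print(len(cands), "candidate allotments instead of", 10**6)
```

For $m = 10^6$ and $\epsilon = 0.1$ this keeps only a few hundred candidate machine counts per job rather than $10^6$, which is what makes the subsequent knapsack-based steps cheap.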
