
On the Performance of Large Loss Systems with Adaptive Multiserver Jobs

(2309.00060)
Published Aug 31, 2023 in math.PR and cs.PF

Abstract

In this paper, we study systems where each job or request can be split into a flexible number of sub-jobs, up to a maximum limit. The number of sub-jobs a job is split into depends on the number of available servers found upon its arrival. All sub-jobs of a job are then processed in parallel at different servers, leading to a linear speed-up of the job. We refer to such jobs as {\em adaptive multi-server jobs}. We study the problem of optimal assignment of such jobs when each server can process at most one sub-job at any given instant and there is no waiting room in the system. We assume that, upon arrival, a job can only access a randomly sampled subset of $k(n)$ servers from a total of $n$ servers, and the number of sub-jobs is determined based on the number of idle servers within the sampled subset. We analyze the steady-state performance of the system when the system load varies according to $\lambda(n) = 1 - \beta n^{-\alpha}$ for $\alpha \in [0,1)$ and $\beta \geq 0$. Our interest is to find how large the subset size $k(n)$ should be in order to have zero blocking and maximum speed-up in the limit as $n \to \infty$. We first characterize the system's performance when jobs have access to the full system, i.e., $k(n)=n$. In this setting, we show that the blocking probability approaches zero at rate $O(1/\sqrt{n})$ and the mean response time of accepted jobs approaches its minimum achievable value at rate $O(1/n)$. We then consider the case where jobs only have access to a subset of servers, i.e., $k(n) < n$. We show that as long as $k(n)=\omega(n^{\alpha})$, the same asymptotic performance can be achieved as in the case with full system access. In particular, for $k(n)=\Theta(n^{\alpha} \log n)$, we show that both the blocking probability and the mean response time approach their desired limits at rate $O(n^{-(1-\alpha)/2})$.
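
The abstract describes the model in words; the sketch below is one way to simulate such a loss system and estimate its blocking probability and mean response time empirically. It is an illustrative, assumption-laden sketch rather than the paper's analysis: the arrival-rate normalization, the exponential job-size distribution, the maximum split d_max, and all parameter values are assumptions introduced here, whereas the paper's results are asymptotic and analytical.

import heapq
import random

def simulate(n=1000, k=200, d_max=8, alpha=0.5, beta=1.0,
             mu=1.0, horizon=20000.0, seed=0):
    """Return (blocking fraction, mean response time of completed jobs)."""
    rng = random.Random(seed)
    load = 1.0 - beta * n ** (-alpha)        # lambda(n) = 1 - beta * n^{-alpha}
    arrival_rate = load * n * mu             # assumed normalization of the offered load
    idle = set(range(n))                     # ids of currently idle servers
    departures = []                          # heap of (finish_time, start_time, server_ids)
    arrivals = blocked = completed = 0
    total_response = 0.0
    t = rng.expovariate(arrival_rate)        # time of the first arrival
    while t < horizon:
        # release the servers of all jobs that finish before this arrival
        while departures and departures[0][0] <= t:
            finish, start, servers = heapq.heappop(departures)
            idle.update(servers)
            total_response += finish - start
            completed += 1
        arrivals += 1
        # sample k servers uniformly at random; keep the idle ones
        free = [s for s in rng.sample(range(n), k) if s in idle]
        if not free:
            blocked += 1                     # no idle server in the sample: job is lost
        else:
            c = min(len(free), d_max)        # adaptive number of sub-jobs
            chosen = free[:c]
            idle.difference_update(chosen)
            size = rng.expovariate(mu)       # total job size (assumed exponential)
            # linear speed-up: c sub-jobs in parallel finish after size / c time units
            heapq.heappush(departures, (t + size / c, t, chosen))
        t += rng.expovariate(arrival_rate)
    return blocked / arrivals, total_response / max(completed, 1)

if __name__ == "__main__":
    for k in (50, 200, 1000):                # k(n) = n corresponds to full system access
        b, r = simulate(k=k)
        print(f"k={k}: blocking ~ {b:.4f}, mean response time ~ {r:.4f}")

Running the sketch with growing k illustrates the qualitative trend the paper proves: larger sampled subsets drive the blocking probability toward zero and the response time toward its minimum, with k(n) = n as the best case.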
