An Efficient Max-Min Resource Allocator and Task Scheduling Algorithm in Cloud Computing Environment (1611.08864v1)

Published 27 Nov 2016 in cs.DC

Abstract: Cloud computing is a new paradigm that provides dynamic computing services to cloud users through datacenters, which rely on datacenter brokers to discover resources and assign them virtually. The focus of this research is to optimize resource allocation in the cloud by enhancing the Max-Min scheduling algorithm to increase efficiency in terms of completion time (makespan). This is key to improving the performance of cloud scheduling and narrowing the performance gap between cloud service providers and cloud resource consumers. The standard Max-Min algorithm selects the task with the maximum execution time and assigns it to the fastest available machine, that is, the resource capable of giving the minimum completion time. The algorithm prioritizes tasks with maximum execution time before those with minimum execution time in order to minimize makespan. Its drawback is that executing the longest tasks first may increase the makespan and delay the execution of tasks with minimum execution time when long tasks outnumber short ones; hence the need to improve it to mitigate this delay. CloudSim is used to compare the effectiveness of the improved Max-Min algorithm with the traditional one. The experimental results show that the improved algorithm is efficient and produces a better makespan than Max-Min and DataAware.
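
For reference, the sketch below illustrates the classic Max-Min heuristic that the abstract describes: each round, every unscheduled task is tentatively placed on the machine giving it the minimum completion time, and the task whose minimum completion time is largest is scheduled first. This is a minimal illustration, not the paper's improved variant (which the abstract does not detail); the function and variable names (max_min_schedule, exec_time, ready) and the example numbers are assumptions for illustration only.

```python
# Minimal sketch of the classic Max-Min scheduling heuristic (assumed form;
# the paper's improved variant is not specified in the abstract).

def max_min_schedule(exec_time):
    """exec_time[t][m] = execution time of task t on machine m.
    Returns a list of (task, machine) assignments and the resulting makespan."""
    n_tasks = len(exec_time)
    n_machines = len(exec_time[0])
    ready = [0.0] * n_machines          # time at which each machine becomes free
    unassigned = set(range(n_tasks))
    schedule = []

    while unassigned:
        best_task, best_machine, best_ct = None, None, -1.0
        for t in unassigned:
            # Minimum completion time of task t over all machines.
            m_min = min(range(n_machines), key=lambda m: ready[m] + exec_time[t][m])
            ct = ready[m_min] + exec_time[t][m_min]
            # Max-Min rule: among these per-task minima, pick the LARGEST one.
            if ct > best_ct:
                best_task, best_machine, best_ct = t, m_min, ct
        schedule.append((best_task, best_machine))
        ready[best_machine] = best_ct
        unassigned.remove(best_task)

    return schedule, max(ready)         # makespan = latest machine finish time


if __name__ == "__main__":
    # Hypothetical example: 3 tasks on 2 machines.
    et = [[14.0, 16.0],
          [5.0, 6.0],
          [3.0, 4.0]]
    plan, makespan = max_min_schedule(et)
    print(plan, makespan)
```

As the abstract notes, scheduling the longest tasks first in this way can delay short tasks and inflate makespan when long tasks dominate the workload, which is the behavior the proposed improvement targets.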

Citations (16)