
A proactive autoscaling and energy-efficient VM allocation framework using online multi-resource neural network for cloud data center (2212.01896v1)

Published 4 Dec 2022 in cs.DC

Abstract: This work proposes an energy-efficient resource provisioning and allocation framework to meet the dynamic demands of future applications. Frequent variations in a cloud user's resource demand lead to excess power consumption, resource wastage, and degradation of performance and Quality-of-Service. The proposed framework addresses these challenges by precisely matching an application's predicted resource requirement with the resource capacity of VMs, thereby consolidating the entire load on the minimum number of energy-efficient physical machines (PMs). The three consecutive contributions of the proposed work are: an Online Multi-Resource Feed-forward Neural Network (OM-FNN) to forecast the multiple resource demands of future applications concurrently; autoscaling of VMs based on clustering of the predicted resource requirements; and allocation of the scaled VMs on energy-efficient PMs. The integrated approach successively optimizes resource utilization, saves energy, and automatically adapts to changes in future application resource demand. The proposed framework is evaluated using real workload traces from the benchmark Google Cluster Dataset and compared against different scenarios, including energy-efficient VM placement (VMP) with resource prediction only, VMP without resource prediction and autoscaling, and optimal VMP with autoscaling based on actual resource utilization. The observed results demonstrate that the proposed integrated approach achieves near-optimal performance against optimal VMP and outperforms the rest of the VMPs, with power savings and resource utilization improvements of up to 88.5% and 21.12%, respectively. In addition, the OM-FNN predictor shows better accuracy and lower time and space complexity than a traditional single-input, single-output feed-forward neural network predictor.
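
The abstract outlines a three-stage pipeline: concurrent multi-resource demand forecasting with a multi-output feed-forward network, clustering of the predicted demands to derive right-sized VMs (autoscaling), and consolidation of those VMs onto as few physical machines as possible. The sketch below illustrates that pipeline in miniature on synthetic traces; the layer sizes, the use of plain k-means for clustering, the first-fit-decreasing packing heuristic, and all names and parameters are illustrative assumptions, not the paper's exact OM-FNN, autoscaling, or placement algorithms.

```python
import numpy as np

# Illustrative sketch of the abstract's pipeline (assumptions, not the paper's code):
# 1) a multi-input, multi-output feed-forward net forecasts CPU and memory demand,
# 2) predicted demands are clustered to derive a small set of VM sizes (autoscaling),
# 3) the scaled VMs are packed onto the fewest physical machines.
rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """One hidden layer shared across resources, one output per resource."""
    return {"W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def forward(p, X):
    H = np.tanh(X @ p["W1"] + p["b1"])
    return H @ p["W2"] + p["b2"], H

def sgd_step(p, X, Y, lr=0.05):
    """Single online gradient step on squared error (stand-in for online learning)."""
    Y_hat, H = forward(p, X)
    dY = (Y_hat - Y) / len(X)
    p["W2"] -= lr * H.T @ dY;  p["b2"] -= lr * dY.sum(0)
    dH = (dY @ p["W2"].T) * (1 - H**2)
    p["W1"] -= lr * X.T @ dH;  p["b1"] -= lr * dH.sum(0)

def kmeans(X, k, iters=20):
    """Plain k-means as a stand-in for demand clustering."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers, labels

def first_fit_decreasing(vm_sizes, pm_capacity):
    """Pack VM demand vectors onto as few identical PMs as possible."""
    pms = []                                      # remaining capacity of each open PM
    for vm in sorted(vm_sizes, key=lambda v: -v.sum()):
        for free in pms:
            if np.all(vm <= free):
                free -= vm
                break
        else:
            pms.append(pm_capacity - vm)          # open a new PM
    return len(pms)

# Toy end-to-end run on two synthetic resource traces (CPU, memory).
T, window = 400, 8
trace = (0.5 + 0.3 * np.sin(np.arange(T)[:, None] * [0.05, 0.11])
         + 0.05 * rng.normal(size=(T, 2)))
net = make_mlp(n_in=window * 2, n_hidden=16, n_out=2)
for t in range(window, T - 1):                    # online: one update per new sample
    sgd_step(net, trace[t - window:t].reshape(1, -1), trace[t:t + 1])

preds, _ = forward(net, trace[T - window - 1:T - 1].reshape(1, -1))
demands = np.clip(trace[-50:], 0, 1)              # pretend these are predicted demands
vm_sizes, assignment = kmeans(demands, k=4)       # autoscale to 4 VM flavours
n_pms = first_fit_decreasing([vm_sizes[a] for a in assignment],
                             pm_capacity=np.array([2.0, 2.0]))
print("next-step forecast:", preds.round(3), "| PMs used:", n_pms)
```

The shared hidden layer mirrors the advantage the abstract claims for a single multi-output predictor over separate single-input, single-output networks: one model serves all resource dimensions, so prediction time and parameter count do not grow with the number of resources tracked.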

Citations (80)

