Resource Scheduling in Edge Computing: A Survey (2108.08059v1)

Published 18 Aug 2021 in cs.NI and eess.SP

Abstract: With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving the services and functions located in the cloud to the proximity of users, edge computing can provide powerful communication, storage, networking, and communication capacity. The resource scheduling in edge computing, which is the key to the success of edge computing systems, has attracted increasing research interests. In this paper, we survey the state-of-the-art research findings to know the research progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. Particularly, we introduce a unified model before summarizing the current works on resource scheduling from three research issues, including computation offloading, resource allocation, and resource provisioning. Based on two modes of operation, i.e., centralized and distributed modes, different techniques for resource scheduling are discussed and compared. Also, we summarize the main performance indicators based on the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios involved in the research of resource scheduling in edge computing. Finally, we highlight some open research challenges yet to be addressed and outline several open issues as the future research direction.

Citations (270)

Summary

  • The paper presents a comprehensive review of resource scheduling frameworks within a three-tier edge computing architecture, focusing on computation offloading and allocation methods.
  • It examines both centralized and distributed approaches, detailing techniques such as convex optimization, heuristic methods, and auction mechanisms for resource management.
  • The study highlights key challenges and future directions, including flexible computation models, security measures, and AI-driven solutions for scalable IoT applications.

Insights on "Resource Scheduling in Edge Computing: A Survey"

The paper "Resource Scheduling in Edge Computing: A Survey" presents a comprehensive review of methodologies and frameworks for resource scheduling within the context of edge computing. Recognizing the imperative of meeting the burgeoning demands of Internet of Things (IoT) applications, the authors, Quyuan Luo et al., meticulously analyze and summarize significant developments in this research landscape. This survey offers a kernel for understanding the plethora of strategies available to optimize the provisioning and allocation of resources across a tripartite architecture comprising the thing, edge, and cloud layers.

Key Aspects of the Survey

  1. Architectural Framework:
    • The survey delineates a three-tier architecture for edge computing: the thing layer (end devices), the edge layer (intermediate nodes capable of processing), and the cloud layer (traditional data centers). The identification of collaboration forms—things-edge, things-edge-cloud, edge-edge, and edge-cloud—is central to understanding the diverse models of interaction in task processing and workload distribution.
  2. Research Issues:
    • Computation offloading, resource allocation, and resource provisioning are foregrounded as the three pivotal areas of concern. The authors present a unified framework to describe these processes, citing energy consumption, latency, cost, utility, profit, and resource utilization as critical performance indicators.
    • They offer substantial insights into the nuances of computation offloading, including its direction (device-to-edge, edge-to-cloud) and granularity (binary and partial offloading); a minimal sketch of a binary offloading decision appears after this list.
  3. Methodological Techniques:
    • The survey examines both centralized and distributed methodologies in resource scheduling. Centralized methods utilize convex optimization, approximation algorithms, and heuristic approaches to manage resource constraints, while distributed methods leverage game theory, auction mechanisms, and federated learning to foster decentralized decision-making; illustrative sketches of both modes follow this list.
  4. Application Scenarios:
    • Highlighted application domains include Unmanned Aerial Vehicles (UAVs), Connected and Autonomous Vehicles (CAVs), video services, smart cities, smart health, smart manufacturing, and smart homes. These sectors exhibit unique requirements and constraints that resource scheduling strategies must address, underscoring the need for adaptive and context-aware solutions.
  5. Challenges and Future Directions:
    • Open challenges are elucidated, such as developing flexible computation models and architectural enhancements, managing heterogeneity within distributed systems, enforcing security and privacy measures, and dynamically provisioning resources to meet varying workloads. The authors also point out the necessity for real-world evaluations and test environments to substantiate the scalability and efficacy of proposed frameworks.
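
To make the offloading trade-off in item 2 concrete, here is a minimal Python sketch of a binary offloading decision. It compares local execution against device-to-edge offloading using a weighted latency/energy cost, two of the performance indicators the survey lists. All numeric parameters (device and edge CPU frequencies, uplink rate, transmit power, the effective-capacitance constant kappa, and the cost weights) are illustrative assumptions, not values taken from the survey.

```python
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float   # input data to upload (bits)
    cpu_cycles: float  # computation workload (CPU cycles)

def binary_offload_decision(task: Task,
                            f_local: float = 1e9,       # device CPU frequency (Hz), assumed
                            f_edge: float = 10e9,       # edge CPU frequency (Hz), assumed
                            uplink_rate: float = 20e6,  # uplink data rate (bit/s), assumed
                            tx_power: float = 0.5,      # transmit power (W), assumed
                            kappa: float = 1e-27,       # effective switched capacitance, assumed
                            w_latency: float = 0.5,     # weight on latency in the cost
                            w_energy: float = 0.5):     # weight on energy in the cost
    """Return ('local' or 'offload', local_cost, offload_cost)."""
    # Local execution: latency = cycles / f_local, energy = kappa * cycles * f_local^2
    t_local = task.cpu_cycles / f_local
    e_local = kappa * task.cpu_cycles * f_local ** 2

    # Offloading: upload time + edge execution time; the device spends energy only on transmission
    t_tx = task.data_bits / uplink_rate
    t_off = t_tx + task.cpu_cycles / f_edge
    e_off = tx_power * t_tx

    cost_local = w_latency * t_local + w_energy * e_local
    cost_off = w_latency * t_off + w_energy * e_off
    return ("offload" if cost_off < cost_local else "local"), cost_local, cost_off

# Assumed example: a 1 MB input with a 1 Gcycle workload
decision, c_loc, c_off = binary_offload_decision(Task(data_bits=8e6, cpu_cycles=1e9))
print(decision, round(c_loc, 3), round(c_off, 3))  # offload 1.0 0.35
```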
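
As a toy illustration of the centralized, optimization-based style of resource allocation in item 3, the sketch below divides a fixed edge CPU budget among offloaded tasks so as to minimize total execution latency. Minimizing the sum of c_i / f_i subject to the f_i summing to F is convex, and the KKT conditions give the closed form f_i = F * sqrt(c_i) / sum_j sqrt(c_j). The workloads and CPU budget are assumed values; this is a generic textbook formulation, not an algorithm prescribed by the survey.

```python
from math import sqrt

def allocate_edge_cpu(workloads, total_freq):
    """Split an edge server's CPU budget (Hz) among tasks to minimize total latency.

    Minimizes sum_i c_i / f_i subject to sum_i f_i = total_freq; the KKT conditions
    give the closed form f_i = total_freq * sqrt(c_i) / sum_j sqrt(c_j).
    """
    norm = sum(sqrt(c) for c in workloads)
    return [total_freq * sqrt(c) / norm for c in workloads]

# Assumed example: three offloaded tasks (CPU cycles) sharing a 10 GHz edge server
cycles = [1e9, 4e9, 9e9]
freqs = allocate_edge_cpu(cycles, total_freq=10e9)
latencies = [c / f for c, f in zip(cycles, freqs)]
print([round(f / 1e9, 2) for f in freqs])  # GHz per task: [1.67, 3.33, 5.0]
print(round(sum(latencies), 3))            # total latency under the optimal split: 3.6
```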
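
For the distributed mode, here is a minimal sketch of a sealed-bid second-price (Vickrey) auction, one of the auction mechanisms the survey groups under distributed scheduling. Treating devices' bids for a single edge resource block as given valuations is an illustrative assumption; auction-based schedulers in the literature are typically multi-unit and iterative.

```python
def second_price_auction(bids):
    """Single-item sealed-bid second-price (Vickrey) auction.

    bids: dict mapping device id -> bid (its valuation of an edge resource block).
    Returns (winner, price): the highest bidder wins and pays the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Assumed example: three IoT devices bidding for one edge computing slot
winner, price = second_price_auction({"dev_a": 3.0, "dev_b": 5.5, "dev_c": 4.2})
print(winner, price)  # dev_b wins and pays 4.2
```

The second-price rule is used here only because it makes truthful bidding a dominant strategy, which keeps the example short.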

Implications and Speculations

The survey illuminates the symbiotic relationship between edge computing and IoT, showing how efficient resource scheduling can improve system responsiveness and user experience by reducing latency and conserving energy. The authors anticipate that, as data volumes continue to grow, resource scheduling will evolve alongside advances in AI and machine learning methodologies.

Anticipated future developments in edge computing encompass the integration of serverless architectures, employing blockchain technology for secure and transparent resource management, and the adoption of network slicing for refined resource allocation. Additionally, collaborative paradigms like federated learning promise to decentralize model training further while maintaining user data privacy.

In conclusion, the paper not only synthesizes current research insights but also charts a prospective trajectory for evolving edge computing paradigms. It serves as a significant resource for researchers seeking to explore the intricacies of resource scheduling in edge-centric environments, laying the groundwork for subsequent innovations in pervasive computing.