Deep Reinforcement Learning for Online Latency Aware Workload Offloading in Mobile Edge Computing (2209.05191v2)

Published 30 Aug 2022 in cs.DC

Abstract: Owing to the resource-constrained nature of Internet of Things (IoT) devices, offloading tasks from IoT devices to nearby mobile edge computing (MEC) servers can not only save the energy of IoT devices but also reduce the response time of executing the tasks. However, offloading a task to the nearest MEC server may not be the optimal solution due to the limited computing resources of that server. Thus, jointly optimizing the offloading decision and resource management is critical, but has yet to be explored. Here, the offloading decision refers to where to offload a task, and resource management determines how much computing resource in an MEC server is allocated to a task. By considering the waiting time of a task in the communication and computing queues (which is ignored by most existing works) as well as task priorities, we propose the Deep rEinforcement learning based offloading deCision and rEsource managemeNT (DECENT) algorithm, which leverages the advantage actor-critic method to optimize the offloading decision and computing resource allocation for each arriving task in real time, such that the cumulative weighted response time is minimized. The performance of DECENT is demonstrated via different experiments.
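The abstract frames DECENT as an advantage actor-critic agent that, for each arriving task, jointly picks an offloading target and a computing-resource allocation so as to minimize the cumulative weighted response time. The sketch below illustrates that structure in PyTorch; the state dimension, the number of servers, the discrete allocation levels, the network sizes, and the reward shaping (negative weighted response time) are assumptions for illustration, not the paper's actual design.

```python
# A minimal advantage actor-critic (A2C) sketch for a joint "where to offload /
# how much CPU to allocate" decision per arriving task. All dimensions, the
# state features, and the reward shaping are illustrative assumptions and do
# not reproduce the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SERVERS = 4        # assumed number of candidate MEC servers
NUM_ALLOC_LEVELS = 5   # assumed discrete CPU-share levels per server
STATE_DIM = 16         # assumed size of the task + queue-state feature vector


class ActorCritic(nn.Module):
    """Shared trunk, two actor heads (server choice, allocation level), one critic."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU())
        self.server_head = nn.Linear(128, NUM_SERVERS)      # where to offload
        self.alloc_head = nn.Linear(128, NUM_ALLOC_LEVELS)  # how much to allocate
        self.value_head = nn.Linear(128, 1)                 # critic V(s)

    def forward(self, state):
        h = self.trunk(state)
        return (F.softmax(self.server_head(h), dim=-1),
                F.softmax(self.alloc_head(h), dim=-1),
                self.value_head(h))


def select_action(model, state):
    """Sample an offloading target and an allocation level for the current task."""
    server_probs, alloc_probs, value = model(state)
    server_dist = torch.distributions.Categorical(server_probs)
    alloc_dist = torch.distributions.Categorical(alloc_probs)
    server_a, alloc_a = server_dist.sample(), alloc_dist.sample()
    log_prob = server_dist.log_prob(server_a) + alloc_dist.log_prob(alloc_a)
    return (server_a.item(), alloc_a.item()), log_prob, value


def a2c_update(optimizer, model, log_prob, value, reward, next_state, gamma=0.99):
    """One-step A2C update; reward is assumed to be the negative weighted response time."""
    with torch.no_grad():
        _, _, next_value = model(next_state)
    advantage = reward + gamma * next_value - value   # TD error as the advantage
    actor_loss = -(log_prob * advantage.detach())
    critic_loss = advantage.pow(2)
    loss = (actor_loss + 0.5 * critic_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


if __name__ == "__main__":
    model = ActorCritic()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    state = torch.randn(1, STATE_DIM)         # placeholder task/queue features
    (server, level), log_prob, value = select_action(model, state)
    reward = torch.tensor([[-2.3]])           # e.g. negative weighted response time
    next_state = torch.randn(1, STATE_DIM)    # features when the next task arrives
    a2c_update(optimizer, model, log_prob, value, reward, next_state)
    print(f"offload to server {server}, allocation level {level}")
```

The two categorical heads mirror the two coupled decisions named in the abstract (offloading target and resource allocation), and the one-step TD error stands in as the advantage estimate; the paper's actual state, action, and reward definitions may differ.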

Authors (7)
  1. Zeinab Akhavan (1 paper)
  2. Mona Esmaeili (4 papers)
  3. Babak Badnava (8 papers)
  4. Mohammad Yousefi (1 paper)
  5. Xiang Sun (26 papers)
  6. Michael Devetsikiotis (7 papers)
  7. Payman Zarkesh-Ha (3 papers)
Citations (7)
