Online Multi-Task Offloading for Semantic-Aware Edge Computing Systems (2407.11018v1)

Published 28 Jun 2024 in cs.NI and eess.SP

Abstract: Mobile edge computing (MEC) provides low-latency offloading solutions for computationally intensive tasks, effectively improving the computing efficiency and battery life of mobile devices. However, for data-intensive tasks or scenarios with limited uplink bandwidth, network congestion can occur when many nodes offload simultaneously, increasing transmission latency and degrading task performance. In this paper, we propose a semantic-aware multi-modal task offloading framework to address the challenges posed by limited uplink bandwidth. By introducing a semantic extraction factor, we balance the trade-off among transmission latency, computation energy consumption, and task performance. To measure the offloading performance of multi-modal tasks, we design a unified and fair quality of experience (QoE) metric that accounts for execution latency, energy consumption, and task performance. Finally, we formulate the optimization problem as a Markov decision process (MDP) and exploit the multi-agent proximal policy optimization (MAPPO) reinforcement learning algorithm to jointly optimize the semantic extraction factor, communication resources, and computing resources to maximize the overall QoE. Experimental results show that the proposed method reduces execution latency and energy consumption by 18.1% and 12.9%, respectively, compared with the semantic-unaware approach. Moreover, the proposed approach can be easily extended to models with different user preferences.
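The abstract does not give the exact form of the QoE metric or the latency/energy model. As a rough, hypothetical illustration of the trade-off it describes, the sketch below scores a single task with a weighted combination of latency, energy, and a task-performance proxy, all driven by a semantic extraction factor. The functional forms, weights, and the `TaskProfile` fields are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch: a semantic extraction factor rho in (0, 1] shrinks the
# payload that must be uploaded (lowering transmission latency) while the
# extraction itself costs local computation and may reduce task performance.
# All constants and functional forms here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TaskProfile:
    data_bits: float          # raw task data size (bits)
    uplink_rate: float        # achievable uplink rate (bits/s)
    cycles_per_bit: float     # local semantic-extraction cost (CPU cycles/bit)
    cpu_freq: float           # local CPU frequency (cycles/s)
    energy_per_cycle: float   # energy per CPU cycle (J/cycle)


def qoe(task: TaskProfile, rho: float,
        w_latency: float = 0.4, w_energy: float = 0.3, w_perf: float = 0.3) -> float:
    """Illustrative QoE: weighted combination of (negated) latency, (negated)
    energy, and a performance proxy, as functions of the extraction factor rho."""
    # Semantic extraction runs locally over the full raw payload.
    extract_latency = task.data_bits * task.cycles_per_bit / task.cpu_freq
    extract_energy = task.data_bits * task.cycles_per_bit * task.energy_per_cycle
    # Only the extracted semantic payload (fraction rho) is offloaded.
    tx_latency = rho * task.data_bits / task.uplink_rate
    # Assumed monotone performance proxy: keeping more semantics helps the task.
    task_perf = rho
    latency = extract_latency + tx_latency
    return -w_latency * latency - w_energy * extract_energy + w_perf * task_perf


if __name__ == "__main__":
    task = TaskProfile(data_bits=8e6, uplink_rate=2e6,
                       cycles_per_bit=50.0, cpu_freq=1e9,
                       energy_per_cycle=1e-9)
    # Sweep rho to see how the latency/performance balance shifts.
    for rho in (0.2, 0.5, 0.8, 1.0):
        print(f"rho={rho:.1f}  QoE={qoe(task, rho):+.4f}")
```

In the paper, choosing rho together with communication and computing resources for many users is what the MAPPO agents optimize; this sketch only shows the single-task trade-off such a policy would be navigating.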
