
Two-level Closed Loops for RAN Slice Resources Management Serving Flying and Ground-based Cars (2208.12344v1)

Published 25 Aug 2022 in cs.NI

Abstract: Flying and ground-based cars require various services such as autonomous driving, remote pilot, infotainment, and remote diagnosis. Each service requires specific Quality of Service (QoS) and network features. Therefore, network slicing can be a solution to fulfill the requirements of various services. Some services, such as infotainment, may have similar requirements when serving flying and ground-based cars, so some slices can serve both kinds of cars. However, when network slice resource sharing is too aggressive, slices cannot meet QoS requirements: resource under-provisioning causes QoS violations, and resource over-provisioning causes resource under-utilization. We propose two closed loops for managing RAN slice resources for cars to address these challenges. First, we present an auction mechanism for allocating Resource Blocks (RBs) to the tenants who provide services to the cars using slices. Second, we design one closed loop that maps slices and services of tenants to virtual Open Distributed Units (vO-DUs) and assigns RBs to vO-DUs for management purposes. Third, we design another closed loop for intra-slice RB scheduling to serve cars. Fourth, we present a reward function that interconnects these two closed loops to satisfy the time-varying demands of cars at each slice while meeting QoS requirements in terms of delay. Finally, we design a distributed deep reinforcement learning approach to maximize the formulated reward function. The simulation results show that our approach satisfies more than 90% of vO-DU resource constraints and network slice requirements.
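To make the two-loop coupling concrete, below is a minimal, hypothetical sketch of a reward that links an inter-slice loop (assigning RBs to vO-DUs) with an intra-slice loop (meeting per-slice delay budgets). All function names, weights, and the exact reward shape are illustrative assumptions; the paper's actual formulation may differ.

```python
# Hypothetical sketch: a joint reward for two closed loops.
# Loop 1 assigns RBs to vO-DUs (capacity must not be exceeded);
# loop 2 schedules RBs within a slice (delay budget must be met).
# Weights w_util and w_qos are assumed, not taken from the paper.

def slice_reward(rb_assigned, rb_capacity, delay_ms, delay_budget_ms,
                 w_util=0.5, w_qos=0.5):
    """Reward for one slice: favor high RB utilization without
    over-provisioning, and reward meeting the delay budget."""
    if rb_assigned > rb_capacity:
        return -1.0  # infeasible: vO-DU capacity constraint violated
    utilization = rb_assigned / rb_capacity           # in [0, 1]
    qos_ok = 1.0 if delay_ms <= delay_budget_ms else 0.0
    return w_util * utilization + w_qos * qos_ok

def total_reward(slices):
    """Sum of per-slice rewards; DRL agents in both loops would
    jointly maximize this objective."""
    return sum(slice_reward(**s) for s in slices)

slices = [
    {"rb_assigned": 40, "rb_capacity": 50,
     "delay_ms": 8, "delay_budget_ms": 10},   # feasible, within budget
    {"rb_assigned": 60, "rb_capacity": 50,
     "delay_ms": 5, "delay_budget_ms": 10},   # over-provisioned
]
print(total_reward(slices))
```

The penalty for exceeding vO-DU capacity is what couples the loops: the intra-slice scheduler cannot earn reward from RBs the inter-slice loop has not validly assigned, so both agents are steered toward the same feasible operating point.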

Citations (5)
