
Abstract

AI-driven zero-touch network slicing (NS) is a new paradigm enabling the automation of resource management and orchestration (MANO) in multi-tenant beyond 5G (B5G) networks. In this paper, we tackle the problem of cloud-RAN (C-RAN) joint slice admission control and resource allocation by first formulating it as a Markov decision process (MDP). We then invoke an advanced continuous deep reinforcement learning (DRL) method called twin delayed deep deterministic policy gradient (TD3) to solve it. To this end, we introduce a multi-objective approach to make the central unit (CU) learn how to reconfigure computing resources autonomously while minimizing latency, energy consumption and virtual network function (VNF) instantiation cost for each slice. Moreover, we build a complete 5G C-RAN network slicing environment using the OpenAI Gym toolkit, which, thanks to its standardized interface, can be easily tested with different DRL schemes. Finally, we present extensive experimental results to showcase the gains of TD3 and of the adopted multi-objective strategy in terms of achieved slice admission success rate, latency, energy saving and CPU utilization.
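The abstract mentions a 5G C-RAN slicing environment built with the OpenAI Gym toolkit and a multi-objective reward spanning latency, energy consumption and VNF instantiation cost. The sketch below is a minimal, hypothetical Gym environment illustrating that setup; the class name CranSlicingEnv, the state layout, the three cost proxies and the weights w_latency, w_energy and w_vnf are assumptions for illustration, not the paper's actual models.

import numpy as np
import gym
from gym import spaces


class CranSlicingEnv(gym.Env):
    """Toy C-RAN slicing environment with a multi-objective reward (illustrative only).

    State  : per-slice traffic demand and current CPU share, all in [0, 1].
    Action : CPU share requested for each slice (continuous, in [0, 1]).
    Reward : negative weighted sum of latency, energy and VNF-instantiation
             cost proxies, so that maximizing return minimizes all three.
    """

    def __init__(self, num_slices=3, w_latency=1.0, w_energy=0.5, w_vnf=0.2):
        super().__init__()
        self.num_slices = num_slices
        self.weights = np.array([w_latency, w_energy, w_vnf], dtype=np.float32)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(2 * num_slices,), dtype=np.float32)
        self.action_space = spaces.Box(0.0, 1.0, shape=(num_slices,), dtype=np.float32)
        self.demand = np.zeros(num_slices, dtype=np.float32)
        self.cpu = np.zeros(num_slices, dtype=np.float32)

    def _obs(self):
        return np.concatenate([self.demand, self.cpu]).astype(np.float32)

    def reset(self):
        self.demand = np.random.uniform(0.1, 1.0, self.num_slices).astype(np.float32)
        self.cpu = np.zeros(self.num_slices, dtype=np.float32)
        return self._obs()

    def step(self, action):
        prev_cpu = self.cpu.copy()
        # Normalize requested shares so they never exceed the CU's CPU budget.
        action = np.clip(np.asarray(action, dtype=np.float32), 0.0, 1.0)
        self.cpu = action / max(float(action.sum()), 1.0)

        # Crude stand-ins for the three objectives (placeholders, not the paper's models).
        latency = float(np.mean(self.demand / (self.cpu + 0.1)))   # delay grows when CPU is scarce
        energy = float(self.cpu.sum())                             # power grows with total CPU usage
        vnf_cost = float(np.sum(self.cpu > prev_cpu + 0.05))       # scaling a slice up spawns a VNF

        reward = -float(self.weights @ np.array([latency, energy, vnf_cost], dtype=np.float32))

        # New traffic demands arrive for the next decision epoch; episodes never terminate here.
        self.demand = np.random.uniform(0.1, 1.0, self.num_slices).astype(np.float32)
        return self._obs(), reward, False, {}

Because the environment exposes the classic Gym reset/step interface, any continuous-control DRL agent, including an off-the-shelf TD3 implementation, can be plugged in without modification, which is the portability benefit the abstract highlights.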

