RLOps: Development Life-cycle of Reinforcement Learning Aided Open RAN (2111.06978v2)
Abstract: Radio access network (RAN) technologies continue to evolve, with Open RAN gaining the most recent momentum. In the O-RAN specifications, the RAN intelligent controllers (RICs) are software-defined orchestration and automation functions for the intelligent management of the RAN. This article introduces principles for machine learning (ML), and in particular reinforcement learning (RL), applications in the O-RAN stack. Furthermore, we review the state-of-the-art research in wireless networks and cast it onto the RAN framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of the challenges faced by ML/RL models throughout the development life-cycle, from system specification to production deployment (data acquisition, model design, testing, management, etc.). To address these challenges, we integrate a set of existing MLOps principles with the unique characteristics that arise when RL agents are considered. This paper discusses a systematic model development, testing and validation life-cycle, termed RLOps. We discuss the fundamental parts of RLOps, which include model specification, development, production environment serving, operations monitoring, and safety/security. Based on these principles, we propose best practices for RLOps to achieve an automated and reproducible model development process. Finally, a holistic data analytics platform rooted in the O-RAN deployment is designed and implemented, aiming to embrace and fulfil the aforementioned principles and best practices of RLOps.
- Peizheng Li (34 papers)
- Jonathan Thomas (4 papers)
- Xiaoyang Wang (134 papers)
- Ahmed Khalil (7 papers)
- Abdelrahim Ahmad (5 papers)
- Rui Inacio (5 papers)
- Shipra Kapoor (8 papers)
- Arjun Parekh (7 papers)
- Angela Doufexi (21 papers)
- Arman Shojaeifard (23 papers)
- Robert Piechocki (30 papers)