RFRL Gym: A Reinforcement Learning Testbed for Cognitive Radio Applications (2401.05406v1)
Abstract: Radio Frequency Reinforcement Learning (RFRL) is anticipated to be a widely applicable technology in the next generation of wireless communication systems, particularly 6G and next-generation military communications. Our research therefore focuses on developing a tool that promotes the development of RFRL techniques leveraging spectrum sensing. In particular, the tool targets two cognitive radio applications: dynamic spectrum access and jamming. Training and testing reinforcement learning (RL) algorithms for these applications requires a simulation environment that models the conditions an agent will encounter within the Radio Frequency (RF) spectrum. This paper presents such an environment, herein referred to as the RFRL Gym. Through the RFRL Gym, users can design their own scenarios to model what an RL agent may encounter within the RF spectrum and experiment with different spectrum sensing techniques. Additionally, the RFRL Gym environment subclasses OpenAI Gym, enabling the use of third-party ML/RL libraries. We plan to open-source this codebase so that other researchers can use the RFRL Gym to test their own scenarios and RL algorithms, ultimately advancing RL research in the wireless communications domain. This paper describes in further detail the components of the Gym, results from example scenarios, and plans for future additions.

Index Terms: machine learning, reinforcement learning, wireless communications, dynamic spectrum access, OpenAI Gym
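The abstract states that the RFRL Gym subclasses OpenAI Gym so that third-party RL libraries can be used, but it does not define the environment's actual interface. As a rough illustration only, the sketch below shows a hypothetical toy `gym.Env` for a dynamic-spectrum-access style task (avoid the channel a sweeping interferer occupies); the class name, channel model, and reward shaping are invented for this example and assume the classic Gym `reset`/`step` signatures, not the RFRL Gym's real API.

```python
import gym
from gym import spaces


class SimpleSpectrumEnv(gym.Env):
    """Hypothetical toy environment in the spirit of dynamic spectrum access:
    the agent picks one of N channels each step and is rewarded for avoiding
    the channel currently occupied by a sweeping interferer. This is NOT the
    RFRL Gym API, only a minimal gym.Env sketch of the idea."""

    def __init__(self, num_channels=4, episode_len=100):
        super().__init__()
        self.num_channels = num_channels
        self.episode_len = episode_len
        # Observation: the channel index the interferer occupied last step.
        self.observation_space = spaces.Discrete(num_channels)
        # Action: the channel the agent transmits on this step.
        self.action_space = spaces.Discrete(num_channels)
        self.interferer_channel = 0
        self.steps = 0

    def reset(self):
        self.interferer_channel = 0
        self.steps = 0
        return self.interferer_channel  # classic Gym API: observation only

    def step(self, action):
        # The interferer sweeps deterministically through the channels.
        self.interferer_channel = (self.interferer_channel + 1) % self.num_channels
        reward = 1.0 if action != self.interferer_channel else -1.0
        self.steps += 1
        done = self.steps >= self.episode_len
        return self.interferer_channel, reward, done, {}


if __name__ == "__main__":
    env = SimpleSpectrumEnv()
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = env.action_space.sample()  # replace with a trained RL policy
        obs, reward, done, _ = env.step(action)
        total += reward
    print("Episode return with a random policy:", total)
```

Because the environment follows the standard Gym interface, the same class could in principle be trained with any Gym-compatible library (e.g., Stable Baselines) by swapping the random-action loop for a learned policy, which is the interoperability benefit the abstract attributes to subclassing OpenAI Gym.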
Authors: Daniel Rosen, Illa Rochez, Caleb McIrvin, Joshua Lee, Kevin D'Alessandro, Max Wiecek, Nhan Hoang, Ramzy Saffarini, Sam Philips, Vanessa Jones, Will Ivey, Zavier Harris-Smart, Zavion Harris-Smart, Zayden Chin, Amos Johnson, Alyse M. Jones, William C. Headley