RayNet: A Simulation Platform for Developing Reinforcement Learning-Driven Network Protocols (2302.04519v2)

Published 9 Feb 2023 in cs.NI, cs.AI, and cs.DC

Abstract: Reinforcement Learning (RL) has gained significant momentum in the development of network protocols. However, RL-based protocols are still in their infancy, and substantial research is required to build deployable solutions. Developing a protocol based on RL is a complex and challenging process that involves several model design decisions and requires significant training and evaluation in real and simulated network topologies. Network simulators offer an efficient training environment for RL-based protocols, because they are deterministic and can run in parallel. In this paper, we introduce RayNet, a scalable and adaptable simulation platform for the development of RL-based network protocols. RayNet integrates OMNeT++, a fully programmable network simulator, with Ray/RLlib, a scalable training platform for distributed RL. RayNet facilitates the methodical development of RL-based network protocols so that researchers can focus on the problem at hand and not on implementation details of the learning aspect of their research. We developed a simple RL-based congestion control approach as a proof of concept showcasing that RayNet can be a valuable platform for RL-based research in computer networks, enabling scalable training and evaluation. We compared RayNet with ns3-gym, a platform with similar objectives to RayNet, and showed that RayNet performs better in terms of how fast agents can collect experience in RL environments.
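
The abstract describes RayNet as coupling an OMNeT++ simulation backend with Ray/RLlib for scalable, parallel experience collection. As a rough illustration of what the RLlib side of such a setup can look like, the sketch below defines a hypothetical Gymnasium environment (stubbed with random transitions rather than an actual OMNeT++ simulation) and trains a PPO agent with parallel rollout workers. The environment name, observation/action spaces, and reward are illustrative assumptions, not RayNet's actual API, and exact RLlib configuration method names vary across Ray versions.

```python
# Minimal sketch (not RayNet's actual API): a stubbed congestion-control
# environment trained with Ray/RLlib PPO, using parallel rollout workers
# for experience collection. Assumes Ray 2.x and Gymnasium.
import gymnasium as gym
import numpy as np
from ray.rllib.algorithms.ppo import PPOConfig


class CongestionControlEnv(gym.Env):
    """Hypothetical environment; in RayNet the transitions would be
    produced by an OMNeT++ simulation, here they are random stubs."""

    def __init__(self, config=None):
        # Example observation: normalized RTT, throughput, loss rate.
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        # Example action: a congestion-window adjustment in [-1, 1].
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        obs = self.observation_space.sample()
        reward = float(-np.abs(action).sum())  # placeholder reward signal
        return obs, reward, False, False, {}   # obs, reward, terminated, truncated, info


config = (
    PPOConfig()
    .environment(CongestionControlEnv)
    .rollouts(num_rollout_workers=2)  # parallel experience collection
)
algo = config.build()
for i in range(3):
    algo.train()
    print("training iteration", i, "done")
```

In RayNet, the step/reset logic of such an environment is backed by the simulator, which is what makes the speed of experience collection (the metric used in the comparison with ns3-gym) the relevant performance measure.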
