Safety-Aware Multi-Agent Learning for Dynamic Network Bridging (2404.01551v2)

Published 2 Apr 2024 in cs.MA, cs.AI, cs.LG, cs.NI, cs.SY, and eess.SY

Abstract: Addressing complex cooperative tasks in safety-critical environments poses significant challenges for multi-agent systems, especially under conditions of partial observability. We focus on a dynamic network bridging task, where agents must learn to maintain a communication path between two moving targets. To ensure safety during training and deployment, we integrate a control-theoretic safety filter that enforces collision avoidance through local setpoint updates. We develop and evaluate multi-agent reinforcement learning with safety-informed message passing, showing that encoding safety-filter activations as edge-level features improves coordination. The results suggest that local safety enforcement and decentralized learning can be effectively combined in distributed multi-agent tasks.
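The abstract describes two coupled ideas: a local safety filter that corrects an agent's setpoint to avoid collisions, and message passing that exposes the filter's activations as edge-level features. A minimal sketch of how these pieces could fit together is below. Note this is an illustration under assumed conventions, not the authors' implementation: the separation threshold, the setpoint-correction rule, and the `edge_feature` layout are all hypothetical.

```python
import math

MIN_SEP = 1.0  # assumed minimum safe separation between agents (illustrative)

def safety_filter(pos_i, pos_j, setpoint_i):
    """Return a possibly corrected setpoint for agent i relative to
    neighbor j, plus a flag indicating whether the filter activated."""
    dx, dy = pos_j[0] - pos_i[0], pos_j[1] - pos_i[1]
    dist = math.hypot(dx, dy)
    if dist >= MIN_SEP:
        return setpoint_i, 0.0  # safe: keep the learned setpoint
    # Unsafe: locally push the setpoint away from the neighbor,
    # proportionally to how deep the violation is.
    scale = (MIN_SEP - dist) / max(dist, 1e-6)
    corrected = (setpoint_i[0] - dx * scale, setpoint_i[1] - dy * scale)
    return corrected, 1.0  # activation becomes an edge-level signal

def edge_feature(pos_i, pos_j, setpoint_i):
    """Hypothetical edge feature for a message-passing GNN: relative
    position of the neighbor plus the safety-filter activation flag."""
    _, activated = safety_filter(pos_i, pos_j, setpoint_i)
    return [pos_j[0] - pos_i[0], pos_j[1] - pos_i[1], activated]
```

In this sketch, agents that are far apart pass their learned setpoints through unchanged (flag 0.0), while close pairs trigger a local correction (flag 1.0); a graph neural network consuming `edge_feature` vectors would then see which neighborhoods are under active safety enforcement, which is the coordination signal the paper reports as helpful.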

