Robust and Decentralized Reinforcement Learning for UAV Path Planning in IoT Networks (2312.06250v1)

Published 11 Dec 2023 in eess.SY and cs.SY

Abstract: Unmanned aerial vehicle (UAV)-based networks and the Internet of Things (IoT) are being considered as integral components of current and next-generation wireless networks. In particular, UAVs can provide IoT devices with seamless connectivity and high coverage, and this can be accomplished through effective UAV path planning. In this article, we study robust and decentralized UAV path planning for data collection in IoT networks in the presence of other non-cooperative UAVs and adversarial jamming attacks. We address three practical scenarios: single-UAV path planning, UAV swarm path planning, and single-UAV path planning in the presence of an intelligent mobile UAV jammer. We advocate a reinforcement learning framework for UAV path planning in these three scenarios under practical constraints. The simulation results demonstrate that with learning-based path planning, the UAVs can complete their missions with high success rates and data collection rates. In addition, the UAVs can adapt and execute different trajectories as a defensive measure against the intelligent jammer.
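As a rough illustration of the learning-based path-planning idea in the abstract, the sketch below trains a tabular Q-learning agent to steer a single UAV across a small grid toward an IoT device. The grid size, reward values, and hyperparameters are illustrative assumptions chosen for this sketch; the paper itself addresses a richer setting (swarms, jammers, practical constraints) and a deep RL framework rather than a tabular method.

```python
import random

# Hypothetical simplified setting: one UAV on a 5x5 grid must reach an
# IoT device cell to collect its data. Tabular Q-learning sketch only;
# not the paper's actual (deep) RL framework or environment.
GRID = 5
START = (0, 0)
DEVICE = (4, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action; moves off the grid leave the UAV in place."""
    x, y = state[0] + action[0], state[1] + action[1]
    if 0 <= x < GRID and 0 <= y < GRID:
        state = (x, y)
    if state == DEVICE:
        return state, 10.0, True   # data collected: reward, episode ends
    return state, -1.0, False      # per-step cost encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over (state, action) pairs."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s, done, t = START, False, 0
        while not done and t < 100:
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            target = r + (0.0 if done else gamma * best_next)
            q[(s, a)] = (1 - alpha) * q.get((s, a), 0.0) + alpha * target
            s, t = s2, t + 1
    return q

def greedy_path(q, max_steps=50):
    """Roll out the greedy policy from START; returns the visited cells."""
    s, path = START, [START]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

After training, `greedy_path(train())` traces a short trajectory from the start cell to the device cell; the per-step penalty is what pushes the learned policy toward short paths, mirroring (in miniature) the mission-completion objective described in the abstract.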

