A Twin Delayed Deep Deterministic Policy Gradient Algorithm for Autonomous Ground Vehicle Navigation via Digital Twin Perception Awareness (2403.15067v1)

Published 22 Mar 2024 in cs.RO, cs.SY, and eess.SY

Abstract: Autonomous ground vehicle (UGV) navigation has the potential to revolutionize transportation by increasing accessibility for disabled people and by ensuring safety and convenience of use. However, UGVs require extensive and efficient testing and evaluation before they can be accepted for public use. This testing is mostly done in simulators, which introduces a sim-to-real transfer gap. In this paper, we propose a digital twin perception awareness approach for controlling robot navigation without prior creation of the virtual twin (VT) environment state. To achieve this, we develop a twin delayed deep deterministic policy gradient (TD3) algorithm that ensures collision avoidance and goal-based path planning. We demonstrate the performance of our approach under different environment dynamics and show that it efficiently navigates the robot to its desired destination while safely avoiding obstacles using the information received from the LIDAR sensor mounted on the robot. Our approach narrows the sim-to-real transfer gap and contributes to the adoption of UGVs in the real world. We validate our approach in simulation and in a real-world application in an office space.
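
For context, the sketch below illustrates the core TD3 update the abstract refers to: clipped double-Q learning with twin critics, target policy smoothing, and delayed actor/target updates. It is a generic, minimal sketch rather than the authors' implementation; the state and action dimensions (e.g. LIDAR beams plus goal information, linear/angular velocity commands), network sizes, and hyperparameters are assumptions.

```python
# Minimal TD3 update sketch (illustrative; dimensions and hyperparameters are assumptions).
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )
    def forward(self, x):
        return self.net(x)

# Assumed layout: state = LIDAR readings + goal distance/heading, action = (v, w) commands.
state_dim, action_dim, max_action = 24, 2, 1.0
actor = MLP(state_dim, action_dim)
critic1 = MLP(state_dim + action_dim, 1)   # twin critics reduce Q-value overestimation
critic2 = MLP(state_dim + action_dim, 1)
actor_t, critic1_t, critic2_t = map(copy.deepcopy, (actor, critic1, critic2))

actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=3e-4)
gamma, tau, policy_noise, noise_clip, policy_delay = 0.99, 0.005, 0.2, 0.5, 2

def td3_update(step, s, a, r, s2, done):
    # Target policy smoothing: perturb the target action with clipped noise.
    with torch.no_grad():
        noise = (torch.randn_like(a) * policy_noise).clamp(-noise_clip, noise_clip)
        a2 = (torch.tanh(actor_t(s2)) * max_action + noise).clamp(-max_action, max_action)
        q_target = torch.min(critic1_t(torch.cat([s2, a2], 1)),
                             critic2_t(torch.cat([s2, a2], 1)))
        y = r + gamma * (1 - done) * q_target
    # Clipped double-Q critic update.
    q1 = critic1(torch.cat([s, a], 1))
    q2 = critic2(torch.cat([s, a], 1))
    critic_loss = nn.functional.mse_loss(q1, y) + nn.functional.mse_loss(q2, y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Delayed actor and target-network updates.
    if step % policy_delay == 0:
        pi = torch.tanh(actor(s)) * max_action
        actor_loss = -critic1(torch.cat([s, pi], 1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for net, net_t in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)
```

In a navigation setting such as the one described above, this update would be driven by transitions collected from the robot (or its digital twin), with the reward shaped for goal progress and collision avoidance; those reward details are not specified here.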

Authors (7)
  1. Kabirat Olayemi (3 papers)
  2. Mien Van (15 papers)
  3. Yuzhu Sun (13 papers)
  4. Jack Close (3 papers)
  5. Nguyen Minh Nhat (3 papers)
  6. Stephen McIlvanna (8 papers)
  7. Sean McLoone (8 papers)
Citations (1)
