Collaborative Perception for Connected and Autonomous Driving: Challenges, Possible Solutions and Opportunities (2401.01544v1)
Abstract: Autonomous driving has attracted significant attention from both academia and industry and is expected to deliver a safer, more efficient driving system. However, current autonomous driving systems are mostly based on a single vehicle, a limitation that still poses threats to driving safety. Collaborative perception among connected and autonomous vehicles (CAVs) offers a promising way to overcome these limitations. In this article, we first identify the challenges of collaborative perception, such as data-sharing asynchrony, data volume, and pose errors. We then discuss possible solutions to these challenges using various technologies and elaborate on the associated research opportunities. Furthermore, we propose a channel-aware collaborative perception framework that addresses communication efficiency and latency problems by dynamically adjusting the communication graph and minimizing latency, thereby improving perception performance while increasing communication efficiency. Finally, we conduct experiments to demonstrate the effectiveness of the proposed scheme.
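As a rough illustration of the idea described in the abstract (not the authors' actual algorithm), a channel-aware scheme might prune the communication graph so that only collaborators whose V2V links can deliver their feature maps within a latency budget are kept. All names and parameters below are hypothetical:

```python
# Hypothetical sketch: keep only collaborators whose channel rate lets their
# shared feature map arrive within the latency budget, preferring those with
# higher expected perception gain.

def select_collaborators(candidates, feature_bits, latency_budget_s):
    """candidates: list of (vehicle_id, channel_rate_bps, perception_gain)."""
    selected = []
    # Consider higher-gain collaborators first.
    for vid, rate_bps, gain in sorted(candidates, key=lambda c: -c[2]):
        tx_delay = feature_bits / rate_bps  # transmission latency on the V2V link
        if tx_delay <= latency_budget_s:
            selected.append(vid)
    return selected

# Example: an 8-Mb feature map and a 0.5 s latency budget.
cands = [("cav1", 20e6, 0.9), ("cav2", 2e6, 0.5), ("cav3", 50e6, 0.7)]
print(select_collaborators(cands, 8e6, 0.5))  # cav2's link is too slow
```

In practice the paper's framework would also account for queuing and processing delays and re-evaluate the graph as channel conditions change; this sketch only shows the link-rate filtering step.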
Authors: Senkang Hu, Zhengru Fang, Yiqin Deng, Xianhao Chen, Yuguang Fang