Perceptual Factors for Environmental Modeling in Robotic Active Perception (2309.10620v2)
Abstract: Accurately assessing the potential value of new sensor observations is a critical aspect of planning for active perception. This task is particularly challenging when reasoning about high-level scene understanding using measurements from vision-based neural networks. Because these networks reason over appearance, their measurements are susceptible to several environmental effects, such as the presence of occluders, variations in lighting conditions, and redundancy of information due to the similar appearance of nearby viewpoints. To address this, we propose a new active perception framework that incorporates an arbitrary number of perceptual effects in planning and fusion. Our method models the correlation with the environment through a set of general functions, termed perceptual factors, which are used to construct a perceptual map quantifying the aggregated influence of the environment on candidate viewpoints. This information is seamlessly incorporated into the planning and fusion processes by adjusting the uncertainty associated with measurements to weigh their contributions. We evaluate our perceptual maps in a simulated environment that reproduces environmental conditions common in robotics applications. Our results show that, by accounting for environmental effects within our perceptual maps, we improve state estimation: the framework selects better viewpoints and correctly scales measurement noise when observations are degraded by environmental factors. We furthermore deploy our approach on a ground robot to showcase its applicability to real-world active perception missions.
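To make the abstract's pipeline concrete, below is a minimal sketch of how perceptual factors could be aggregated into a per-viewpoint reliability score and then used to inflate measurement noise during fusion. Everything here is an assumption for illustration: the specific factor forms (`occlusion_factor`, `lighting_factor`), the multiplicative aggregation, and the scalar Kalman-style fusion are placeholders, not the paper's actual models.

```python
import numpy as np

# Illustrative perceptual factors: each maps a candidate viewpoint (here a
# 3D camera position) to a reliability score in [0, 1], where 1 means the
# measurement is unaffected by the environment. The functional forms below
# are hypothetical stand-ins for the paper's general factor functions.

def occlusion_factor(viewpoint, occluder=np.array([1.0, 0.0, 0.0])):
    """Lower reliability the closer the viewpoint is to a known occluder."""
    dist = np.linalg.norm(viewpoint - occluder)
    return float(np.clip(dist / 2.0, 0.0, 1.0))

def lighting_factor(viewpoint, sun_dir=np.array([0.0, 0.0, -1.0])):
    """Penalize viewpoints whose view direction aligns with the light source."""
    view_dir = -viewpoint / (np.linalg.norm(viewpoint) + 1e-9)
    alignment = max(float(np.dot(view_dir, sun_dir)), 0.0)  # 1 = facing the light
    return float(np.clip(1.0 - alignment, 0.1, 1.0))

def perceptual_map(viewpoint, factors):
    """Aggregate an arbitrary number of factors into one reliability score.
    A product is one simple aggregation choice; the paper's rule may differ."""
    score = 1.0
    for factor in factors:
        score *= factor(viewpoint)
    return score

def fuse(prior_mean, prior_var, z, meas_var, reliability):
    """Scalar Bayesian fusion with reliability-inflated measurement noise:
    an unreliable viewpoint contributes with a larger effective variance."""
    eff_var = meas_var / max(reliability, 1e-6)   # inflate noise when unreliable
    gain = prior_var / (prior_var + eff_var)      # Kalman-style gain
    return prior_mean + gain * (z - prior_mean), (1.0 - gain) * prior_var

# Planning: prefer the candidate viewpoint with the highest aggregated score,
# then fuse its (simulated) measurement with the reliability-weighted noise.
factors = [occlusion_factor, lighting_factor]
candidates = [np.array([2.0, 1.0, 1.5]), np.array([0.9, 0.1, 0.2])]
best = max(candidates, key=lambda v: perceptual_map(v, factors))
mean, var = fuse(prior_mean=0.0, prior_var=1.0, z=0.8, meas_var=0.2,
                 reliability=perceptual_map(best, factors))
print(best, mean, var)
```

The key design point the sketch illustrates is that environmental effects never gate measurements in or out: a degraded viewpoint still contributes, but with variance inflated in proportion to its aggregated perceptual-factor score.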