Erasing, Transforming, and Noising Defense Network for Occluded Person Re-Identification (2307.07187v3)
Abstract: Occlusion poses a significant challenge in person re-identification (re-ID): existing methods that rely on external visual cues require additional computational resources and address only the missing information caused by occlusion. In this paper, we propose a simple yet effective framework, termed the Erasing, Transforming, and Noising Defense Network (ETNDNet), which treats occlusion as a noise disturbance and tackles occluded person re-ID from the perspective of adversarial defense. ETNDNet introduces three strategies. First, we randomly erase the feature map to create an adversarial representation with incomplete information, enabling adversarial learning with the identity loss so that the re-ID system is protected from the disturbance of missing information. Second, we introduce random transformations to simulate the position misalignment caused by occlusion, training the extractor and classifier adversarially to learn representations immune to misaligned information. Third, we perturb the feature map with random values to address the noisy information introduced by obstacles and non-target pedestrians, employing adversarial gaming in the re-ID system to strengthen its resistance to occlusion noise. Without bells and whistles, ETNDNet has three key highlights: (i) it requires no external modules with parameters; (ii) it handles the full range of issues caused by occlusion from obstacles and non-target pedestrians; and (iii) it establishes the first GAN-based adversarial defense paradigm for occluded person re-ID. Extensive experiments on five public datasets demonstrate the effectiveness, superiority, and practicality of the proposed ETNDNet. The code will be released at \url{https://github.com/nengdong96/ETNDNet}.
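The three perturbations described in the abstract operate directly on the backbone's feature map. The following is a minimal numpy sketch of what such operations could look like; the function names, region sizes, and noise scale are illustrative assumptions, not the authors' implementation (which applies these adversarially during training rather than as standalone augmentations).

```python
import numpy as np

def erase(fmap, rng, max_frac=0.5):
    """Zero out a random rectangle of a (C, H, W) feature map to
    simulate the missing information caused by occlusion."""
    c, h, w = fmap.shape
    eh = int(rng.integers(1, max(2, int(h * max_frac))))
    ew = int(rng.integers(1, max(2, int(w * max_frac))))
    y = int(rng.integers(0, h - eh + 1))
    x = int(rng.integers(0, w - ew + 1))
    out = fmap.copy()
    out[:, y:y + eh, x:x + ew] = 0.0
    return out

def transform(fmap, rng, max_shift=2):
    """Randomly shift the feature map spatially to simulate the
    position misalignment that occlusion introduces."""
    dy = int(rng.integers(-max_shift, max_shift + 1))
    dx = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(fmap, shift=(dy, dx), axis=(1, 2))

def noise(fmap, rng, sigma=1.0):
    """Perturb the feature map with random values to mimic the noisy
    information from obstacles and non-target pedestrians."""
    return fmap + rng.normal(0.0, sigma, size=fmap.shape)
```

In the adversarial-defense framing, each perturbed map would be fed to the identity classifier alongside the clean map, and the extractor/classifier pair is trained so that predictions remain stable under all three disturbances.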