Multi Player Tracking in Ice Hockey with Homographic Projections (2405.13397v1)

Published 22 May 2024 in cs.CV

Abstract: Multi-Object Tracking (MOT) in ice hockey pursues the combined task of localizing and associating players across a given sequence to maintain their identities. Tracking players from monocular broadcast feeds is an important computer vision problem, offering various downstream analytics and an enhanced viewing experience. However, existing trackers encounter significant difficulties dealing with the occlusions, blur, and agile player movements prevalent in telecast feeds. In this work, we propose a novel tracking approach that formulates MOT as a bipartite graph matching problem infused with homography. We disentangle the positional representations of occluded and overlapping players in the broadcast view by mapping their foot keypoints to an overhead rink template, and encode these projected positions into the graph network. This ensures reliable spatial context for consistent player tracking and unfragmented tracklet prediction. Our results show considerable improvements in both the IDsw and IDF1 metrics on the two available broadcast ice hockey datasets.
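To make the projection-and-association idea in the abstract concrete, below is a minimal sketch, not the authors' implementation: it applies an already-estimated 3x3 image-to-rink homography to player foot keypoints, then matches existing tracks to new detections with plain Hungarian assignment on rink-plane distances. The function names and the use of SciPy's linear_sum_assignment are illustrative assumptions; the paper instead encodes the projected positions into a graph network for association.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def project_to_rink(foot_xy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map (N, 2) foot keypoints from broadcast-image pixels to overhead
    rink-template coordinates using a 3x3 homography H (image -> rink)."""
    pts = np.hstack([foot_xy, np.ones((len(foot_xy), 1))])  # homogeneous coordinates
    proj = pts @ H.T                                         # apply the homography
    return proj[:, :2] / proj[:, 2:3]                        # dehomogenize

def associate(track_xy: np.ndarray, det_xy: np.ndarray):
    """Bipartite matching of existing tracks to new detections, using Euclidean
    distance on the rink plane as the assignment cost (Hungarian algorithm)."""
    cost = np.linalg.norm(track_xy[:, None, :] - det_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Toy usage: two players, identity homography (hypothetical), detections arrive swapped.
H = np.eye(3)
prev = project_to_rink(np.array([[640.0, 512.0], [700.0, 480.0]]), H)
curr = project_to_rink(np.array([[702.0, 478.0], [641.0, 513.0]]), H)
print(associate(prev, curr))  # [(0, 1), (1, 0)] -- identities follow rink positions
```

The sketch only illustrates why a shared overhead frame helps: players whose boxes overlap in the broadcast view remain well separated on the rink plane, so distance-based association stays unambiguous.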

